Changeset cfc6f3c7 in sasview
- Timestamp:
- Apr 5, 2017 5:21:31 AM
- Branches:
- master, ESS_GUI, ESS_GUI_Docs, ESS_GUI_batch_fitting, ESS_GUI_bumps_abstraction, ESS_GUI_iss1116, ESS_GUI_iss879, ESS_GUI_iss959, ESS_GUI_opencl, ESS_GUI_ordering, ESS_GUI_sync_sascalc, costrafo411, magnetic_scatt, release-4.2.2, ticket-1009, ticket-1094-headless, ticket-1242-2d-resolution, ticket-1243, ticket-1249, ticket885, unittest-saveload
- Children:
- 69400ec
- Parents:
- 270c882b (diff), a2e980b (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- Files:
- 3 added
- 81 edited
- 1 moved
.travis.yml  (red8f27e7 → r4636f57)

 language: python
-python:
-  - "2.7"
+
+matrix:
+  include:
+    - os: linux
+      env:
+        - PY=2.7
+        - NUMPYSPEC=numpy
+    - os: osx
+      language: generic
+      env:
+        - PY=2.7
+        - NUMPYSPEC=numpy

 # whitelist
 branches:
   only:
     - master
-# command to install dependencies
-virtualenv:
-  system_site_packages: true
+
+addons:
+  apt:
+    packages:
+      - opencl-headers
+      - fglrx
+      - libblas-dev
+      - libatlas-dev
+      - libatlas-base-dev
+      - liblapack-dev
+      - gfortran
+      - libhdf5-serial-dev
+
 before_install:
-  - 'if [ $TRAVIS_PYTHON_VERSION == "2.7" ]; then sudo apt-get update;sudo apt-get install python-matplotlib libhdf5-serial-dev python-h5py fglrx opencl-headers python-pyopencl gfortran libblas-dev liblapack-dev libatlas-dev; fi'
+  - echo $TRAVIS_OS_NAME
+  - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
+      wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
+      sudo apt-get update; sudo apt-get install python-pyopencl;
+    elif [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
+      wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O miniconda.sh;
+    fi
+
+  - bash miniconda.sh -b -p $HOME/miniconda
+  - export PATH="$HOME/miniconda/bin:$PATH"
+  - hash -r
+  - conda update --yes conda
+
+  # Useful for debugging any issues with conda
+  - conda info -a
+
+  # could install other dependencies, but they're locked to specific
+  # versions in build/requirements.txt
+  - conda install --yes python=$PY $NUMPYSPEC scipy cython pylint wxpython

 install:
   - pip install -r build_tools/requirements.txt
+  - pip install matplotlib

-before_script:
-  - "export DISPLAY=:99.0"
-  - "sh -e /etc/init.d/xvfb start"
-  - sleep 3 # give xvfb some time to start
+#before_script:
+#  - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
+#      "export DISPLAY=:99.0"; "sh -e /etc/init.d/xvfb start"; sleep 3; # give xvfb some time to start
+#    fi

 script:
-  - export WORKSPACE=/home/travis/build/SasView/
-  - cd $WORKSPACE
+  - cd ..
+  # this should be the directory above the sasview directory, where we want to
+  # clone the sasmodels
+  - export WORKSPACE=$(pwd)
   - git clone --depth=50 --branch=master https://github.com/SasView/sasmodels.git sasmodels
-  - export PYTHONPATH=$WORKSPACE/sasview-install:$WORKSPACE/utils:$PYTHONPATH
-  - cd $WORKSPACE
+
+  # required for documentation
+  - git clone --depth=50 --branch=master https://github.com/bumps/bumps.git
+
   - ls -ltr
   - if [ ! -d "utils" ]; then mkdir utils; fi
   - /bin/sh -xe sasview/build_tools/travis_build.sh
-  # - /bin/sh -xe sasview/build_tools/jenkins_linux_test.sh
   - export LC_ALL=en_US.UTF-8
   - export LANG=en_US.UTF-8
INSTALL.txt  (rd09f0ae1 → r26c9b85)

 ================================

+Note - at the current time sasview will only run in gui form under Python 2.

-The build works in the usualy pythonic way:
+Before trying to install and run sasview you'll need to check what
+dependencies are required:
+
+$ python check_packages.py
+
+Many of these are available from PyPi, but some (e.g. h5py) may require more
+involvement to build and install. If you use the conda package manager then
+many of the pre-built dependencies are available there. This may be the easiest
+route if you are on windows.
+
+The build works in the pythonic way:

 $ python setup.py build   # will build the package underneath 'build/'
-$ python setup.py install # will install the package
+$ python setup.py install # will install the package into site-packages
…
 $ python run.py # will run the code in place (building the C code once, if required)

+On OSX or windows you may need to use:

-To check all dependencies are met:
-
-$ python deps.py
-$ python check_packages.py
-
-Both tell you different parts of the story, unfortunately.
+$ pythonw run.py
LICENSE.TXT  (r7c05b63 → ra3e3ef5)

-Copyright (c) 2009-2016, SasView Developers
+Copyright (c) 2009-2017, SasView Developers
 All rights reserved.
build_tools/travis_build.sh  (r68e6ac8 → r8374dc0)

-# Simplified build for TravicCI
+# Simplified build for Travis CI
 # No documentation is built
 export PATH=$PATH:/usr/local/bin/
…
 cd $WORKSPACE/sasview
 $PYTHON setup.py clean
-$PYTHON setup.py build docs bdist_egg
+# $PYTHON setup.py build docs bdist_egg
+$PYTHON setup.py bdist_egg

 # INSTALL SASVIEW
check_packages.py  (r131d94b → rf433e6a)

 """
 Checking and reinstalling the external packages
 """
-import os
+from __future__ import print_function
+
 import sys
…
     sys.modules['Image'] = PIL.Image

+if sys.version_info[0] > 2:
+    print("To use the sasview GUI you must use Python 2\n")

 common_required_package_list = {
-    'setuptools': {'version':'0.6c11','import_name':'setuptools','test':'__version__'},
-    'pyparsing': {'version':'1.5.5','import_name':'pyparsing','test':'__version__'},
-    'html5lib': {'version':'0.95','import_name':'html5lib','test':'__version__'},
-    'reportlab': {'version':'2.5','import_name':'reportlab','test':'Version'},
-    'h5py': {'version':'2.5','import_name':'h5py','test':'__version__'},
-    'lxml': {'version':'2.3','import_name':'lxml.etree','test':'LXML_VERSION'},
-    'PIL': {'version':'1.1.7','import_name':'Image','test':'VERSION'},
-    'pylint': {'version':None,'import_name':'pylint','test':None},
-    'periodictable': {'version':'1.3.0','import_name':'periodictable','test':'__version__'},
-    'bumps': {'version':'0.7.5.9','import_name':'bumps','test':'__version__'},
-    'numpy': {'version':'1.7.1','import_name':'numpy','test':'__version__'},
-    'scipy': {'version':'0.18.0','import_name':'scipy','test':'__version__'},
-    'wx': {'version':'2.8.12.1','import_name':'wx','test':'__version__'},
-    'matplotlib': {'version':'1.1.0','import_name':'matplotlib','test':'__version__'},
-    'xhtml2pdf': {'version':'3.0.33','import_name':'xhtml2pdf','test':'__version__'},
-    'sphinx': {'version':'1.2.1','import_name':'sphinx','test':'__version__'},
-    'unittest-xml-reporting': {'version':'1.10.0','import_name':'xmlrunner','test':'__version__'},
-    'pyopencl': {'version':'2015.1','import_name':'pyopencl','test':'VERSION_TEXT'},
+    'setuptools': {'version': '0.6c11', 'import_name': 'setuptools', 'test': '__version__'},
+    'pyparsing': {'version': '1.5.5', 'import_name': 'pyparsing', 'test': '__version__'},
+    'html5lib': {'version': '0.95', 'import_name': 'html5lib', 'test': '__version__'},
+    'reportlab': {'version': '2.5', 'import_name': 'reportlab', 'test': 'Version'},
+    'h5py': {'version': '2.5', 'import_name': 'h5py', 'test': '__version__'},
+    'lxml': {'version': '2.3', 'import_name': 'lxml.etree', 'test': 'LXML_VERSION'},
+    'PIL': {'version': '1.1.7', 'import_name': 'Image', 'test': 'VERSION'},
+    'pylint': {'version': None, 'import_name': 'pylint', 'test': None},
+    'periodictable': {'version': '1.3.0', 'import_name': 'periodictable', 'test': '__version__'},
+    'bumps': {'version': '0.7.5.9', 'import_name': 'bumps', 'test': '__version__'},
+    'numpy': {'version': '1.7.1', 'import_name': 'numpy', 'test': '__version__'},
+    'scipy': {'version': '0.18.0', 'import_name': 'scipy', 'test': '__version__'},
+    'wx': {'version': '2.8.12.1', 'import_name': 'wx', 'test': '__version__'},
+    'matplotlib': {'version': '1.1.0', 'import_name': 'matplotlib', 'test': '__version__'},
+    'xhtml2pdf': {'version': '3.0.33', 'import_name': 'xhtml2pdf', 'test': '__version__'},
+    'sphinx': {'version': '1.2.1', 'import_name': 'sphinx', 'test': '__version__'},
+    'unittest-xml-reporting': {'version': '1.10.0', 'import_name': 'xmlrunner', 'test': '__version__'},
+    'pyopencl': {'version': '2015.1', 'import_name': 'pyopencl', 'test': 'VERSION_TEXT'},
 }
 win_required_package_list = {
-    'comtypes': {'version':'0.6.2','import_name':'comtypes','test':'__version__'},
-    'pywin': {'version':'217','import_name':'pywin','test':'__version__'},
-    'py2exe': {'version':'0.6.9','import_name':'py2exe','test':'__version__'},
+    'comtypes': {'version': '0.6.2', 'import_name': 'comtypes', 'test': '__version__'},
+    'pywin': {'version': '217', 'import_name': 'pywin', 'test': '__version__'},
+    'py2exe': {'version': '0.6.9', 'import_name': 'py2exe', 'test': '__version__'},
 }
 mac_required_package_list = {
-    'py2app': {'version':None,'import_name':'py2app','test':'__version__'},
+    'py2app': {'version': None, 'import_name': 'py2app', 'test': '__version__'},
 }

 deprecated_package_list = {
-    'pyPdf': {'version':'1.13','import_name':'pyPdf','test':'__version__'},
+    'pyPdf': {'version': '1.13', 'import_name': 'pyPdf', 'test': '__version__'},
 }

-print "Checking Required Package Versions...."
-print
-print "Common Packages"
-for package_name, test_vals in common_required_package_list.iteritems():
+print("Checking Required Package Versions....\n")
+print("Common Packages")
+
+for package_name, test_vals in common_required_package_list.items():
     try:
-        i = __import__(test_vals['import_name'],fromlist=[''])
+        i = __import__(test_vals['import_name'], fromlist=[''])
         if test_vals['test'] == None:
-            print "%s Installed (Unknown version)" % package_name
+            print("%s Installed (Unknown version)" % package_name)
         elif package_name == 'lxml':
-            verstring = str(getattr(i,'LXML_VERSION'))
-            print "%s Version Installed: %s"% (package_name,verstring.replace(', ','.').lstrip('(').rstrip(')'))
+            verstring = str(getattr(i, 'LXML_VERSION'))
+            print("%s Version Installed: %s"% (package_name, verstring.replace(', ', '.').lstrip('(').rstrip(')')))
         else:
-            print "%s Version Installed: %s"% (package_name,getattr(i,test_vals['test']))
-    except:
-        print '%s NOT INSTALLED'% package_name
+            print("%s Version Installed: %s"% (package_name, getattr(i, test_vals['test'])))
+    except ImportError:
+        print('%s NOT INSTALLED'% package_name)

 if sys.platform == 'win32':
-    print
-    print "Windows Specific Packages:"
-    for package_name, test_vals in win_required_package_list.iteritems():
+    print("")
+    print("Windows Specific Packages:")
+    for package_name, test_vals in win_required_package_list.items():
         try:
             if package_name == "pywin":
                 import win32api
                 fixed_file_info = win32api.GetFileVersionInfo(win32api.__file__, '\\')
-                print "%s Version Installed: %s"% (package_name,fixed_file_info['FileVersionLS'] >> 16)
+                print("%s Version Installed: %s"% (package_name, fixed_file_info['FileVersionLS'] >> 16))
             else:
-                i = __import__(test_vals['import_name'],fromlist=[''])
-                print "%s Version Installed: %s"% (package_name,getattr(i,test_vals['test']))
-        except:
-            print '%s NOT INSTALLED'% package_name
+                i = __import__(test_vals['import_name'], fromlist=[''])
+                print("%s Version Installed: %s"% (package_name, getattr(i, test_vals['test'])))
+        except ImportError:
+            print('%s NOT INSTALLED'% package_name)

 if sys.platform == 'darwin':
-    print
-    print "MacOS Specific Packages:"
-    for package_name, test_vals in mac_required_package_list.iteritems():
+    print("")
+    print("MacOS Specific Packages:")
+    for package_name, test_vals in mac_required_package_list.items():
         try:
-            i = __import__(test_vals['import_name'],fromlist=[''])
-            print "%s Version Installed: %s"% (package_name,getattr(i,test_vals['test']))
-        except:
-            print '%s NOT INSTALLED'% package_name
+            i = __import__(test_vals['import_name'], fromlist=[''])
+            print("%s Version Installed: %s"% (package_name, getattr(i, test_vals['test'])))
+        except ImportError:
+            print('%s NOT INSTALLED'% package_name)

-print
-print "Deprecated Packages"
-print "You can remove these unless you need them for other reasons!"
-for package_name, test_vals in deprecated_package_list.iteritems():
+print("")
+print("Deprecated Packages")
+print("You can remove these unless you need them for other reasons!")
+for package_name, test_vals in deprecated_package_list.items():
     try:
-        i = __import__(test_vals['import_name'],fromlist=[''])
+        i = __import__(test_vals['import_name'], fromlist=[''])
         if package_name == 'pyPdf':
             # pyPdf doesn't have the version number internally
-            print 'pyPDF Installed (Version unknown)'
+            print('pyPDF Installed (Version unknown)')
         else:
-            print "%s Version Installed: %s"% (package_name,getattr(i,test_vals['test']))
-    except:
-        print '%s NOT INSTALLED'% package_name
+            print("%s Version Installed: %s"% (package_name, getattr(i, test_vals['test'])))
+    except ImportError:
+        print('%s NOT INSTALLED'% package_name)
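The pattern used throughout check_packages.py — a dynamic import via `__import__` with `fromlist`, then reading a version attribute, with `ImportError` meaning "not installed" — can be sketched in isolation. The package table below is illustrative, not SasView's real dependency list:

```python
# Sketch of the check_packages.py pattern: import a module by name and read
# the attribute that holds its version; an ImportError means "not installed".
# The table below is illustrative, not SasView's real dependency list.
from __future__ import print_function

required = {
    'json': {'import_name': 'json', 'test': '__version__'},
}

def check(packages):
    results = {}
    for name, spec in packages.items():
        try:
            module = __import__(spec['import_name'], fromlist=[''])
            # fall back to 'unknown' when the module lacks the attribute
            results[name] = getattr(module, spec['test'], 'unknown')
        except ImportError:
            results[name] = None  # not installed
    return results

for name, version in check(required).items():
    if version is None:
        print('%s NOT INSTALLED' % name)
    else:
        print('%s Version Installed: %s' % (name, version))
```

Catching `ImportError` specifically (rather than a bare `except:`, as the old code did) is the point of one of the hunks above: a genuinely broken package now raises instead of being silently reported as missing.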
run.py  (r18e7309 → r05a9d29)

 from os.path import abspath, dirname, join as joinpath

+class TeeStream:
+    def __init__(self, filename):
+        self.logfile = open(filename, 'a')
+        self.console = sys.stderr
+    def write(self, buf):
+        self.logfile.write(buf)
+        self.console.write(buf)
+
+def tee_logging():
+    import logging
+    stream = TeeStream(os.path.join(os.path.expanduser("~"), 'sasview.log'))
+    logging.basicConfig(level=logging.INFO,
+                        format='%(asctime)s %(levelname)s %(message)s',
+                        stream=stream)

 def addpath(path):
…
 if __name__ == "__main__":
     prepare()
+    tee_logging()
     from sas.sasview.sasview import run
     run()
sasview/README.txt  (r220b1e7 → r9146ed9)

 1- Features
 ===========
+- New in Version 4.1.0
+------------------
+This incremental release brings a series of new features and improvements,
+and a host of bug fixes. Of particular note are:
+
+- Correlation Function Analysis (Corfunc)
+  This performs a correlation function analysis of one-dimensional SAXS/SANS data,
+  or generates a model-independent volume fraction profile from the SANS from an
+  adsorbed polymer/surfactant layer.
+
+  A correlation function may be interpreted in terms of an imaginary rod moving
+  through the structure of the material. Γ1D(R) is the probability that a rod of
+  length R moving through the material has equal electron/neutron scattering
+  length density at either end. Hence a frequently occurring spacing within a
+  structure manifests itself as a peak.
+
+  A volume fraction profile \Phi(z) describes how the density of polymer
+  segments/surfactant molecules varies with distance from an (assumed locally flat)
+  interface.
+
+- Fitting of SESANS Data
+  Data from Spin-Echo SANS measurements can now be loaded and fitted. The data will
+  be plotted against the correct axes and models will automatically perform a Hankel
+  transform in order to calculate SESANS from a SANS model.
+
+- Documentation
+  The documentation has undergone significant checking and updating.
+
+- Improvements
+  - Correlation function (corfunc) analysis of 1D SAS data added from CCP13
+  - File converter tool for multi-file single column data sets
+  - SESANS data loading and direct fitting using the Hankel transformation
+  - Saving and loading of simultaneous and constrained fits now supported
+  - Save states from SasView v3.x.y now loaded using sasmodel model names
+  - Saving and loading of projects with 2D fits now supported
+  - Loading a project removes all existing data, fits, and plots
+  - Structure factor and form factor can be plotted independently
+  - OpenCL is disabled by default and can be enabled through a fit menu
+  - Data and theory fields are now independently expandable
+- Bug Fixes
+  - Fixes #667: Models computed multiple times on parameters changes
+  - Fixes #673: Custom models override built in models of same name
+  - Fixes #678: Hard crash when running complex models on GPU
+  - Fixes #774: Old style plugin models unloadable
+  - Fixes #789: stacked disk scale doesn't match cylinder model
+  - Fixes #792: core_shell_fractal uses wrong effective radius
+  - Fixes #800: Plot range reset on plot redraws
+  - Fixes #811 and #825: 2D smearing broken
+  - Fixes #815: Integer model parameter handling
+  - Fixes #824: Cannot apply sector averaging when no detector data present
+  - Fixes #830: Cansas HDF5 reader fully compliant with NXCanSAS v1.0 format
+  - Fixes #835: Fractal model breaks with negative Q values
+  - Fixes #843: Multilayer vesicle does not define effective radius
+  - Fixes #858: Hayter MSA S(Q) returns errors
+  - Numerous grammatical and contextual errors in documentation
+
 - New in Version 4.0.1
 ------------------
…
 ===============

+4.1- All systems:
+   The conversion to sasmodels infrastructure is ongoing and should be
+   completed in the next release. In the meantime this leads to a few known
+   issues:
+   - The way that orientation is defined is being refactored to address
+     long standing issues and comments. In release 4.1 however only models
+     with symmetry (e.g. a=b) have been converted to the new definitions.
+     The rest (a <> b <> c - e.g. parallelepiped) maintain the same
+     definition as before and will be converted in 4.2. Note that
+     orientational distribution also makes much more sense in the new
+     framework. The documentation should indicate which definition is being
+     used for a given model.
+   - The infrastructure currently handles internal conversion of old style
+     models so that user created models in previous versions should continue
+     to work for now. At some point in the future such support will go away.
+     Everyone is encouraged to convert to the new structure which should be
+     relatively straightforward and provides a number of benefits.
+   - In that vein, the distributed models and those generated by the new
+     plugin model editor are in the new format, however those generated by
+     sum|multiply models are the old style sum|multiply models. This should
+     also disappear in the near future.
+   - The on the fly discovery of plugin models and changes thereto behave
+     inconsistently. If a change to a plugin model does not seem to
+     register, the Load Plugin Models (under fitting -> Plugin Model
+     Operations) can be used. However, after calling Load Plugin Models, the
+     active plugin will no longer be loaded (even though the GUI looks like
+     it is) unless it is a sum|multiply model which works properly. All
+     others will need to be recalled from the model dropdown menu to reload
+     the model into the calculation engine. While it might be annoying it
+     does not appear to prevent SasView from working.
+   - The model code and documentation review is ongoing. At this time the
+     core shell parallelepiped is known to have the C shell effectively fixed
+     at 0 (noted in documentation) while the triaxial ellipsoid does not seem
+     to reproduce the limit of the oblate or prolate ellipsoid. If errors are
+     found and corrected, corrected versions will be uploaded to the
+     marketplace.
+
 3.1- All systems:
 - The documentation window may take a few seconds to load the first time
sasview/local_config.py  (r73cbeec → r1779e72)

 ''' remember to:'''
 _acknowledgement_preamble_bullet1 =\
-'''Acknowledge its use in your publications as suggested below;'''
+'''Acknowledge its use in your publications as :'''
 _acknowledgement_preamble_bullet2 =\
-'''Reference SasView as : M. Doucet, et al. SasView Version 4.0, Zenodo''' +\
-''', http://doi.org/10.5281/zenodo.159083;'''
+'''Reference SasView as:'''
 _acknowledgement_preamble_bullet3 =\
-'''Reference the model you used if appropriate (see documentation for refs);'''
+'''Reference the model you used if appropriate (see documentation for refs)'''
 _acknowledgement_preamble_bullet4 =\
 '''Send us your reference for our records: developers@sasview.org'''
 _acknowledgement_publications = \
-'''This work benefited from the use of the SasView application, originally developed under NSF Award
-DMR-0520547. SasView also contains code developed with funding from the EU Horizon 2020 programme
-under the SINE2020 project Grant No 654000, and by Patrick O'Brien & Adam Washington.'''
+'''This work benefited from the use of the SasView application, originally developed under NSF Award DMR-0520547. SasView also contains code developed with funding from the EU Horizon 2020 programme under the SINE2020 project Grant No 654000.'''
+_acknowledgement_citation = \
+'''M. Doucet et al. SasView Version 4.1, Zenodo, 10.5281/zenodo.438138'''

 _acknowledgement = \
-'''This work was originally developed as part of the DANSE project funded by the US NSF under Award DMR-0520547, but is currently maintained
-by a collaboration between UTK, UMD, NIST, ORNL, ISIS, ESS, ILL, ANSTO and TU Delft. SasView also contains code developed with funding from the
-EU Horizon 2020 programme under the SINE2020 project (Grant No 654000), and by Patrick O'Brien (pycrust) and Adam Washington (corfunc-py).'''
+'''This work was originally developed as part of the DANSE project funded by the US NSF under Award DMR-0520547,\n but is currently maintained by a collaboration between UTK, UMD, NIST, ORNL, ISIS, ESS, ILL, ANSTO, TU Delft, DLS, and the scattering community.\n\n SasView also contains code developed with funding from the EU Horizon 2020 programme under the SINE2020 project (Grant No 654000).\nA list of individual contributors can be found at: https://github.com/orgs/SasView/people
+'''

 _homepage = "http://www.sasview.org"
…
 _ansto_logo = os.path.join(icon_path, "ansto_logo.png")
 _tudelft_logo = os.path.join(icon_path, "tudelft_logo.png")
+_dls_logo = os.path.join(icon_path, "dls_logo.png")
 _nsf_logo = os.path.join(icon_path, "nsf_logo.png")
 _danse_logo = os.path.join(icon_path, "danse_logo.png")
…
 _ansto_url = "http://www.ansto.gov.au/"
 _tudelft_url = "http://www.tnw.tudelft.nl/en/cooperation/facilities/reactor-instituut-delft/"
+_dls_url = "http://www.diamond.ac.uk/"
 _danse_url = "http://www.cacr.caltech.edu/projects/danse/release/index.html"
 _inst_url = "http://www.utk.edu"
 _corner_image = os.path.join(icon_path, "angles_flat.png")
 _welcome_image = os.path.join(icon_path, "SVwelcome.png")
-_copyright = "(c) 2009 - 2016, UTK, UMD, NIST, ORNL, ISIS, ESS, ILL, ANSTO and TU Delft"
+_copyright = "(c) 2009 - 2017, UTK, UMD, NIST, ORNL, ISIS, ESS, ILL, ANSTO, TU Delft and DLS"
 marketplace_url = "http://marketplace.sasview.org/"
setup.py  (r1e13b53 → red2276f)

 from distutils.command.build_ext import build_ext
 from distutils.core import Command
-import numpy
+import numpy as np

 # Manage version number ######################################
…
     ext_modules.append( Extension("sas.sascalc.file_converter.core.bsl_loader",
                         sources = [os.path.join(mydir, "bsl_loader.c")],
-                        include_dirs=[numpy.get_include()],
+                        include_dirs=[np.get_include()],
                         ) )
…
                   'test/1d_data/*',
                   'test/2d_data/*',
+                  'test/convertible_files/*',
+                  'test/coordinate_data/*',
+                  'test/image_data/*',
+                  'test/media/*',
+                  'test/other_files/*',
                   'test/save_states/*',
-                  'test/upcoming_formats/*',
-                  'default_categories.json']
+                  'test/sesans_data/*'
+                  ]
 packages.append("sas.sasview")
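The bsl_loader extension above compiles C code against the NumPy C API, which is why `include_dirs` must contain `np.get_include()` — the directory holding NumPy's headers. A minimal sketch of that pattern, using `setuptools.Extension` rather than the distutils machinery setup.py itself uses, and with an illustrative source filename:

```python
# Sketch: a C extension that uses the NumPy C API needs NumPy's header
# directory on the include path, obtained from np.get_include().
import numpy as np
from setuptools import Extension

ext = Extension(
    "sas.sascalc.file_converter.core.bsl_loader",
    sources=["bsl_loader.c"],            # illustrative path
    include_dirs=[np.get_include()],     # where numpy/arrayobject.h lives
)
```

Aliasing `import numpy as np` project-wide (the bulk of this changeset) is the conventional NumPy idiom and keeps lines like this short.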
src/examples/test_chisq_panel.py  (rc10d9d6c → r9a5097c)

 from sas.sasgui.plottools.plottables import Plottable, Graph, Data1D, Theory1D
 import sys
-import numpy
+import numpy as np
…
 # Construct a simple graph
 if False:
-    x = numpy.array([1,2,3,4,5,6],'d')
-    y = numpy.array([4,5,26,5,4,-1],'d')
-    dy = numpy.array([0.2, 0.3, 0.1, 0.2, 0.9, 0.3])
+    x = np.array([1,2,3,4,5,6],'d')
+    y = np.array([4,5,26,5,4,-1],'d')
+    dy = np.array([0.2, 0.3, 0.1, 0.2, 0.9, 0.3])
 else:
-    x = numpy.linspace(0,2.0, 50)
-    y = numpy.sin(2*numpy.pi*x*2.8)
-    dy = numpy.sqrt(100*numpy.abs(y))/100
+    x = np.linspace(0,2.0, 50)
+    y = np.sin(2*np.pi*x*2.8)
+    dy = np.sqrt(100*np.abs(y))/100

 from sas.sasgui.plottools.plottables import Data1D, Theory1D, Chisq, Graph
src/examples/test_copy_print.py  (rd7bb526 → r9a5097c)

 import sys
 sys.platform = 'win95'
-import numpy
+import numpy as np
…
 def sample_graph():
     # Construct a simple graph
-    x = numpy.linspace(0,2.0, 50)
-    y = numpy.sin(2*numpy.pi*x*2.8)
-    dy = numpy.sqrt(100*numpy.abs(y))/100
+    x = np.linspace(0,2.0, 50)
+    y = np.sin(2*np.pi*x*2.8)
+    dy = np.sqrt(100*np.abs(y))/100

     data = Data1D(x,y,dy=dy)
src/examples/test_panel.py  (rd7bb526 → r9a5097c)

 from sas.sasgui.plottools.PlotPanel import PlotPanel
 from sas.sasgui.plottools.plottables import Data1D
-import
-import numpy
+import sys
+import numpy as np
 import random, math
…
     def _add_data(self, event):
         data_len = 50
-        x = numpy.zeros(data_len)
-        y = numpy.zeros(data_len)
-        x2 = numpy.zeros(data_len)
-        y2 = numpy.zeros(data_len)
-        dy2 = numpy.zeros(data_len)
-        x3 = numpy.zeros(data_len)
-        y3 = numpy.zeros(data_len)
-        dy3 = numpy.zeros(data_len)
+        x = np.zeros(data_len)
+        y = np.zeros(data_len)
+        x2 = np.zeros(data_len)
+        y2 = np.zeros(data_len)
+        dy2 = np.zeros(data_len)
+        x3 = np.zeros(data_len)
+        y3 = np.zeros(data_len)
+        dy3 = np.zeros(data_len)
         for i in range(len(x)):
             x[i] = i
src/examples/test_panel2D.py  (rd7bb526 → r9a5097c)

 from sas.sasgui.plottools.plottables import Data1D, Theory1D, Data2D
 import sys,os
-import numpy
+import numpy as np
 import random, math
…
     def _add_data(self, event):
         data_len = 50
-        x = numpy.zeros(data_len)
-        y = numpy.zeros(data_len)
-        x2 = numpy.zeros(data_len)
-        y2 = numpy.zeros(data_len)
-        dy2 = numpy.zeros(data_len)
-        x3 = numpy.zeros(data_len)
-        y3 = numpy.zeros(data_len)
-        dy3 = numpy.zeros(data_len)
+        x = np.zeros(data_len)
+        y = np.zeros(data_len)
+        x2 = np.zeros(data_len)
+        y2 = np.zeros(data_len)
+        dy2 = np.zeros(data_len)
+        x3 = np.zeros(data_len)
+        y3 = np.zeros(data_len)
+        dy3 = np.zeros(data_len)
         for i in range(len(x)):
             x[i] = i
src/sas/sascalc/calculator/BaseComponent.py  (rdeddda1 → r9a5097c)

 from collections import OrderedDict

-import numpy
+import numpy as np
 #TO DO: that about a way to make the parameter
 #is self return if it is fittable or not
…
         Then get ::

-            q = numpy.sqrt(qx_prime^2+qy_prime^2)
+            q = np.sqrt(qx_prime^2+qy_prime^2)

         that is a qr in 1D array; ::
…
         # calculate q_r component for 2D isotropic
-        q = numpy.sqrt(qx**2+qy**2)
+        q = np.sqrt(qx**2+qy**2)
         # vectorize the model function runXY
-        v_model = numpy.vectorize(self.runXY, otypes=[float])
+        v_model = np.vectorize(self.runXY, otypes=[float])
         # calculate the scattering
         iq_array = v_model(q)
…
         elif qdist.__class__.__name__ == 'ndarray':
             # We have a simple 1D distribution of q-values
-            v_model = numpy.vectorize(self.runXY, otypes=[float])
+            v_model = np.vectorize(self.runXY, otypes=[float])
             iq_array = v_model(qdist)
         return iq_array
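The evalDistribution flow in the hunks above — combine a `[qx, qy]` pair into `q_r = sqrt(qx**2 + qy**2)` and vectorize a scalar kernel over it, or treat a plain ndarray as 1D q values — can be sketched with a stand-in model function (`runXY` here is illustrative, not a real SasView kernel):

```python
# Sketch of BaseComponent.evalDistribution: a [qx, qy] pair is reduced to
# q_r for the 2D isotropic case; a plain ndarray is 1D q values. runXY is
# a stand-in scalar kernel, not an actual SasView model.
import numpy as np

def runXY(q):
    # illustrative kernel: I(q) = 1 / (1 + q^2)
    return 1.0 / (1.0 + q * q)

def eval_distribution(qdist):
    if isinstance(qdist, list):
        qx, qy = qdist
        q = np.sqrt(qx**2 + qy**2)   # q_r for each (qx, qy) pair
    else:
        q = qdist                    # simple 1D distribution of q values
    v_model = np.vectorize(runXY, otypes=[float])
    return v_model(q)
```

`np.vectorize` is a convenience loop, not a speedup; it lets a scalar-only model be applied elementwise to whole q arrays.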
src/sas/sascalc/calculator/instrument.py  (rb699768 → r9a5097c)

 control instrumental parameters
 """
-import numpy
+import numpy as np

 # defaults in cgs unit
…
         self.spectrum = self.get_default_spectrum()
         # intensity in counts/sec
-        self.intensity = numpy.interp(self.wavelength,
+        self.intensity = np.interp(self.wavelength,
                                       self.spectrum[0],
                                       self.spectrum[1],
…
         """
         spectrum = self.spectrum
-        intensity = numpy.interp(self.wavelength,
+        intensity = np.interp(self.wavelength,
                                  spectrum[0],
                                  spectrum[1],
…
         self.wavelength = wavelength
         validate(wavelength)
-        self.intensity = numpy.interp(self.wavelength,
+        self.intensity = np.interp(self.wavelength,
                                       self.spectrum[0],
                                       self.spectrum[1],
…
         get default spectrum
         """
-        return numpy.array(_LAMBDA_ARRAY)
+        return np.array(_LAMBDA_ARRAY)

     def get_band(self):
…
         get list of the intensity wrt wavelength_list
         """
-        out = numpy.interp(self.wavelength_list,
+        out = np.interp(self.wavelength_list,
                            self.spectrum[0],
                            self.spectrum[1],
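The intensity lookups above all reduce to `np.interp` over the stored (wavelength, intensity) spectrum: linear interpolation inside the tabulated range, clamped to the end values outside it. A sketch with made-up spectrum values:

```python
# Sketch of the spectrum lookup in instrument.py: intensity at a given
# wavelength is linear interpolation over the stored spectrum table.
# The spectrum values here are made up for illustration.
import numpy as np

spectrum = (np.array([4.0, 5.0, 6.0]),     # wavelength [A]
            np.array([10.0, 20.0, 40.0]))  # intensity [counts/sec]

def intensity_at(wavelength):
    # np.interp(x, xp, fp): linear between table points, clamped outside
    return np.interp(wavelength, spectrum[0], spectrum[1])
```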
src/sas/sascalc/calculator/resolution_calculator.py  (rb699768 → r9a5097c)

 from math import sqrt
 import math
-import numpy
+import numpy as np
 import sys
 import logging
…
         dx_size = (self.qx_max - self.qx_min) / (1000 - 1)
         dy_size = (self.qy_max - self.qy_min) / (1000 - 1)
-        x_val = numpy.arange(self.qx_min, self.qx_max, dx_size)
-        y_val = numpy.arange(self.qy_max, self.qy_min, -dy_size)
-        q_1, q_2 = numpy.meshgrid(x_val, y_val)
+        x_val = np.arange(self.qx_min, self.qx_max, dx_size)
+        y_val = np.arange(self.qy_max, self.qy_min, -dy_size)
+        q_1, q_2 = np.meshgrid(x_val, y_val)
         #q_phi = numpy.arctan(q_1,q_2)
         # check whether polar or cartesian
…
         x_value = x_val - x0_val
         y_value = y_val - y0_val
-        phi_i = numpy.arctan2(y_val, x_val)
+        phi_i = np.arctan2(y_val, x_val)

         # phi correction due to the gravity shift (in phi)
…
         phi_i = phi_i - phi_0 + self.gravity_phi

-        sin_phi = numpy.sin(self.gravity_phi)
-        cos_phi = numpy.cos(self.gravity_phi)
+        sin_phi = np.sin(self.gravity_phi)
+        cos_phi = np.cos(self.gravity_phi)

         x_p = x_value * cos_phi + y_value * sin_phi
…
         nu_value = -0.5 * (new_x * new_x + new_y * new_y)

-        gaussian = numpy.exp(nu_value)
+        gaussian = np.exp(nu_value)
         # normalizing factor correction
         gaussian /= gaussian.sum()
…
         nu_value *= nu_value
         nu_value *= -0.5
-        gaussian *= numpy.exp(nu_value)
+        gaussian *= np.exp(nu_value)
         gaussian /= sigma
         # normalize
…
                                  offset_x, offset_y)
         # distance [cm] from the beam center on detector plane
-        detector_ind_x = numpy.arange(detector_pix_nums_x)
-        detector_ind_y = numpy.arange(detector_pix_nums_y)
+        detector_ind_x = np.arange(detector_pix_nums_x)
+        detector_ind_y = np.arange(detector_pix_nums_y)

         # shif 0.5 pixel so that pix position is at the center of the pixel
…
         detector_ind_y = detector_ind_y * pix_y_size

-        qx_value = numpy.zeros(len(detector_ind_x))
-        qy_value = numpy.zeros(len(detector_ind_y))
+        qx_value = np.zeros(len(detector_ind_x))
+        qy_value = np.zeros(len(detector_ind_y))
         i = 0
…
         # p min and max values among the center of pixels
-        self.qx_min = numpy.min(qx_value)
-        self.qx_max = numpy.max(qx_value)
-        self.qy_min = numpy.min(qy_value)
-        self.qy_max = numpy.max(qy_value)
+        self.qx_min = np.min(qx_value)
+        self.qx_max = np.max(qx_value)
+        self.qy_min = np.min(qy_value)
+        self.qy_max = np.max(qy_value)

         # Appr. min and max values of the detector display limits
…
         from sas.sascalc.dataloader.data_info import Data2D
         output = Data2D()
-        inten = numpy.zeros_like(qx_value)
+        inten = np.zeros_like(qx_value)
         output.data = inten
         output.qx_data = qx_value
…
         plane_dist = dx_size
         # full scattering angle on the x-axis
-        theta = numpy.arctan(plane_dist / det_dist)
-        qx_value = (2.0 * pi / wavelength) * numpy.sin(theta)
+        theta = np.arctan(plane_dist / det_dist)
+        qx_value = (2.0 * pi / wavelength) * np.sin(theta)
         return qx_value
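Two steps recur in the resolution-calculator hunks above: building the (qx, qy) mesh with `np.meshgrid`, and normalizing a gaussian weight array by its sum so it integrates to one over the grid. A sketch with illustrative grid limits and width:

```python
# Sketch of the q-grid and gaussian-normalization steps used by the
# resolution calculator. Grid limits, point count, and sigma are
# illustrative, not instrument values.
import numpy as np

qx_min, qx_max, qy_min, qy_max, n = -0.3, 0.3, -0.3, 0.3, 101
dx_size = (qx_max - qx_min) / (n - 1)
dy_size = (qy_max - qy_min) / (n - 1)
x_val = np.arange(qx_min, qx_max, dx_size)
y_val = np.arange(qy_max, qy_min, -dy_size)   # y runs top to bottom
q_1, q_2 = np.meshgrid(x_val, y_val)          # full 2D (qx, qy) grid

sigma = 0.1                                   # illustrative gaussian width
nu_value = -0.5 * (q_1**2 + q_2**2) / sigma**2
gaussian = np.exp(nu_value)
gaussian /= gaussian.sum()                    # normalizing factor correction
```

After the division, the array is a discrete probability weight: summing any smeared quantity against it conserves total intensity.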
TabularUnified src/sas/sascalc/calculator/sas_gen.py ¶
rd2fd8fc r9a5097c 7 7 from periodictable import formula 8 8 from periodictable import nsf 9 import numpy 9 import numpy as np 10 10 import os 11 11 import copy … … 80 80 ## Parameter details [units, min, max] 81 81 self.details = {} 82 self.details['scale'] = ['', 0.0, n umpy.inf]83 self.details['background'] = ['[1/cm]', 0.0, n umpy.inf]84 self.details['solvent_SLD'] = ['1/A^(2)', -n umpy.inf, numpy.inf]85 self.details['total_volume'] = ['A^(3)', 0.0, n umpy.inf]82 self.details['scale'] = ['', 0.0, np.inf] 83 self.details['background'] = ['[1/cm]', 0.0, np.inf] 84 self.details['solvent_SLD'] = ['1/A^(2)', -np.inf, np.inf] 85 self.details['total_volume'] = ['A^(3)', 0.0, np.inf] 86 86 self.details['Up_frac_in'] = ['[u/(u+d)]', 0.0, 1.0] 87 87 self.details['Up_frac_out'] = ['[u/(u+d)]', 0.0, 1.0] 88 self.details['Up_theta'] = ['[deg]', -n umpy.inf, numpy.inf]88 self.details['Up_theta'] = ['[deg]', -np.inf, np.inf] 89 89 # fixed parameters 90 90 self.fixed = [] … … 171 171 msg = "Not a 1D." 
172 172 raise ValueError, msg 173 i_out = n umpy.zeros_like(x[0])173 i_out = np.zeros_like(x[0]) 174 174 # 1D I is found at y =0 in the 2D pattern 175 175 out = self._gen(x[0], [], i_out) … … 187 187 """ 188 188 if x.__class__.__name__ == 'list': 189 i_out = n umpy.zeros_like(x[0])189 i_out = np.zeros_like(x[0]) 190 190 out = self._gen(x[0], x[1], i_out) 191 191 return out … … 237 237 self.omfdata = omfdata 238 238 length = int(omfdata.xnodes * omfdata.ynodes * omfdata.znodes) 239 pos_x = n umpy.arange(omfdata.xmin,239 pos_x = np.arange(omfdata.xmin, 240 240 omfdata.xnodes*omfdata.xstepsize + omfdata.xmin, 241 241 omfdata.xstepsize) 242 pos_y = n umpy.arange(omfdata.ymin,242 pos_y = np.arange(omfdata.ymin, 243 243 omfdata.ynodes*omfdata.ystepsize + omfdata.ymin, 244 244 omfdata.ystepsize) 245 pos_z = n umpy.arange(omfdata.zmin,245 pos_z = np.arange(omfdata.zmin, 246 246 omfdata.znodes*omfdata.zstepsize + omfdata.zmin, 247 247 omfdata.zstepsize) 248 self.pos_x = n umpy.tile(pos_x, int(omfdata.ynodes * omfdata.znodes))248 self.pos_x = np.tile(pos_x, int(omfdata.ynodes * omfdata.znodes)) 249 249 self.pos_y = pos_y.repeat(int(omfdata.xnodes)) 250 self.pos_y = n umpy.tile(self.pos_y, int(omfdata.znodes))250 self.pos_y = np.tile(self.pos_y, int(omfdata.znodes)) 251 251 self.pos_z = pos_z.repeat(int(omfdata.xnodes * omfdata.ynodes)) 252 252 self.mx = omfdata.mx 253 253 self.my = omfdata.my 254 254 self.mz = omfdata.mz 255 self.sld_n = n umpy.zeros(length)255 self.sld_n = np.zeros(length) 256 256 257 257 if omfdata.mx == None: 258 self.mx = n umpy.zeros(length)258 self.mx = np.zeros(length) 259 259 if omfdata.my == None: 260 self.my = n umpy.zeros(length)260 self.my = np.zeros(length) 261 261 if omfdata.mz == None: 262 self.mz = n umpy.zeros(length)262 self.mz = np.zeros(length) 263 263 264 264 self._check_data_length(length) 265 265 self.remove_null_points(False, False) 266 mask = n umpy.ones(len(self.sld_n), dtype=bool)266 mask = np.ones(len(self.sld_n), dtype=bool) 267 
267 if shape.lower() == 'ellipsoid': 268 268 try: … … 328 328 """ 329 329 if remove: 330 is_nonzero = (n umpy.fabs(self.mx) + numpy.fabs(self.my) +331 n umpy.fabs(self.mz)).nonzero()330 is_nonzero = (np.fabs(self.mx) + np.fabs(self.my) + 331 np.fabs(self.mz)).nonzero() 332 332 if len(is_nonzero[0]) > 0: 333 333 self.pos_x = self.pos_x[is_nonzero] … … 369 369 """ 370 370 desc = "" 371 mx = n umpy.zeros(0)372 my = n umpy.zeros(0)373 mz = n umpy.zeros(0)371 mx = np.zeros(0) 372 my = np.zeros(0) 373 mz = np.zeros(0) 374 374 try: 375 375 input_f = open(path, 'rb') … … 389 389 _my = mag2sld(_my, valueunit) 390 390 _mz = mag2sld(_mz, valueunit) 391 mx = n umpy.append(mx, _mx)392 my = n umpy.append(my, _my)393 mz = n umpy.append(mz, _mz)391 mx = np.append(mx, _mx) 392 my = np.append(my, _my) 393 mz = np.append(mz, _mz) 394 394 except: 395 395 # Skip non-data lines … … 501 501 :raise RuntimeError: when the file can't be opened 502 502 """ 503 pos_x = n umpy.zeros(0)504 pos_y = n umpy.zeros(0)505 pos_z = n umpy.zeros(0)506 sld_n = n umpy.zeros(0)507 sld_mx = n umpy.zeros(0)508 sld_my = n umpy.zeros(0)509 sld_mz = n umpy.zeros(0)510 vol_pix = n umpy.zeros(0)511 pix_symbol = n umpy.zeros(0)503 pos_x = np.zeros(0) 504 pos_y = np.zeros(0) 505 pos_z = np.zeros(0) 506 sld_n = np.zeros(0) 507 sld_mx = np.zeros(0) 508 sld_my = np.zeros(0) 509 sld_mz = np.zeros(0) 510 vol_pix = np.zeros(0) 511 pix_symbol = np.zeros(0) 512 512 x_line = [] 513 513 y_line = [] … … 543 543 _pos_y = float(line[38:46].strip()) 544 544 _pos_z = float(line[46:54].strip()) 545 pos_x = n umpy.append(pos_x, _pos_x)546 pos_y = n umpy.append(pos_y, _pos_y)547 pos_z = n umpy.append(pos_z, _pos_z)545 pos_x = np.append(pos_x, _pos_x) 546 pos_y = np.append(pos_y, _pos_y) 547 pos_z = np.append(pos_z, _pos_z) 548 548 try: 549 549 val = nsf.neutron_sld(atom_name)[0] 550 550 # sld in Ang^-2 unit 551 551 val *= 1.0e-6 552 sld_n = n umpy.append(sld_n, val)552 sld_n = np.append(sld_n, val) 553 553 atom = formula(atom_name) 
554 554 # cm to A units 555 555 vol = 1.0e+24 * atom.mass / atom.density / NA 556 vol_pix = n umpy.append(vol_pix, vol)556 vol_pix = np.append(vol_pix, vol) 557 557 except: 558 558 print "Error: set the sld of %s to zero"% atom_name 559 sld_n = n umpy.append(sld_n, 0.0)560 sld_mx = n umpy.append(sld_mx, 0)561 sld_my = n umpy.append(sld_my, 0)562 sld_mz = n umpy.append(sld_mz, 0)563 pix_symbol = n umpy.append(pix_symbol, atom_name)559 sld_n = np.append(sld_n, 0.0) 560 sld_mx = np.append(sld_mx, 0) 561 sld_my = np.append(sld_my, 0) 562 sld_mz = np.append(sld_mz, 0) 563 pix_symbol = np.append(pix_symbol, atom_name) 564 564 elif line[0:6].strip().count('CONECT') > 0: 565 565 toks = line.split() … … 630 630 """ 631 631 try: 632 pos_x = n umpy.zeros(0)633 pos_y = n umpy.zeros(0)634 pos_z = n umpy.zeros(0)635 sld_n = n umpy.zeros(0)636 sld_mx = n umpy.zeros(0)637 sld_my = n umpy.zeros(0)638 sld_mz = n umpy.zeros(0)632 pos_x = np.zeros(0) 633 pos_y = np.zeros(0) 634 pos_z = np.zeros(0) 635 sld_n = np.zeros(0) 636 sld_mx = np.zeros(0) 637 sld_my = np.zeros(0) 638 sld_mz = np.zeros(0) 639 639 try: 640 640 # Use numpy to speed up loading 641 input_f = n umpy.loadtxt(path, dtype='float', skiprows=1,641 input_f = np.loadtxt(path, dtype='float', skiprows=1, 642 642 ndmin=1, unpack=True) 643 pos_x = n umpy.array(input_f[0])644 pos_y = n umpy.array(input_f[1])645 pos_z = n umpy.array(input_f[2])646 sld_n = n umpy.array(input_f[3])647 sld_mx = n umpy.array(input_f[4])648 sld_my = n umpy.array(input_f[5])649 sld_mz = n umpy.array(input_f[6])643 pos_x = np.array(input_f[0]) 644 pos_y = np.array(input_f[1]) 645 pos_z = np.array(input_f[2]) 646 sld_n = np.array(input_f[3]) 647 sld_mx = np.array(input_f[4]) 648 sld_my = np.array(input_f[5]) 649 sld_mz = np.array(input_f[6]) 650 650 ncols = len(input_f) 651 651 if ncols == 8: 652 vol_pix = n umpy.array(input_f[7])652 vol_pix = np.array(input_f[7]) 653 653 elif ncols == 7: 654 654 vol_pix = None … … 669 669 _sld_my = float(toks[5]) 670 
670 _sld_mz = float(toks[6]) 671 pos_x = n umpy.append(pos_x, _pos_x)672 pos_y = n umpy.append(pos_y, _pos_y)673 pos_z = n umpy.append(pos_z, _pos_z)674 sld_n = n umpy.append(sld_n, _sld_n)675 sld_mx = n umpy.append(sld_mx, _sld_mx)676 sld_my = n umpy.append(sld_my, _sld_my)677 sld_mz = n umpy.append(sld_mz, _sld_mz)671 pos_x = np.append(pos_x, _pos_x) 672 pos_y = np.append(pos_y, _pos_y) 673 pos_z = np.append(pos_z, _pos_z) 674 sld_n = np.append(sld_n, _sld_n) 675 sld_mx = np.append(sld_mx, _sld_mx) 676 sld_my = np.append(sld_my, _sld_my) 677 sld_mz = np.append(sld_mz, _sld_mz) 678 678 try: 679 679 _vol_pix = float(toks[7]) 680 vol_pix = n umpy.append(vol_pix, _vol_pix)680 vol_pix = np.append(vol_pix, _vol_pix) 681 681 except: 682 682 vol_pix = None … … 712 712 sld_n = data.sld_n 713 713 if sld_n == None: 714 sld_n = n umpy.zeros(length)714 sld_n = np.zeros(length) 715 715 sld_mx = data.sld_mx 716 716 if sld_mx == None: 717 sld_mx = n umpy.zeros(length)718 sld_my = n umpy.zeros(length)719 sld_mz = n umpy.zeros(length)717 sld_mx = np.zeros(length) 718 sld_my = np.zeros(length) 719 sld_mz = np.zeros(length) 720 720 else: 721 721 sld_my = data.sld_my … … 893 893 if self.is_data: 894 894 # For data, put the value to only the pixels w non-zero M 895 is_nonzero = (n umpy.fabs(self.sld_mx) +896 n umpy.fabs(self.sld_my) +897 n umpy.fabs(self.sld_mz)).nonzero()898 self.sld_n = n umpy.zeros(len(self.pos_x))895 is_nonzero = (np.fabs(self.sld_mx) + 896 np.fabs(self.sld_my) + 897 np.fabs(self.sld_mz)).nonzero() 898 self.sld_n = np.zeros(len(self.pos_x)) 899 899 if len(self.sld_n[is_nonzero]) > 0: 900 900 self.sld_n[is_nonzero] = sld_n … … 903 903 else: 904 904 # For non-data, put the value to all the pixels 905 self.sld_n = n umpy.ones(len(self.pos_x)) * sld_n905 self.sld_n = np.ones(len(self.pos_x)) * sld_n 906 906 else: 907 907 self.sld_n = sld_n … … 912 912 """ 913 913 if sld_mx.__class__.__name__ == 'float': 914 self.sld_mx = n umpy.ones(len(self.pos_x)) * sld_mx914 
self.sld_mx = np.ones(len(self.pos_x)) * sld_mx 915 915 else: 916 916 self.sld_mx = sld_mx 917 917 if sld_my.__class__.__name__ == 'float': 918 self.sld_my = n umpy.ones(len(self.pos_x)) * sld_my918 self.sld_my = np.ones(len(self.pos_x)) * sld_my 919 919 else: 920 920 self.sld_my = sld_my 921 921 if sld_mz.__class__.__name__ == 'float': 922 self.sld_mz = n umpy.ones(len(self.pos_x)) * sld_mz922 self.sld_mz = np.ones(len(self.pos_x)) * sld_mz 923 923 else: 924 924 self.sld_mz = sld_mz 925 925 926 sld_m = n umpy.sqrt(sld_mx * sld_mx + sld_my * sld_my + \926 sld_m = np.sqrt(sld_mx * sld_mx + sld_my * sld_my + \ 927 927 sld_mz * sld_mz) 928 928 self.sld_m = sld_m … … 936 936 return 937 937 if symbol.__class__.__name__ == 'str': 938 self.pix_symbol = n umpy.repeat(symbol, len(self.sld_n))938 self.pix_symbol = np.repeat(symbol, len(self.sld_n)) 939 939 else: 940 940 self.pix_symbol = symbol … … 950 950 self.vol_pix = vol 951 951 elif vol.__class__.__name__.count('float') > 0: 952 self.vol_pix = n umpy.repeat(vol, len(self.sld_n))952 self.vol_pix = np.repeat(vol, len(self.sld_n)) 953 953 else: 954 954 self.vol_pix = None … … 993 993 for x_pos in self.pos_x: 994 994 if xpos_pre != x_pos: 995 self.xstepsize = n umpy.fabs(x_pos - xpos_pre)995 self.xstepsize = np.fabs(x_pos - xpos_pre) 996 996 break 997 997 for y_pos in self.pos_y: 998 998 if ypos_pre != y_pos: 999 self.ystepsize = n umpy.fabs(y_pos - ypos_pre)999 self.ystepsize = np.fabs(y_pos - ypos_pre) 1000 1000 break 1001 1001 for z_pos in self.pos_z: 1002 1002 if zpos_pre != z_pos: 1003 self.zstepsize = n umpy.fabs(z_pos - zpos_pre)1003 self.zstepsize = np.fabs(z_pos - zpos_pre) 1004 1004 break 1005 1005 #default pix volume 1006 self.vol_pix = n umpy.ones(len(self.pos_x))1006 self.vol_pix = np.ones(len(self.pos_x)) 1007 1007 vol = self.xstepsize * self.ystepsize * self.zstepsize 1008 1008 self.set_pixel_volumes(vol) … … 1071 1071 y2 = output.pos_y+output.sld_my/max_m * gap 1072 1072 z2 = output.pos_z+output.sld_mz/max_m 
* gap 1073 x_arrow = n umpy.column_stack((output.pos_x, x2))1074 y_arrow = n umpy.column_stack((output.pos_y, y2))1075 z_arrow = n umpy.column_stack((output.pos_z, z2))1073 x_arrow = np.column_stack((output.pos_x, x2)) 1074 y_arrow = np.column_stack((output.pos_y, y2)) 1075 z_arrow = np.column_stack((output.pos_z, z2)) 1076 1076 unit_x2 = output.sld_mx / max_m 1077 1077 unit_y2 = output.sld_my / max_m 1078 1078 unit_z2 = output.sld_mz / max_m 1079 color_x = n umpy.fabs(unit_x2 * 0.8)1080 color_y = n umpy.fabs(unit_y2 * 0.8)1081 color_z = n umpy.fabs(unit_z2 * 0.8)1082 colors = n umpy.column_stack((color_x, color_y, color_z))1079 color_x = np.fabs(unit_x2 * 0.8) 1080 color_y = np.fabs(unit_y2 * 0.8) 1081 color_z = np.fabs(unit_z2 * 0.8) 1082 colors = np.column_stack((color_x, color_y, color_z)) 1083 1083 plt.show() 1084 1084 … … 1103 1103 model = GenSAS() 1104 1104 model.set_sld_data(foutput.output) 1105 x = n umpy.arange(1000)/10000. + 1e-51106 y = n umpy.arange(1000)/10000. + 1e-51107 i = n umpy.zeros(1000)1105 x = np.arange(1000)/10000. + 1e-5 1106 y = np.arange(1000)/10000. + 1e-5 1107 i = np.zeros(1000) 1108 1108 model.runXY([x, y, i]) 1109 1109 -
TabularUnified src/sas/sascalc/calculator/slit_length_calculator.py ¶
rb699768 rbfba720 16 16 # y data 17 17 self.y = None 18 # default slit length18 # default slit length 19 19 self.slit_length = 0.0 20 20 … … 42 42 """ 43 43 # None data do nothing 44 if self.y == None or self.x ==None:44 if self.y is None or self.x is None: 45 45 return 46 46 # set local variable … … 54 54 y_sum = 0.0 55 55 y_max = 0.0 56 ind = 0 .056 ind = 0 57 57 58 58 # sum 10 or more y values until getting max_y, … … 70 70 # defaults 71 71 y_half_d = 0.0 72 ind = 0 .072 ind = 0 73 73 # find indices where it crosses y = y_half. 74 74 while True: … … 81 81 82 82 # y value and ind just before passed the spot of the half height 83 y_half_u = y[ind -1]83 y_half_u = y[ind - 1] 84 84 85 85 # get corresponding x values 86 86 x_half_d = x[ind] 87 x_half_u = x[ind -1]87 x_half_u = x[ind - 1] 88 88 89 89 # calculate x at y = y_half using linear interpolation … … 91 91 x_half = (x_half_d + x_half_u)/2.0 92 92 else: 93 x_half = ( x_half_u * (y_half - y_half_d) \94 + x_half_d * (y_half_u - y_half)) \95 / (y_half_u - y_half_d)93 x_half = ((x_half_u * (y_half - y_half_d) 94 + x_half_d * (y_half_u - y_half)) 95 / (y_half_u - y_half_d)) 96 96 97 97 # Our slit length is half width, so just give half beam value -
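The cleanup in `slit_length_calculator.py` above reflows the linear interpolation that locates the x position where the profile crosses half its maximum height, between the bracketing points just above and just below the crossing. The same formula in isolation (a sketch; argument names mirror the diff's locals):

```python
def interp_x_at_half(x_half_u, x_half_d, y_half_u, y_half_d, y_half):
    """Linearly interpolate x at y = y_half between the point just
    before the crossing (x_half_u, y_half_u) and the point just
    after it (x_half_d, y_half_d)."""
    if y_half_u == y_half_d:
        # Degenerate flat segment: fall back to the midpoint,
        # as the calculator does.
        return (x_half_d + x_half_u) / 2.0
    return ((x_half_u * (y_half - y_half_d)
             + x_half_d * (y_half_u - y_half))
            / (y_half_u - y_half_d))
```

The slit length reported by the calculator is then half of the full width found this way.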
TabularUnified src/sas/sascalc/data_util/err1d.py ¶
rb699768 r9a5097c 8 8 """ 9 9 from __future__ import division # Get true division 10 import numpy 10 import numpy as np 11 11 12 12 … … 59 59 def exp(X, varX): 60 60 """Exponentiation with error propagation""" 61 Z = n umpy.exp(X)61 Z = np.exp(X) 62 62 varZ = varX * Z**2 63 63 return Z, varZ … … 66 66 def log(X, varX): 67 67 """Logarithm with error propagation""" 68 Z = n umpy.log(X)68 Z = np.log(X) 69 69 varZ = varX / X**2 70 70 return Z, varZ … … 73 73 # def pow(X,varX, Y,varY): 74 74 # Z = X**Y 75 # varZ = (Y**2 * varX/X**2 + varY * n umpy.log(X)**2) * Z**275 # varZ = (Y**2 * varX/X**2 + varY * np.log(X)**2) * Z**2 76 76 # return Z,varZ 77 77 # -
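The `err1d.py` diff above only swaps the `numpy` alias, but the two routines it touches encode standard first-order error propagation: for `Z = exp(X)` the variance scales as `varX * Z**2`, and for `Z = log(X)` as `varX / X**2`. A self-contained sketch of both:

```python
import numpy as np

def exp_err(X, varX):
    """Exponentiation with error propagation: var(e^X) = varX * (e^X)^2."""
    Z = np.exp(X)
    return Z, varX * Z**2

def log_err(X, varX):
    """Natural log with error propagation: var(ln X) = varX / X^2."""
    Z = np.log(X)
    return Z, varX / X**2
```

Both follow from the usual linearization var(f(X)) = (df/dX)^2 * varX, which assumes small, uncorrelated errors.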
TabularUnified src/sas/sascalc/data_util/formatnum.py ¶
rb699768 r9a5097c 40 40 41 41 import math 42 import numpy 42 import numpy as np 43 43 __all__ = ['format_uncertainty', 'format_uncertainty_pm', 44 44 'format_uncertainty_compact'] … … 102 102 """ 103 103 # Handle indefinite value 104 if n umpy.isinf(value):104 if np.isinf(value): 105 105 return "inf" if value > 0 else "-inf" 106 if n umpy.isnan(value):106 if np.isnan(value): 107 107 return "NaN" 108 108 109 109 # Handle indefinite uncertainty 110 if uncertainty is None or uncertainty <= 0 or n umpy.isnan(uncertainty):110 if uncertainty is None or uncertainty <= 0 or np.isnan(uncertainty): 111 111 return "%g" % value 112 if n umpy.isinf(uncertainty):112 if np.isinf(uncertainty): 113 113 if compact: 114 114 return "%.2g(inf)" % value … … 279 279 280 280 # non-finite values 281 assert value_str(-n umpy.inf,None) == "-inf"282 assert value_str(n umpy.inf,None) == "inf"283 assert value_str(n umpy.NaN,None) == "NaN"281 assert value_str(-np.inf,None) == "-inf" 282 assert value_str(np.inf,None) == "inf" 283 assert value_str(np.NaN,None) == "NaN" 284 284 285 285 # bad or missing uncertainty 286 assert value_str(-1.23567,n umpy.NaN) == "-1.23567"287 assert value_str(-1.23567,-n umpy.inf) == "-1.23567"286 assert value_str(-1.23567,np.NaN) == "-1.23567" 287 assert value_str(-1.23567,-np.inf) == "-1.23567" 288 288 assert value_str(-1.23567,-0.1) == "-1.23567" 289 289 assert value_str(-1.23567,0) == "-1.23567" 290 290 assert value_str(-1.23567,None) == "-1.23567" 291 assert value_str(-1.23567,n umpy.inf) == "-1.2(inf)"291 assert value_str(-1.23567,np.inf) == "-1.2(inf)" 292 292 293 293 def test_pm(): … … 410 410 411 411 # non-finite values 412 assert value_str(-n umpy.inf,None) == "-inf"413 assert value_str(n umpy.inf,None) == "inf"414 assert value_str(n umpy.NaN,None) == "NaN"412 assert value_str(-np.inf,None) == "-inf" 413 assert value_str(np.inf,None) == "inf" 414 assert value_str(np.NaN,None) == "NaN" 415 415 416 416 # bad or missing uncertainty 417 assert 
value_str(-1.23567,n umpy.NaN) == "-1.23567"418 assert value_str(-1.23567,-n umpy.inf) == "-1.23567"417 assert value_str(-1.23567,np.NaN) == "-1.23567" 418 assert value_str(-1.23567,-np.inf) == "-1.23567" 419 419 assert value_str(-1.23567,-0.1) == "-1.23567" 420 420 assert value_str(-1.23567,0) == "-1.23567" 421 421 assert value_str(-1.23567,None) == "-1.23567" 422 assert value_str(-1.23567,n umpy.inf) == "-1.2 +/- inf"422 assert value_str(-1.23567,np.inf) == "-1.2 +/- inf" 423 423 424 424 def test_default(): -
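The `formatnum.py` assertions above exercise the formatter's guards for indefinite values: infinities and NaN are returned as text, and a missing, zero, or negative uncertainty falls back to plain `%g` formatting. A simplified sketch of just that guard logic (the real formatter additionally computes significant digits and the compact `value(err)` form):

```python
import numpy as np

def guard_format(value, uncertainty):
    """Indefinite-value handling as in format_uncertainty (simplified)."""
    # Indefinite value: report it as text
    if np.isinf(value):
        return "inf" if value > 0 else "-inf"
    if np.isnan(value):
        return "NaN"
    # Indefinite or unusable uncertainty: plain %g on the value
    if uncertainty is None or uncertainty <= 0 or np.isnan(uncertainty):
        return "%g" % value
    return "%g +/- %g" % (value, uncertainty)
```

This reproduces the cases asserted in the diff, e.g. a bad uncertainty on `-1.23567` yields `"-1.23567"` unchanged.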
TabularUnified src/sas/sascalc/data_util/qsmearing.py ¶
r775e0b7 r9a5097c 9 9 #copyright 2008, University of Tennessee 10 10 ###################################################################### 11 import numpy12 11 import math 13 12 import logging … … 60 59 if data.dx is not None and data.isSesans: 61 60 #if data.dx[0] > 0.0: 62 if n umpy.size(data.dx[data.dx <= 0]) == 0:61 if np.size(data.dx[data.dx <= 0]) == 0: 63 62 _found_sesans = True 64 63 # if data.dx[0] <= 0.0: 65 if n umpy.size(data.dx[data.dx <= 0]) > 0:64 if np.size(data.dx[data.dx <= 0]) > 0: 66 65 raise ValueError('one or more of your dx values are negative, please check the data file!') 67 66 … … 121 120 self.resolution = resolution 122 121 if offset is None: 123 offset = n umpy.searchsorted(self.resolution.q_calc, self.resolution.q[0])122 offset = np.searchsorted(self.resolution.q_calc, self.resolution.q[0]) 124 123 self.offset = offset 125 124 … … 137 136 start, end = first_bin + self.offset, last_bin + self.offset 138 137 q_calc = self.resolution.q_calc 139 iq_calc = n umpy.empty_like(q_calc)138 iq_calc = np.empty_like(q_calc) 140 139 if start > 0: 141 140 iq_calc[:start] = self.model.evalDistribution(q_calc[:start]) … … 157 156 """ 158 157 q = self.resolution.q 159 first = n umpy.searchsorted(q, q_min)160 last = n umpy.searchsorted(q, q_max)158 first = np.searchsorted(q, q_min) 159 last = np.searchsorted(q, q_max) 161 160 return first, min(last,len(q)-1) 162 161 -
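The smearer diff above maps a `(q_min, q_max)` window onto indices of the stored, sorted q array with `np.searchsorted`, clamping the upper index to the last bin. The pattern in isolation (a sketch mirroring `_get_bin_range`):

```python
import numpy as np

def bin_range(q, q_min, q_max):
    """Return (first, last) indices bracketing [q_min, q_max] in a
    sorted q array, with last clamped to the final index."""
    first = np.searchsorted(q, q_min)
    last = np.searchsorted(q, q_max)
    return first, min(last, len(q) - 1)
```

`searchsorted` requires q to be sorted ascending; it returns the insertion point, which is why a q_max beyond the array must be clamped back to `len(q) - 1`.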
TabularUnified src/sas/sascalc/data_util/uncertainty.py ¶
rb699768 r9a5097c 17 17 from __future__ import division 18 18 19 import numpy 19 import numpy as np 20 20 import err1d 21 21 from formatnum import format_uncertainty … … 27 27 class Uncertainty(object): 28 28 # Make standard deviation available 29 def _getdx(self): return n umpy.sqrt(self.variance)29 def _getdx(self): return np.sqrt(self.variance) 30 30 def _setdx(self,dx): 31 31 # Direct operation … … 144 144 return self 145 145 def __abs__(self): 146 return Uncertainty(n umpy.abs(self.x),self.variance)146 return Uncertainty(np.abs(self.x),self.variance) 147 147 148 148 def __str__(self): 149 #return str(self.x)+" +/- "+str(n umpy.sqrt(self.variance))150 if n umpy.isscalar(self.x):151 return format_uncertainty(self.x,n umpy.sqrt(self.variance))149 #return str(self.x)+" +/- "+str(np.sqrt(self.variance)) 150 if np.isscalar(self.x): 151 return format_uncertainty(self.x,np.sqrt(self.variance)) 152 152 else: 153 153 return [format_uncertainty(v,dv) 154 for v,dv in zip(self.x,n umpy.sqrt(self.variance))]154 for v,dv in zip(self.x,np.sqrt(self.variance))] 155 155 def __repr__(self): 156 156 return "Uncertainty(%s,%s)"%(str(self.x),str(self.variance)) … … 287 287 # =============== vector operations ================ 288 288 # Slicing 289 z = Uncertainty(n umpy.array([1,2,3,4,5]),numpy.array([2,1,2,3,2]))289 z = Uncertainty(np.array([1,2,3,4,5]),np.array([2,1,2,3,2])) 290 290 assert z[2].x == 3 and z[2].variance == 2 291 291 assert (z[2:4].x == [3,4]).all() 292 292 assert (z[2:4].variance == [2,3]).all() 293 z[2:4] = Uncertainty(n umpy.array([8,7]),numpy.array([4,5]))293 z[2:4] = Uncertainty(np.array([8,7]),np.array([4,5])) 294 294 assert z[2].x == 8 and z[2].variance == 4 295 A = Uncertainty(n umpy.array([a.x]*2),numpy.array([a.variance]*2))296 B = Uncertainty(n umpy.array([b.x]*2),numpy.array([b.variance]*2))295 A = Uncertainty(np.array([a.x]*2),np.array([a.variance]*2)) 296 B = Uncertainty(np.array([b.x]*2),np.array([b.variance]*2)) 297 297 298 298 # TODO complete tests 
of copy and inplace operations for vectors and slices. -
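The `uncertainty.py` diff above exercises an `Uncertainty` object that carries a value and its variance through arithmetic, with the standard deviation exposed as `sqrt(variance)`. A minimal sketch of the class and the two propagation rules the tests rely on (Gaussian, uncorrelated errors; this is a stripped-down illustration, not the module's full class):

```python
import numpy as np

class Uncertainty(object):
    """Value with variance; standard deviation is sqrt(variance).
    Propagation assumes small, uncorrelated errors."""
    def __init__(self, x, variance):
        self.x, self.variance = x, variance

    @property
    def dx(self):
        return np.sqrt(self.variance)

    def __add__(self, other):
        # var(A + B) = var A + var B
        return Uncertainty(self.x + other.x,
                           self.variance + other.variance)

    def __mul__(self, other):
        # var(A*B) = (A*B)^2 * (varA/A^2 + varB/B^2)
        z = self.x * other.x
        return Uncertainty(z, z**2 * (self.variance / self.x**2
                                      + other.variance / other.x**2))
```

The vector tests in the diff work because the same expressions broadcast elementwise when `x` and `variance` are numpy arrays.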
TabularUnified src/sas/sascalc/dataloader/data_info.py ¶
r2ffe241 r9a5097c 23 23 #from sas.guitools.plottables import Data1D as plottable_1D 24 24 from sas.sascalc.data_util.uncertainty import Uncertainty 25 import numpy 25 import numpy as np 26 26 import math 27 27 … … 51 51 52 52 def __init__(self, x, y, dx=None, dy=None, dxl=None, dxw=None, lam=None, dlam=None): 53 self.x = n umpy.asarray(x)54 self.y = n umpy.asarray(y)53 self.x = np.asarray(x) 54 self.y = np.asarray(y) 55 55 if dx is not None: 56 self.dx = n umpy.asarray(dx)56 self.dx = np.asarray(dx) 57 57 if dy is not None: 58 self.dy = n umpy.asarray(dy)58 self.dy = np.asarray(dy) 59 59 if dxl is not None: 60 self.dxl = n umpy.asarray(dxl)60 self.dxl = np.asarray(dxl) 61 61 if dxw is not None: 62 self.dxw = n umpy.asarray(dxw)62 self.dxw = np.asarray(dxw) 63 63 if lam is not None: 64 self.lam = n umpy.asarray(lam)64 self.lam = np.asarray(lam) 65 65 if dlam is not None: 66 self.dlam = n umpy.asarray(dlam)66 self.dlam = np.asarray(dlam) 67 67 68 68 def xaxis(self, label, unit): … … 109 109 qy_data=None, q_data=None, mask=None, 110 110 dqx_data=None, dqy_data=None): 111 self.data = n umpy.asarray(data)112 self.qx_data = n umpy.asarray(qx_data)113 self.qy_data = n umpy.asarray(qy_data)114 self.q_data = n umpy.asarray(q_data)115 self.mask = n umpy.asarray(mask)116 self.err_data = n umpy.asarray(err_data)111 self.data = np.asarray(data) 112 self.qx_data = np.asarray(qx_data) 113 self.qy_data = np.asarray(qy_data) 114 self.q_data = np.asarray(q_data) 115 self.mask = np.asarray(mask) 116 self.err_data = np.asarray(err_data) 117 117 if dqx_data is not None: 118 self.dqx_data = n umpy.asarray(dqx_data)118 self.dqx_data = np.asarray(dqx_data) 119 119 if dqy_data is not None: 120 self.dqy_data = n umpy.asarray(dqy_data)120 self.dqy_data = np.asarray(dqy_data) 121 121 122 122 def xaxis(self, label, unit): … … 734 734 """ 735 735 def _check(v): 736 if (v.__class__ == list or v.__class__ == n umpy.ndarray) \736 if (v.__class__ == list or v.__class__ == np.ndarray) \ 737 737 and 
len(v) > 0 and min(v) > 0: 738 738 return True … … 752 752 753 753 if clone is None or not issubclass(clone.__class__, Data1D): 754 x = n umpy.zeros(length)755 dx = n umpy.zeros(length)756 y = n umpy.zeros(length)757 dy = n umpy.zeros(length)758 lam = n umpy.zeros(length)759 dlam = n umpy.zeros(length)754 x = np.zeros(length) 755 dx = np.zeros(length) 756 y = np.zeros(length) 757 dy = np.zeros(length) 758 lam = np.zeros(length) 759 dlam = np.zeros(length) 760 760 clone = Data1D(x, y, lam=lam, dx=dx, dy=dy, dlam=dlam) 761 761 … … 806 806 dy_other = other.dy 807 807 if other.dy == None or (len(other.dy) != len(other.y)): 808 dy_other = n umpy.zeros(len(other.y))808 dy_other = np.zeros(len(other.y)) 809 809 810 810 # Check that we have errors, otherwise create zero vector 811 811 dy = self.dy 812 812 if self.dy == None or (len(self.dy) != len(self.y)): 813 dy = n umpy.zeros(len(self.y))813 dy = np.zeros(len(self.y)) 814 814 815 815 return dy, dy_other … … 824 824 result.dxw = None 825 825 else: 826 result.dxw = n umpy.zeros(len(self.x))826 result.dxw = np.zeros(len(self.x)) 827 827 if self.dxl == None: 828 828 result.dxl = None 829 829 else: 830 result.dxl = n umpy.zeros(len(self.x))830 result.dxl = np.zeros(len(self.x)) 831 831 832 832 for i in range(len(self.x)): … … 886 886 result.dy = None 887 887 else: 888 result.dy = n umpy.zeros(len(self.x) + len(other.x))888 result.dy = np.zeros(len(self.x) + len(other.x)) 889 889 if self.dx == None or other.dx is None: 890 890 result.dx = None 891 891 else: 892 result.dx = n umpy.zeros(len(self.x) + len(other.x))892 result.dx = np.zeros(len(self.x) + len(other.x)) 893 893 if self.dxw == None or other.dxw is None: 894 894 result.dxw = None 895 895 else: 896 result.dxw = n umpy.zeros(len(self.x) + len(other.x))896 result.dxw = np.zeros(len(self.x) + len(other.x)) 897 897 if self.dxl == None or other.dxl is None: 898 898 result.dxl = None 899 899 else: 900 result.dxl = n umpy.zeros(len(self.x) + len(other.x))901 902 result.x = n 
umpy.append(self.x, other.x)900 result.dxl = np.zeros(len(self.x) + len(other.x)) 901 902 result.x = np.append(self.x, other.x) 903 903 #argsorting 904 ind = n umpy.argsort(result.x)904 ind = np.argsort(result.x) 905 905 result.x = result.x[ind] 906 result.y = n umpy.append(self.y, other.y)906 result.y = np.append(self.y, other.y) 907 907 result.y = result.y[ind] 908 908 if result.dy != None: 909 result.dy = n umpy.append(self.dy, other.dy)909 result.dy = np.append(self.dy, other.dy) 910 910 result.dy = result.dy[ind] 911 911 if result.dx is not None: 912 result.dx = n umpy.append(self.dx, other.dx)912 result.dx = np.append(self.dx, other.dx) 913 913 result.dx = result.dx[ind] 914 914 if result.dxw is not None: 915 result.dxw = n umpy.append(self.dxw, other.dxw)915 result.dxw = np.append(self.dxw, other.dxw) 916 916 result.dxw = result.dxw[ind] 917 917 if result.dxl is not None: 918 result.dxl = n umpy.append(self.dxl, other.dxl)918 result.dxl = np.append(self.dxl, other.dxl) 919 919 result.dxl = result.dxl[ind] 920 920 return result … … 970 970 971 971 if clone is None or not issubclass(clone.__class__, Data2D): 972 data = n umpy.zeros(length)973 err_data = n umpy.zeros(length)974 qx_data = n umpy.zeros(length)975 qy_data = n umpy.zeros(length)976 q_data = n umpy.zeros(length)977 mask = n umpy.zeros(length)972 data = np.zeros(length) 973 err_data = np.zeros(length) 974 qx_data = np.zeros(length) 975 qy_data = np.zeros(length) 976 q_data = np.zeros(length) 977 mask = np.zeros(length) 978 978 dqx_data = None 979 979 dqy_data = None … … 1031 1031 if other.err_data == None or \ 1032 1032 (len(other.err_data) != len(other.data)): 1033 err_other = n umpy.zeros(len(other.data))1033 err_other = np.zeros(len(other.data)) 1034 1034 1035 1035 # Check that we have errors, otherwise create zero vector … … 1037 1037 if self.err_data == None or \ 1038 1038 (len(self.err_data) != len(self.data)): 1039 err = n umpy.zeros(len(other.data))1039 err = np.zeros(len(other.data)) 1040 
1040 return err, err_other 1041 1041 … … 1049 1049 # First, check the data compatibility 1050 1050 dy, dy_other = self._validity_check(other) 1051 result = self.clone_without_data(n umpy.size(self.data))1051 result = self.clone_without_data(np.size(self.data)) 1052 1052 if self.dqx_data == None or self.dqy_data == None: 1053 1053 result.dqx_data = None 1054 1054 result.dqy_data = None 1055 1055 else: 1056 result.dqx_data = n umpy.zeros(len(self.data))1057 result.dqy_data = n umpy.zeros(len(self.data))1058 for i in range(n umpy.size(self.data)):1056 result.dqx_data = np.zeros(len(self.data)) 1057 result.dqy_data = np.zeros(len(self.data)) 1058 for i in range(np.size(self.data)): 1059 1059 result.data[i] = self.data[i] 1060 1060 if self.err_data is not None and \ 1061 numpy.size(self.data) == numpy.size(self.err_data):1061 np.size(self.data) == np.size(self.err_data): 1062 1062 result.err_data[i] = self.err_data[i] 1063 1063 if self.dqx_data is not None: … … 1118 1118 # First, check the data compatibility 1119 1119 self._validity_check_union(other) 1120 result = self.clone_without_data(n umpy.size(self.data) + \1121 n umpy.size(other.data))1120 result = self.clone_without_data(np.size(self.data) + \ 1121 np.size(other.data)) 1122 1122 result.xmin = self.xmin 1123 1123 result.xmax = self.xmax … … 1129 1129 result.dqy_data = None 1130 1130 else: 1131 result.dqx_data = n umpy.zeros(len(self.data) + \1132 numpy.size(other.data))1133 result.dqy_data = n umpy.zeros(len(self.data) + \1134 numpy.size(other.data))1135 1136 result.data = n umpy.append(self.data, other.data)1137 result.qx_data = n umpy.append(self.qx_data, other.qx_data)1138 result.qy_data = n umpy.append(self.qy_data, other.qy_data)1139 result.q_data = n umpy.append(self.q_data, other.q_data)1140 result.mask = n umpy.append(self.mask, other.mask)1131 result.dqx_data = np.zeros(len(self.data) + \ 1132 np.size(other.data)) 1133 result.dqy_data = np.zeros(len(self.data) + \ 1134 np.size(other.data)) 1135 1136 
result.data = np.append(self.data, other.data) 1137 result.qx_data = np.append(self.qx_data, other.qx_data) 1138 result.qy_data = np.append(self.qy_data, other.qy_data) 1139 result.q_data = np.append(self.q_data, other.q_data) 1140 result.mask = np.append(self.mask, other.mask) 1141 1141 if result.err_data is not None: 1142 result.err_data = n umpy.append(self.err_data, other.err_data)1142 result.err_data = np.append(self.err_data, other.err_data) 1143 1143 if self.dqx_data is not None: 1144 result.dqx_data = n umpy.append(self.dqx_data, other.dqx_data)1144 result.dqx_data = np.append(self.dqx_data, other.dqx_data) 1145 1145 if self.dqy_data is not None: 1146 result.dqy_data = n umpy.append(self.dqy_data, other.dqy_data)1146 result.dqy_data = np.append(self.dqy_data, other.dqy_data) 1147 1147 1148 1148 return result -
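The `Data1D` union above appends the two datasets column by column and then re-sorts every column by Q using the single permutation returned by `np.argsort`. The pattern in isolation, for one data column:

```python
import numpy as np

def merge_sorted(x1, y1, x2, y2):
    """Concatenate two (x, y) datasets and sort all columns by x,
    as in the Data1D union operation."""
    x = np.append(x1, x2)
    ind = np.argsort(x)            # one permutation sorts every column
    y = np.append(y1, y2)[ind]
    return x[ind], y
```

In the real class the same `ind` is reused for `dy`, `dx`, `dxw`, and `dxl` so that all error columns stay aligned with their Q values.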
TabularUnified src/sas/sascalc/dataloader/manipulations.py ¶
rb2b36932 rdd11014 80 80 81 81 """ 82 if data2d.data == None or data2d.x_bins == None or data2d.y_bins ==None:82 if data2d.data is None or data2d.x_bins is None or data2d.y_bins is None: 83 83 raise ValueError, "Can't convert this data: data=None..." 84 84 new_x = numpy.tile(data2d.x_bins, (len(data2d.y_bins), 1)) … … 90 90 qy_data = new_y.flatten() 91 91 q_data = numpy.sqrt(qx_data * qx_data + qy_data * qy_data) 92 if data2d.err_data ==None or numpy.any(data2d.err_data <= 0):92 if data2d.err_data is None or numpy.any(data2d.err_data <= 0): 93 93 new_err_data = numpy.sqrt(numpy.abs(new_data)) 94 94 else: -
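The `manipulations.py` change above from `== None` to `is None` is more than style for numpy arrays: `arr == None` broadcasts elementwise and returns a boolean array (whose truth value in an `if` is ambiguous), while `arr is None` is a single identity test. A quick demonstration:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0])

# `== None` broadcasts: the result is an elementwise boolean array,
# so `if data == None:` would raise "truth value ... is ambiguous".
elementwise = (data == None)  # noqa: E711 -- intentional, for illustration

# `is None` is a single identity test and is always safe in `if`:
missing = data is None
```

This is why the guard in `flatten_data_2d` now reads `data2d.data is None or ...` rather than comparing with `==`.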
TabularUnified src/sas/sascalc/dataloader/readers/IgorReader.py ¶
rb699768 rdd11014 13 13 ############################################################################# 14 14 import os 15 import numpy 16 import math 17 #import logging 15 18 16 from sas.sascalc.dataloader.data_info import Data2D 19 17 from sas.sascalc.dataloader.data_info import Detector 20 18 from sas.sascalc.dataloader.manipulations import reader2D_converter 19 import numpy as np 21 20 22 21 # Look for unit converter … … 40 39 """ Read file """ 41 40 if not os.path.isfile(filename): 42 raise ValueError, \ 43 "Specified file %s is not a regular file" % filename 44 45 # Read file 46 f = open(filename, 'r') 47 buf = f.read() 48 49 # Instantiate data object 41 raise ValueError("Specified file %s is not a regular " 42 "file" % filename) 43 50 44 output = Data2D() 45 51 46 output.filename = os.path.basename(filename) 52 47 detector = Detector() 53 if len(output.detector) > 0:54 print str(output.detector[0])48 if len(output.detector): 49 print(str(output.detector[0])) 55 50 output.detector.append(detector) 56 57 # Get content 58 dataStarted = False 59 60 lines = buf.split('\n') 61 itot = 0 62 x = [] 63 y = [] 64 65 ncounts = 0 66 67 xmin = None 68 xmax = None 69 ymin = None 70 ymax = None 71 72 i_x = 0 73 i_y = -1 74 i_tot_row = 0 75 76 isInfo = False 77 isCenter = False 78 79 data_conv_q = None 80 data_conv_i = None 81 82 if has_converter == True and output.Q_unit != '1/A': 51 52 data_conv_q = data_conv_i = None 53 54 if has_converter and output.Q_unit != '1/A': 83 55 data_conv_q = Converter('1/A') 84 56 # Test it 85 57 data_conv_q(1.0, output.Q_unit) 86 58 87 if has_converter == Trueand output.I_unit != '1/cm':59 if has_converter and output.I_unit != '1/cm': 88 60 data_conv_i = Converter('1/cm') 89 61 # Test it 90 62 data_conv_i(1.0, output.I_unit) 91 92 for line in lines: 93 94 # Find setup info line 95 if isInfo: 96 isInfo = False 97 line_toks = line.split() 98 # Wavelength in Angstrom 99 try: 100 wavelength = float(line_toks[1]) 101 except: 102 msg = "IgorReader: 
can't read this file, missing wavelength" 103 raise ValueError, msg 104 105 #Find # of bins in a row assuming the detector is square. 106 if dataStarted == True: 107 try: 108 value = float(line) 109 except: 110 # Found a non-float entry, skip it 111 continue 112 113 # Get total bin number 114 115 i_tot_row += 1 116 i_tot_row = math.ceil(math.sqrt(i_tot_row)) - 1 117 #print "i_tot", i_tot_row 118 size_x = i_tot_row # 192#128 119 size_y = i_tot_row # 192#128 120 output.data = numpy.zeros([size_x, size_y]) 121 output.err_data = numpy.zeros([size_x, size_y]) 122 123 #Read Header and 2D data 124 for line in lines: 125 # Find setup info line 126 if isInfo: 127 isInfo = False 128 line_toks = line.split() 129 # Wavelength in Angstrom 130 try: 131 wavelength = float(line_toks[1]) 132 except: 133 msg = "IgorReader: can't read this file, missing wavelength" 134 raise ValueError, msg 135 # Distance in meters 136 try: 137 distance = float(line_toks[3]) 138 except: 139 msg = "IgorReader: can't read this file, missing distance" 140 raise ValueError, msg 141 142 # Distance in meters 143 try: 144 transmission = float(line_toks[4]) 145 except: 146 msg = "IgorReader: can't read this file, " 147 msg += "missing transmission" 148 raise ValueError, msg 149 150 if line.count("LAMBDA") > 0: 151 isInfo = True 152 153 # Find center info line 154 if isCenter: 155 isCenter = False 156 line_toks = line.split() 157 158 # Center in bin number: Must substrate 1 because 159 #the index starts from 1 160 center_x = float(line_toks[0]) - 1 161 center_y = float(line_toks[1]) - 1 162 163 if line.count("BCENT") > 0: 164 isCenter = True 165 166 # Find data start 167 if line.count("***")>0: 168 dataStarted = True 169 170 # Check that we have all the info 171 if wavelength == None \ 172 or distance == None \ 173 or center_x == None \ 174 or center_y == None: 175 msg = "IgorReader:Missing information in data file" 176 raise ValueError, msg 177 178 if dataStarted == True: 179 try: 180 value = float(line) 181 
except: 182 # Found a non-float entry, skip it 183 continue 184 185 # Get bin number 186 if math.fmod(itot, i_tot_row) == 0: 187 i_x = 0 188 i_y += 1 189 else: 190 i_x += 1 191 192 output.data[i_y][i_x] = value 193 ncounts += 1 194 195 # Det 640 x 640 mm 196 # Q = 4pi/lambda sin(theta/2) 197 # Bin size is 0.5 cm 198 #REmoved +1 from theta = (i_x-center_x+1)*0.5 / distance 199 # / 100.0 and 200 #REmoved +1 from theta = (i_y-center_y+1)*0.5 / 201 # distance / 100.0 202 #ToDo: Need complete check if the following 203 # covert process is consistent with fitting.py. 204 theta = (i_x - center_x) * 0.5 / distance / 100.0 205 qx = 4.0 * math.pi / wavelength * math.sin(theta/2.0) 206 207 if has_converter == True and output.Q_unit != '1/A': 208 qx = data_conv_q(qx, units=output.Q_unit) 209 210 if xmin == None or qx < xmin: 211 xmin = qx 212 if xmax == None or qx > xmax: 213 xmax = qx 214 215 theta = (i_y - center_y) * 0.5 / distance / 100.0 216 qy = 4.0 * math.pi / wavelength * math.sin(theta / 2.0) 217 218 if has_converter == True and output.Q_unit != '1/A': 219 qy = data_conv_q(qy, units=output.Q_unit) 220 221 if ymin == None or qy < ymin: 222 ymin = qy 223 if ymax == None or qy > ymax: 224 ymax = qy 225 226 if not qx in x: 227 x.append(qx) 228 if not qy in y: 229 y.append(qy) 230 231 itot += 1 232 233 63 64 data_row = 0 65 wavelength = distance = center_x = center_y = None 66 dataStarted = isInfo = isCenter = False 67 68 with open(filename, 'r') as f: 69 for line in f: 70 data_row += 1 71 # Find setup info line 72 if isInfo: 73 isInfo = False 74 line_toks = line.split() 75 # Wavelength in Angstrom 76 try: 77 wavelength = float(line_toks[1]) 78 except ValueError: 79 msg = "IgorReader: can't read this file, missing wavelength" 80 raise ValueError(msg) 81 # Distance in meters 82 try: 83 distance = float(line_toks[3]) 84 except ValueError: 85 msg = "IgorReader: can't read this file, missing distance" 86 raise ValueError(msg) 87 88 # Distance in meters 89 try: 90 transmission 
= float(line_toks[4]) 91 except: 92 msg = "IgorReader: can't read this file, " 93 msg += "missing transmission" 94 raise ValueError(msg) 95 96 if line.count("LAMBDA"): 97 isInfo = True 98 99 # Find center info line 100 if isCenter: 101 isCenter = False 102 line_toks = line.split() 103 104 # Center in bin number: Must subtract 1 because 105 # the index starts from 1 106 center_x = float(line_toks[0]) - 1 107 center_y = float(line_toks[1]) - 1 108 109 if line.count("BCENT"): 110 isCenter = True 111 112 # Find data start 113 if line.count("***"): 114 # now have to continue to blank line 115 dataStarted = True 116 117 # Check that we have all the info 118 if (wavelength is None 119 or distance is None 120 or center_x is None 121 or center_y is None): 122 msg = "IgorReader:Missing information in data file" 123 raise ValueError(msg) 124 125 if dataStarted: 126 if len(line.rstrip()): 127 continue 128 else: 129 break 130 131 # The data is loaded in row major order (last index changing most 132 # rapidly). However, the original data is in column major order (first 133 # index changing most rapidly). The swap to column major order is done 134 # in reader2D_converter at the end of this method. 135 data = np.loadtxt(filename, skiprows=data_row) 136 size_x = size_y = int(np.rint(np.sqrt(data.size))) 137 output.data = np.reshape(data, (size_x, size_y)) 138 output.err_data = np.zeros_like(output.data) 139 140 # Det 640 x 640 mm 141 # Q = 4 * pi/lambda * sin(theta/2) 142 # Bin size is 0.5 cm 143 # Removed +1 from theta = (i_x - center_x + 1)*0.5 / distance 144 # / 100.0 and 145 # Removed +1 from theta = (i_y - center_y + 1)*0.5 / 146 # distance / 100.0 147 # ToDo: Need complete check if the following 148 # convert process is consistent with fitting.py. 149 150 # calculate qx, qy bin centers of each pixel in the image 151 theta = (np.arange(size_x) - center_x) * 0.5 / distance / 100. 
152 qx = 4 * np.pi / wavelength * np.sin(theta/2) 153 154 theta = (np.arange(size_y) - center_y) * 0.5 / distance / 100. 155 qy = 4 * np.pi / wavelength * np.sin(theta/2) 156 157 if has_converter and output.Q_unit != '1/A': 158 qx = data_conv_q(qx, units=output.Q_unit) 159 qy = data_conv_q(qx, units=output.Q_unit) 160 161 xmax = np.max(qx) 162 xmin = np.min(qx) 163 ymax = np.max(qy) 164 ymin = np.min(qy) 165 166 # calculate edge offset in q. 234 167 theta = 0.25 / distance / 100.0 235 xstep = 4.0 * math.pi / wavelength * math.sin(theta / 2.0)168 xstep = 4.0 * np.pi / wavelength * np.sin(theta / 2.0) 236 169 237 170 theta = 0.25 / distance / 100.0 238 ystep = 4.0 * math.pi/ wavelength * math.sin(theta / 2.0)171 ystep = 4.0 * np.pi/ wavelength * np.sin(theta / 2.0) 239 172 240 173 # Store all data ###################################### 241 174 # Store wavelength 242 if has_converter == Trueand output.source.wavelength_unit != 'A':175 if has_converter and output.source.wavelength_unit != 'A': 243 176 conv = Converter('A') 244 177 wavelength = conv(wavelength, units=output.source.wavelength_unit) … … 246 179 247 180 # Store distance 248 if has_converter == Trueand detector.distance_unit != 'm':181 if has_converter and detector.distance_unit != 'm': 249 182 conv = Converter('m') 250 183 distance = conv(distance, units=detector.distance_unit) … … 254 187 output.sample.transmission = transmission 255 188 256 # Store pixel size 189 # Store pixel size (mm) 257 190 pixel = 5.0 258 if has_converter == Trueand detector.pixel_size_unit != 'mm':191 if has_converter and detector.pixel_size_unit != 'mm': 259 192 conv = Converter('mm') 260 193 pixel = conv(pixel, units=detector.pixel_size_unit) … … 267 200 268 201 # Store limits of the image (2D array) 269 xmin = xmin -xstep / 2.0270 xmax = xmax +xstep / 2.0271 ymin = ymin -ystep / 2.0272 ymax = ymax +ystep / 2.0273 if has_converter == Trueand output.Q_unit != '1/A':202 xmin -= xstep / 2.0 203 xmax += xstep / 2.0 204 ymin -= ystep 
/ 2.0 205 ymax += ystep / 2.0 206 if has_converter and output.Q_unit != '1/A': 274 207 xmin = data_conv_q(xmin, units=output.Q_unit) 275 208 xmax = data_conv_q(xmax, units=output.Q_unit) … … 282 215 283 216 # Store x and y axis bin centers 284 output.x_bins = x285 output.y_bins = y217 output.x_bins = qx.tolist() 218 output.y_bins = qy.tolist() 286 219 287 220 # Units -
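The rewritten IgorReader computes all bin centres with vectorised numpy instead of the old per-pixel loop. A minimal standalone sketch of that replacement (the centre, distance and wavelength values below are hypothetical, not taken from any real data file) shows the two forms agree:

```python
import math

import numpy as np

# Hypothetical instrument geometry -- for illustration only.
size_x = 4          # detector bins along x
center_x = 1.5      # beam centre in bin units (0-indexed)
distance = 2.0      # sample-detector distance in metres
wavelength = 6.0    # wavelength in Angstrom

# Vectorised form, as in the new reader: 0.5 cm pixels, distance in m,
# so the small-angle theta is (i - center_x) * 0.5 / distance / 100.
theta = (np.arange(size_x) - center_x) * 0.5 / distance / 100.0
qx = 4.0 * np.pi / wavelength * np.sin(theta / 2.0)

# Equivalent scalar loop, as in the old reader.
qx_loop = [4.0 * math.pi / wavelength
           * math.sin((i - center_x) * 0.5 / distance / 100.0 / 2.0)
           for i in range(size_x)]

assert np.allclose(qx, qx_loop)
```

The same pattern gives `qy` from `np.arange(size_y) - center_y`.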
src/sas/sascalc/dataloader/readers/abs_reader.py
rb699768 r9a5097c
  9   9  ######################################################################
 10  10
 11      import numpy
     11  import numpy as np
 12  12  import os
 13  13  from sas.sascalc.dataloader.data_info import Data1D
  …   …
 53  53  buff = input_f.read()
 54  54  lines = buff.split('\n')
 55      x = numpy.zeros(0)
 56      y = numpy.zeros(0)
 57      dy = numpy.zeros(0)
 58      dx = numpy.zeros(0)
     55  x = np.zeros(0)
     56  y = np.zeros(0)
     57  dy = np.zeros(0)
     58  dx = np.zeros(0)
 59  59  output = Data1D(x, y, dy=dy, dx=dx)
 60  60  detector = Detector()
  …   …
204 204  _dy = data_conv_i(_dy, units=output.y_unit)
205 205
206      x = numpy.append(x, _x)
207      y = numpy.append(y, _y)
208      dy = numpy.append(dy, _dy)
209      dx = numpy.append(dx, _dx)
    206  x = np.append(x, _x)
    207  y = np.append(y, _y)
    208  dy = np.append(dy, _dy)
    209  dx = np.append(dx, _dx)
210 210
211 211  except:
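Aside from the `numpy` to `np` alias change, the abs reader keeps its grow-by-append accumulation pattern. A small sketch of that pattern, using made-up (Q, I) rows:

```python
import numpy as np

# The reader starts with empty arrays and grows them one parsed row at
# a time; np.append returns a NEW array on every call.
x = np.zeros(0)
y = np.zeros(0)

rows = [(0.01, 100.0), (0.02, 50.0), (0.03, 25.0)]  # hypothetical (Q, I) pairs
for _x, _y in rows:
    x = np.append(x, _x)
    y = np.append(y, _y)

assert x.tolist() == [0.01, 0.02, 0.03]
assert y.tolist() == [100.0, 50.0, 25.0]
```

Because each `np.append` copies the whole array, this is O(n²) over a file; collecting values in Python lists and converting once with `np.array` at the end is the cheaper idiom for large files.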
src/sas/sascalc/dataloader/readers/ascii_reader.py
rd2471870 r9a5097c 14 14 15 15 16 import numpy 16 import numpy as np 17 17 import os 18 18 from sas.sascalc.dataloader.data_info import Data1D … … 69 69 70 70 # Arrays for data storage 71 tx = n umpy.zeros(0)72 ty = n umpy.zeros(0)73 tdy = n umpy.zeros(0)74 tdx = n umpy.zeros(0)71 tx = np.zeros(0) 72 ty = np.zeros(0) 73 tdy = np.zeros(0) 74 tdx = np.zeros(0) 75 75 76 76 # The first good line of data will define whether … … 140 140 is_data == False: 141 141 try: 142 tx = n umpy.zeros(0)143 ty = n umpy.zeros(0)144 tdy = n umpy.zeros(0)145 tdx = n umpy.zeros(0)142 tx = np.zeros(0) 143 ty = np.zeros(0) 144 tdy = np.zeros(0) 145 tdx = np.zeros(0) 146 146 except: 147 147 pass 148 148 149 149 if has_error_dy == True: 150 tdy = n umpy.append(tdy, _dy)150 tdy = np.append(tdy, _dy) 151 151 if has_error_dx == True: 152 tdx = n umpy.append(tdx, _dx)153 tx = n umpy.append(tx, _x)154 ty = n umpy.append(ty, _y)152 tdx = np.append(tdx, _dx) 153 tx = np.append(tx, _x) 154 ty = np.append(ty, _y) 155 155 156 156 #To remember the # of columns on the current line … … 188 188 #Let's re-order the data to make cal. 189 189 # curve look better some cases 190 ind = n umpy.lexsort((ty, tx))191 x = n umpy.zeros(len(tx))192 y = n umpy.zeros(len(ty))193 dy = n umpy.zeros(len(tdy))194 dx = n umpy.zeros(len(tdx))190 ind = np.lexsort((ty, tx)) 191 x = np.zeros(len(tx)) 192 y = np.zeros(len(ty)) 193 dy = np.zeros(len(tdy)) 194 dx = np.zeros(len(tdx)) 195 195 output = Data1D(x, y, dy=dy, dx=dx) 196 196 self.filename = output.filename = basename … … 212 212 output.y = y[x != 0] 213 213 output.dy = dy[x != 0] if has_error_dy == True\ 214 else n umpy.zeros(len(output.y))214 else np.zeros(len(output.y)) 215 215 output.dx = dx[x != 0] if has_error_dx == True\ 216 else n umpy.zeros(len(output.x))216 else np.zeros(len(output.x)) 217 217 218 218 output.xaxis("\\rm{Q}", 'A^{-1}') -
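The ascii reader's re-ordering relies on `np.lexsort`, which sorts by the *last* key first: `np.lexsort((ty, tx))` orders primarily by `tx` and breaks ties with `ty`. A short sketch with made-up values:

```python
import numpy as np

# np.lexsort sorts by the LAST key first: here tx is the primary key
# and ty breaks ties, matching the reader's re-ordering of (Q, I) data.
tx = np.array([0.2, 0.1, 0.2, 0.1])
ty = np.array([5.0, 9.0, 4.0, 8.0])

ind = np.lexsort((ty, tx))
x = tx[ind]
y = ty[ind]

assert x.tolist() == [0.1, 0.1, 0.2, 0.2]
assert y.tolist() == [8.0, 9.0, 4.0, 5.0]
```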
src/sas/sascalc/dataloader/readers/cansas_reader.py
rc221349 r8434365 930 930 self._write_data(datainfo, entry_node) 931 931 # Transmission Spectrum Info 932 self._write_trans_spectrum(datainfo, entry_node) 932 # TODO: fix the writer to linearize all data, including T_spectrum 933 # self._write_trans_spectrum(datainfo, entry_node) 933 934 # Sample info 934 935 self._write_sample_info(datainfo, entry_node) -
src/sas/sascalc/dataloader/readers/cansas_reader_HDF5.py
rd0764bf rc94280c 9 9 import sys 10 10 11 from sas.sascalc.dataloader.data_info import plottable_1D, plottable_2D, Data1D, Data2D, DataInfo, Process, Aperture 12 from sas.sascalc.dataloader.data_info import Collimation, TransmissionSpectrum, Detector 11 from sas.sascalc.dataloader.data_info import plottable_1D, plottable_2D,\ 12 Data1D, Data2D, DataInfo, Process, Aperture, Collimation, \ 13 TransmissionSpectrum, Detector 13 14 from sas.sascalc.dataloader.data_info import combine_data_info_with_plottable 14 15 15 16 16 17 17 class Reader(): 18 18 """ 19 A class for reading in CanSAS v2.0 data files. The existing iteration opens Mantid generated HDF5 formatted files 20 with file extension .h5/.H5. Any number of data sets may be present within the file and any dimensionality of data 21 may be used. Currently 1D and 2D SAS data sets are supported, but future implementations will include 1D and 2D 22 SESANS data. 23 24 Any number of SASdata sets may be present in a SASentry and the data within can be either 1D I(Q) or 2D I(Qx, Qy). 19 A class for reading in CanSAS v2.0 data files. The existing iteration opens 20 Mantid generated HDF5 formatted files with file extension .h5/.H5. Any 21 number of data sets may be present within the file and any dimensionality 22 of data may be used. Currently 1D and 2D SAS data sets are supported, but 23 future implementations will include 1D and 2D SESANS data. 24 25 Any number of SASdata sets may be present in a SASentry and the data within 26 can be either 1D I(Q) or 2D I(Qx, Qy). 
25 27 26 28 Also supports reading NXcanSAS formatted HDF5 files … … 30 32 """ 31 33 32 # #CanSAS version34 # CanSAS version 33 35 cansas_version = 2.0 34 # #Logged warnings or messages36 # Logged warnings or messages 35 37 logging = None 36 # #List of errors for the current data set38 # List of errors for the current data set 37 39 errors = None 38 # #Raw file contents to be processed40 # Raw file contents to be processed 39 41 raw_data = None 40 # #Data info currently being read in42 # Data info currently being read in 41 43 current_datainfo = None 42 # #SASdata set currently being read in44 # SASdata set currently being read in 43 45 current_dataset = None 44 # #List of plottable1D objects that should be linked to the current_datainfo46 # List of plottable1D objects that should be linked to the current_datainfo 45 47 data1d = None 46 # #List of plottable2D objects that should be linked to the current_datainfo48 # List of plottable2D objects that should be linked to the current_datainfo 47 49 data2d = None 48 # #Data type name50 # Data type name 49 51 type_name = "CanSAS 2.0" 50 # #Wildcards52 # Wildcards 51 53 type = ["CanSAS 2.0 HDF5 Files (*.h5)|*.h5"] 52 # #List of allowed extensions54 # List of allowed extensions 53 55 ext = ['.h5', '.H5'] 54 # #Flag to bypass extension check56 # Flag to bypass extension check 55 57 allow_all = True 56 # #List of files to return58 # List of files to return 57 59 output = None 58 60 … … 64 66 :return: List of Data1D/2D objects and/or a list of errors. 
65 67 """ 66 # # Reinitialize the classwhen loading a new data file to reset all class variables68 # Reinitialize when loading a new data file to reset all class variables 67 69 self.reset_class_variables() 68 # #Check that the file exists70 # Check that the file exists 69 71 if os.path.isfile(filename): 70 72 basename = os.path.basename(filename) … … 72 74 # If the file type is not allowed, return empty list 73 75 if extension in self.ext or self.allow_all: 74 # #Load the data file76 # Load the data file 75 77 self.raw_data = h5py.File(filename, 'r') 76 # #Read in all child elements of top level SASroot78 # Read in all child elements of top level SASroot 77 79 self.read_children(self.raw_data, []) 78 # #Add the last data set to the list of outputs80 # Add the last data set to the list of outputs 79 81 self.add_data_set() 80 # #Close the data file82 # Close the data file 81 83 self.raw_data.close() 82 # #Return data set(s)84 # Return data set(s) 83 85 return self.output 84 86 … … 110 112 """ 111 113 112 # #Loop through each element of the parent and process accordingly114 # Loop through each element of the parent and process accordingly 113 115 for key in data.keys(): 114 # #Get all information for the current key116 # Get all information for the current key 115 117 value = data.get(key) 116 118 if value.attrs.get(u'canSAS_class') is not None: … … 126 128 self.parent_class = class_name 127 129 parent_list.append(key) 128 ## If this is a new sasentry, store the current data sets and create a fresh Data1D/2D object 130 # If a new sasentry, store the current data sets and create 131 # a fresh Data1D/2D object 129 132 if class_prog.match(u'SASentry'): 130 133 self.add_data_set(key) 131 134 elif class_prog.match(u'SASdata'): 132 135 self._initialize_new_data_set(parent_list) 133 # #Recursion step to access data within the group136 # Recursion step to access data within the group 134 137 self.read_children(value, parent_list) 135 138 self.add_intermediate() … … 137 140 
138 141 elif isinstance(value, h5py.Dataset): 139 # #If this is a dataset, store the data appropriately142 # If this is a dataset, store the data appropriately 140 143 data_set = data[key][:] 141 144 unit = self._get_unit(value) 142 145 143 # #I and Q Data146 # I and Q Data 144 147 if key == u'I': 145 if type(self.current_dataset) is plottable_2D:148 if isinstance(self.current_dataset, plottable_2D): 146 149 self.current_dataset.data = data_set 147 150 self.current_dataset.zaxis("Intensity", unit) … … 151 154 continue 152 155 elif key == u'Idev': 153 if type(self.current_dataset) is plottable_2D:156 if isinstance(self.current_dataset, plottable_2D): 154 157 self.current_dataset.err_data = data_set.flatten() 155 158 else: … … 158 161 elif key == u'Q': 159 162 self.current_dataset.xaxis("Q", unit) 160 if type(self.current_dataset) is plottable_2D:163 if isinstance(self.current_dataset, plottable_2D): 161 164 self.current_dataset.q = data_set.flatten() 162 165 else: … … 166 169 self.current_dataset.dx = data_set.flatten() 167 170 continue 171 elif key == u'dQw': 172 self.current_dataset.dxw = data_set.flatten() 173 continue 174 elif key == u'dQl': 175 self.current_dataset.dxl = data_set.flatten() 176 continue 168 177 elif key == u'Qy': 169 178 self.current_dataset.yaxis("Q_y", unit) … … 183 192 self.current_dataset.mask = data_set.flatten() 184 193 continue 194 # Transmission Spectrum 195 elif (key == u'T' 196 and self.parent_class == u'SAStransmission_spectrum'): 197 self.trans_spectrum.transmission = data_set.flatten() 198 continue 199 elif (key == u'Tdev' 200 and self.parent_class == u'SAStransmission_spectrum'): 201 self.trans_spectrum.transmission_deviation = \ 202 data_set.flatten() 203 continue 204 elif (key == u'lambda' 205 and self.parent_class == u'SAStransmission_spectrum'): 206 self.trans_spectrum.wavelength = data_set.flatten() 207 continue 185 208 186 209 for data_point in data_set: 187 # #Top Level Meta Data210 # Top Level Meta Data 188 211 if key == 
u'definition': 189 212 self.current_datainfo.meta_data['reader'] = data_point … … 201 224 self.current_datainfo.notes.append(data_point) 202 225 203 ## Sample Information 204 elif key == u'Title' and self.parent_class == u'SASsample': # CanSAS 2.0 format 226 # Sample Information 227 # CanSAS 2.0 format 228 elif key == u'Title' and self.parent_class == u'SASsample': 205 229 self.current_datainfo.sample.name = data_point 206 elif key == u'ID' and self.parent_class == u'SASsample': # NXcanSAS format 230 # NXcanSAS format 231 elif key == u'name' and self.parent_class == u'SASsample': 207 232 self.current_datainfo.sample.name = data_point 208 elif key == u'thickness' and self.parent_class == u'SASsample': 233 # NXcanSAS format 234 elif key == u'ID' and self.parent_class == u'SASsample': 235 self.current_datainfo.sample.name = data_point 236 elif (key == u'thickness' 237 and self.parent_class == u'SASsample'): 209 238 self.current_datainfo.sample.thickness = data_point 210 elif key == u'temperature' and self.parent_class == u'SASsample': 239 elif (key == u'temperature' 240 and self.parent_class == u'SASsample'): 211 241 self.current_datainfo.sample.temperature = data_point 212 elif key == u'transmission' and self.parent_class == u'SASsample': 242 elif (key == u'transmission' 243 and self.parent_class == u'SASsample'): 213 244 self.current_datainfo.sample.transmission = data_point 214 elif key == u'x_position' and self.parent_class == u'SASsample': 245 elif (key == u'x_position' 246 and self.parent_class == u'SASsample'): 215 247 self.current_datainfo.sample.position.x = data_point 216 elif key == u'y_position' and self.parent_class == u'SASsample': 248 elif (key == u'y_position' 249 and self.parent_class == u'SASsample'): 217 250 self.current_datainfo.sample.position.y = data_point 218 elif key == u'p olar_angle' and self.parent_class == u'SASsample':251 elif key == u'pitch' and self.parent_class == u'SASsample': 219 252 self.current_datainfo.sample.orientation.x = 
data_point 220 elif key == u'azimuthal_angle' and self.parent_class == u'SASsample': 253 elif key == u'yaw' and self.parent_class == u'SASsample': 254 self.current_datainfo.sample.orientation.y = data_point 255 elif key == u'roll' and self.parent_class == u'SASsample': 221 256 self.current_datainfo.sample.orientation.z = data_point 222 elif key == u'details' and self.parent_class == u'SASsample': 257 elif (key == u'details' 258 and self.parent_class == u'SASsample'): 223 259 self.current_datainfo.sample.details.append(data_point) 224 260 225 ## Instrumental Information 226 elif key == u'name' and self.parent_class == u'SASinstrument': 261 # Instrumental Information 262 elif (key == u'name' 263 and self.parent_class == u'SASinstrument'): 227 264 self.current_datainfo.instrument = data_point 228 265 elif key == u'name' and self.parent_class == u'SASdetector': … … 231 268 self.detector.distance = float(data_point) 232 269 self.detector.distance_unit = unit 233 elif key == u'slit_length' and self.parent_class == u'SASdetector': 270 elif (key == u'slit_length' 271 and self.parent_class == u'SASdetector'): 234 272 self.detector.slit_length = float(data_point) 235 273 self.detector.slit_length_unit = unit 236 elif key == u'x_position' and self.parent_class == u'SASdetector': 274 elif (key == u'x_position' 275 and self.parent_class == u'SASdetector'): 237 276 self.detector.offset.x = float(data_point) 238 277 self.detector.offset_unit = unit 239 elif key == u'y_position' and self.parent_class == u'SASdetector': 278 elif (key == u'y_position' 279 and self.parent_class == u'SASdetector'): 240 280 self.detector.offset.y = float(data_point) 241 281 self.detector.offset_unit = unit 242 elif key == u'polar_angle' and self.parent_class == u'SASdetector': 282 elif (key == u'pitch' 283 and self.parent_class == u'SASdetector'): 243 284 self.detector.orientation.x = float(data_point) 244 285 self.detector.orientation_unit = unit 245 elif key == u' azimuthal_angle' and 
self.parent_class == u'SASdetector':286 elif key == u'roll' and self.parent_class == u'SASdetector': 246 287 self.detector.orientation.z = float(data_point) 247 288 self.detector.orientation_unit = unit 248 elif key == u'beam_center_x' and self.parent_class == u'SASdetector': 289 elif key == u'yaw' and self.parent_class == u'SASdetector': 290 self.detector.orientation.y = float(data_point) 291 self.detector.orientation_unit = unit 292 elif (key == u'beam_center_x' 293 and self.parent_class == u'SASdetector'): 249 294 self.detector.beam_center.x = float(data_point) 250 295 self.detector.beam_center_unit = unit 251 elif key == u'beam_center_y' and self.parent_class == u'SASdetector': 296 elif (key == u'beam_center_y' 297 and self.parent_class == u'SASdetector'): 252 298 self.detector.beam_center.y = float(data_point) 253 299 self.detector.beam_center_unit = unit 254 elif key == u'x_pixel_size' and self.parent_class == u'SASdetector': 300 elif (key == u'x_pixel_size' 301 and self.parent_class == u'SASdetector'): 255 302 self.detector.pixel_size.x = float(data_point) 256 303 self.detector.pixel_size_unit = unit 257 elif key == u'y_pixel_size' and self.parent_class == u'SASdetector': 304 elif (key == u'y_pixel_size' 305 and self.parent_class == u'SASdetector'): 258 306 self.detector.pixel_size.y = float(data_point) 259 307 self.detector.pixel_size_unit = unit 260 elif key == u'SSD' and self.parent_class == u'SAScollimation': 308 elif (key == u'distance' 309 and self.parent_class == u'SAScollimation'): 261 310 self.collimation.length = data_point 262 311 self.collimation.length_unit = unit 263 elif key == u'name' and self.parent_class == u'SAScollimation': 312 elif (key == u'name' 313 and self.parent_class == u'SAScollimation'): 264 314 self.collimation.name = data_point 265 266 ## Process Information 267 elif key == u'name' and self.parent_class == u'SASprocess': 315 elif (key == u'shape' 316 and self.parent_class == u'SASaperture'): 317 self.aperture.shape = data_point 
318 elif (key == u'x_gap' 319 and self.parent_class == u'SASaperture'): 320 self.aperture.size.x = data_point 321 elif (key == u'y_gap' 322 and self.parent_class == u'SASaperture'): 323 self.aperture.size.y = data_point 324 325 # Process Information 326 elif (key == u'Title' 327 and self.parent_class == u'SASprocess'): # CanSAS 2.0 268 328 self.process.name = data_point 269 elif key == u'Title' and self.parent_class == u'SASprocess': # CanSAS 2.0 format 329 elif (key == u'name' 330 and self.parent_class == u'SASprocess'): # NXcanSAS 270 331 self.process.name = data_point 271 elif key == u'name' and self.parent_class == u'SASprocess': # NXcanSAS format 272 self.process.name = data_point 273 elif key == u'description' and self.parent_class == u'SASprocess': 332 elif (key == u'description' 333 and self.parent_class == u'SASprocess'): 274 334 self.process.description = data_point 275 335 elif key == u'date' and self.parent_class == u'SASprocess': 276 336 self.process.date = data_point 337 elif key == u'term' and self.parent_class == u'SASprocess': 338 self.process.term = data_point 277 339 elif self.parent_class == u'SASprocess': 278 340 self.process.notes.append(data_point) 279 341 280 ## Transmission Spectrum 281 elif key == u'T' and self.parent_class == u'SAStransmission_spectrum': 282 self.trans_spectrum.transmission.append(data_point) 283 elif key == u'Tdev' and self.parent_class == u'SAStransmission_spectrum': 284 self.trans_spectrum.transmission_deviation.append(data_point) 285 elif key == u'lambda' and self.parent_class == u'SAStransmission_spectrum': 286 self.trans_spectrum.wavelength.append(data_point) 287 288 ## Source 289 elif key == u'wavelength' and self.parent_class == u'SASdata': 342 # Source 343 elif (key == u'wavelength' 344 and self.parent_class == u'SASdata'): 290 345 self.current_datainfo.source.wavelength = data_point 291 346 self.current_datainfo.source.wavelength_unit = unit 292 elif key == u'incident_wavelength' and self.parent_class == 
u'SASsource': 347 elif (key == u'incident_wavelength' 348 and self.parent_class == 'SASsource'): 293 349 self.current_datainfo.source.wavelength = data_point 294 350 self.current_datainfo.source.wavelength_unit = unit 295 elif key == u'wavelength_max' and self.parent_class == u'SASsource': 351 elif (key == u'wavelength_max' 352 and self.parent_class == u'SASsource'): 296 353 self.current_datainfo.source.wavelength_max = data_point 297 354 self.current_datainfo.source.wavelength_max_unit = unit 298 elif key == u'wavelength_min' and self.parent_class == u'SASsource': 355 elif (key == u'wavelength_min' 356 and self.parent_class == u'SASsource'): 299 357 self.current_datainfo.source.wavelength_min = data_point 300 358 self.current_datainfo.source.wavelength_min_unit = unit 301 elif key == u'wavelength_spread' and self.parent_class == u'SASsource': 302 self.current_datainfo.source.wavelength_spread = data_point 303 self.current_datainfo.source.wavelength_spread_unit = unit 304 elif key == u'beam_size_x' and self.parent_class == u'SASsource': 359 elif (key == u'incident_wavelength_spread' 360 and self.parent_class == u'SASsource'): 361 self.current_datainfo.source.wavelength_spread = \ 362 data_point 363 self.current_datainfo.source.wavelength_spread_unit = \ 364 unit 365 elif (key == u'beam_size_x' 366 and self.parent_class == u'SASsource'): 305 367 self.current_datainfo.source.beam_size.x = data_point 306 368 self.current_datainfo.source.beam_size_unit = unit 307 elif key == u'beam_size_y' and self.parent_class == u'SASsource': 369 elif (key == u'beam_size_y' 370 and self.parent_class == u'SASsource'): 308 371 self.current_datainfo.source.beam_size.y = data_point 309 372 self.current_datainfo.source.beam_size_unit = unit 310 elif key == u'beam_shape' and self.parent_class == u'SASsource': 373 elif (key == u'beam_shape' 374 and self.parent_class == u'SASsource'): 311 375 self.current_datainfo.source.beam_shape = data_point 312 elif key == u'radiation' and 
self.parent_class == u'SASsource': 376 elif (key == u'radiation' 377 and self.parent_class == u'SASsource'): 313 378 self.current_datainfo.source.radiation = data_point 314 elif key == u'transmission' and self.parent_class == u'SASdata': 379 elif (key == u'transmission' 380 and self.parent_class == u'SASdata'): 315 381 self.current_datainfo.sample.transmission = data_point 316 382 317 # #Everything else goes in meta_data383 # Everything else goes in meta_data 318 384 else: 319 new_key = self._create_unique_key(self.current_datainfo.meta_data, key) 385 new_key = self._create_unique_key( 386 self.current_datainfo.meta_data, key) 320 387 self.current_datainfo.meta_data[new_key] = data_point 321 388 322 389 else: 323 # #I don't know if this reachable code390 # I don't know if this reachable code 324 391 self.errors.add("ShouldNeverHappenException") 325 392 326 393 def add_intermediate(self): 327 394 """ 328 This method stores any intermediate objects within the final data set after fully reading the set. 329 330 :param parent: The NXclass name for the h5py Group object that just finished being processed 395 This method stores any intermediate objects within the final data set 396 after fully reading the set. 
397 398 :param parent: The NXclass name for the h5py Group object that just 399 finished being processed 331 400 """ 332 401 … … 347 416 self.aperture = Aperture() 348 417 elif self.parent_class == u'SASdata': 349 if type(self.current_dataset) is plottable_2D:418 if isinstance(self.current_dataset, plottable_2D): 350 419 self.data2d.append(self.current_dataset) 351 elif type(self.current_dataset) is plottable_1D:420 elif isinstance(self.current_dataset, plottable_1D): 352 421 self.data1d.append(self.current_dataset) 353 422 354 423 def final_data_cleanup(self): 355 424 """ 356 Does some final cleanup and formatting on self.current_datainfo and all data1D and data2D objects and then 357 combines the data and info into Data1D and Data2D objects 358 """ 359 360 ## Type cast data arrays to float64 425 Does some final cleanup and formatting on self.current_datainfo and 426 all data1D and data2D objects and then combines the data and info into 427 Data1D and Data2D objects 428 """ 429 430 # Type cast data arrays to float64 361 431 if len(self.current_datainfo.trans_spectrum) > 0: 362 432 spectrum_list = [] … … 364 434 spectrum.transmission = np.delete(spectrum.transmission, [0]) 365 435 spectrum.transmission = spectrum.transmission.astype(np.float64) 366 spectrum.transmission_deviation = np.delete(spectrum.transmission_deviation, [0]) 367 spectrum.transmission_deviation = spectrum.transmission_deviation.astype(np.float64) 436 spectrum.transmission_deviation = np.delete( 437 spectrum.transmission_deviation, [0]) 438 spectrum.transmission_deviation = \ 439 spectrum.transmission_deviation.astype(np.float64) 368 440 spectrum.wavelength = np.delete(spectrum.wavelength, [0]) 369 441 spectrum.wavelength = spectrum.wavelength.astype(np.float64) … … 372 444 self.current_datainfo.trans_spectrum = spectrum_list 373 445 374 # #Append errors to dataset and reset class errors446 # Append errors to dataset and reset class errors 375 447 self.current_datainfo.errors = self.errors 376 
448 self.errors.clear() 377 449 378 # #Combine all plottables with datainfo and append each to output379 # #Type cast data arrays to float64 and find min/max as appropriate450 # Combine all plottables with datainfo and append each to output 451 # Type cast data arrays to float64 and find min/max as appropriate 380 452 for dataset in self.data2d: 381 453 dataset.data = dataset.data.astype(np.float64) … … 397 469 zeros = np.ones(dataset.data.size, dtype=bool) 398 470 try: 399 for i in range 471 for i in range(0, dataset.mask.size - 1): 400 472 zeros[i] = dataset.mask[i] 401 473 except: 402 474 self.errors.add(sys.exc_value) 403 475 dataset.mask = zeros 404 # #Calculate the actual Q matrix476 # Calculate the actual Q matrix 405 477 try: 406 478 if dataset.q_data.size <= 1: 407 dataset.q_data = np.sqrt(dataset.qx_data * dataset.qx_data + dataset.qy_data * dataset.qy_data) 479 dataset.q_data = np.sqrt(dataset.qx_data 480 * dataset.qx_data 481 + dataset.qy_data 482 * dataset.qy_data) 408 483 except: 409 484 dataset.q_data = None … … 415 490 dataset.data = dataset.data.flatten() 416 491 417 final_dataset = combine_data_info_with_plottable(dataset, self.current_datainfo) 492 final_dataset = combine_data_info_with_plottable( 493 dataset, self.current_datainfo) 418 494 self.output.append(final_dataset) 419 495 … … 435 511 if dataset.dy is not None: 436 512 dataset.dy = dataset.dy.astype(np.float64) 437 final_dataset = combine_data_info_with_plottable(dataset, self.current_datainfo) 513 final_dataset = combine_data_info_with_plottable( 514 dataset, self.current_datainfo) 438 515 self.output.append(final_dataset) 439 516 440 517 def add_data_set(self, key=""): 441 518 """ 442 Adds the current_dataset to the list of outputs after preforming final processing on the data and then calls a 443 private method to generate a new data set. 
519 Adds the current_dataset to the list of outputs after preforming final 520 processing on the data and then calls a private method to generate a 521 new data set. 444 522 445 523 :param key: NeXus group name for current tree level … … 453 531 454 532 455 def _initialize_new_data_set(self, parent_list = None): 456 """ 457 A private class method to generate a new 1D or 2D data object based on the type of data within the set. 458 Outside methods should call add_data_set() to be sure any existing data is stored properly. 533 def _initialize_new_data_set(self, parent_list=None): 534 """ 535 A private class method to generate a new 1D or 2D data object based on 536 the type of data within the set. Outside methods should call 537 add_data_set() to be sure any existing data is stored properly. 459 538 460 539 :param parent_list: List of names of parent elements … … 473 552 def _find_intermediate(self, parent_list, basename=""): 474 553 """ 475 A private class used to find an entry by either using a direct key or knowing the approximate basename. 476 477 :param parent_list: List of parents to the current level in the HDF5 file 554 A private class used to find an entry by either using a direct key or 555 knowing the approximate basename. 556 557 :param parent_list: List of parents nodes in the HDF5 file 478 558 :param basename: Approximate name of an entry to search for 479 559 :return: … … 486 566 top = top.get(parent) 487 567 for key in top.keys(): 488 if (key_prog.match(key)):568 if key_prog.match(key): 489 569 entry = True 490 570 break … … 516 596 """ 517 597 unit = value.attrs.get(u'units') 518 if unit ==None:598 if unit is None: 519 599 unit = value.attrs.get(u'unit') 520 # #Convert the unit formats600 # Convert the unit formats 521 601 if unit == "1/A": 522 602 unit = "A^{-1}" -
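When a 2D data set carries no stored `q_data`, the HDF5 reader's `final_data_cleanup` rebuilds |q| from the flattened component arrays. Reduced to its core, with toy component values:

```python
import numpy as np

# Sketch of the reader's fallback: |q| rebuilt from the flattened
# qx, qy component arrays (a 3-4-5 triangle makes the result obvious).
qx_data = np.array([0.0, 3.0, 0.0, 3.0])
qy_data = np.array([0.0, 0.0, 4.0, 4.0])

q_data = np.sqrt(qx_data * qx_data + qy_data * qy_data)

assert q_data.tolist() == [0.0, 3.0, 4.0, 5.0]
```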
src/sas/sascalc/dataloader/readers/danse_reader.py
rb699768 r9a5097c 15 15 import os 16 16 import sys 17 import numpy 17 import numpy as np 18 18 import logging 19 19 from sas.sascalc.dataloader.data_info import Data2D, Detector … … 79 79 output.detector.append(detector) 80 80 81 output.data = n umpy.zeros([size_x,size_y])82 output.err_data = n umpy.zeros([size_x, size_y])81 output.data = np.zeros([size_x,size_y]) 82 output.err_data = np.zeros([size_x, size_y]) 83 83 84 84 data_conv_q = None -
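Most of this changeset simply swaps `import numpy` for `import numpy as np`. The alias is purely cosmetic: both names bind the same module object, so only the spelling at call sites changes, as this sketch shows:

```python
import numpy
import numpy as np

# The alias binds the same module object, so numpy.zeros and np.zeros
# are literally the same function.
same_module = np is numpy
same_function = np.zeros is numpy.zeros
```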
src/sas/sascalc/dataloader/readers/hfir1d_reader.py
rb699768 r9a5097c 9 9 #copyright 2008, University of Tennessee 10 10 ###################################################################### 11 import numpy 11 import numpy as np 12 12 import os 13 13 from sas.sascalc.dataloader.data_info import Data1D … … 52 52 buff = input_f.read() 53 53 lines = buff.split('\n') 54 x = n umpy.zeros(0)55 y = n umpy.zeros(0)56 dx = n umpy.zeros(0)57 dy = n umpy.zeros(0)54 x = np.zeros(0) 55 y = np.zeros(0) 56 dx = np.zeros(0) 57 dy = np.zeros(0) 58 58 output = Data1D(x, y, dx=dx, dy=dy) 59 59 self.filename = output.filename = basename … … 88 88 _dy = data_conv_i(_dy, units=output.y_unit) 89 89 90 x = n umpy.append(x, _x)91 y = n umpy.append(y, _y)92 dx = n umpy.append(dx, _dx)93 dy = n umpy.append(dy, _dy)90 x = np.append(x, _x) 91 y = np.append(y, _y) 92 dx = np.append(dx, _dx) 93 dy = np.append(dy, _dy) 94 94 except: 95 95 # Couldn't parse this line, skip it -
src/sas/sascalc/dataloader/readers/red2d_reader.py
rb699768 r9a5097c 10 10 ###################################################################### 11 11 import os 12 import numpy 12 import numpy as np 13 13 import math 14 14 from sas.sascalc.dataloader.data_info import Data2D, Detector … … 198 198 break 199 199 # Make numpy array to remove header lines using index 200 lines_array = n umpy.array(lines)200 lines_array = np.array(lines) 201 201 202 202 # index for lines_array 203 lines_index = n umpy.arange(len(lines))203 lines_index = np.arange(len(lines)) 204 204 205 205 # get the data lines … … 225 225 226 226 # numpy array form 227 data_array = n umpy.array(data_list1)227 data_array = np.array(data_list1) 228 228 # Redimesion based on the row_num and col_num, 229 229 #otherwise raise an error. … … 235 235 ## Get the all data: Let's HARDcoding; Todo find better way 236 236 # Defaults 237 dqx_data = n umpy.zeros(0)238 dqy_data = n umpy.zeros(0)239 err_data = n umpy.ones(row_num)240 qz_data = n umpy.zeros(row_num)241 mask = n umpy.ones(row_num, dtype=bool)237 dqx_data = np.zeros(0) 238 dqy_data = np.zeros(0) 239 err_data = np.ones(row_num) 240 qz_data = np.zeros(row_num) 241 mask = np.ones(row_num, dtype=bool) 242 242 # Get from the array 243 243 qx_data = data_point[0] … … 254 254 dqy_data = data_point[(5 + ver)] 255 255 #if col_num > (6 + ver): mask[data_point[(6 + ver)] < 1] = False 256 q_data = n umpy.sqrt(qx_data*qx_data+qy_data*qy_data+qz_data*qz_data)256 q_data = np.sqrt(qx_data*qx_data+qy_data*qy_data+qz_data*qz_data) 257 257 258 258 # Extra protection(it is needed for some data files): … … 262 262 263 263 # Store limits of the image in q space 264 xmin = n umpy.min(qx_data)265 xmax = n umpy.max(qx_data)266 ymin = n umpy.min(qy_data)267 ymax = n umpy.max(qy_data)264 xmin = np.min(qx_data) 265 xmax = np.max(qx_data) 266 ymin = np.min(qy_data) 267 ymax = np.max(qy_data) 268 268 269 269 # units … … 287 287 288 288 # store x and y axis bin centers in q space 289 x_bins = n umpy.arange(xmin, xmax + xstep, xstep)290 
y_bins = n umpy.arange(ymin, ymax + ystep, ystep)289 x_bins = np.arange(xmin, xmax + xstep, xstep) 290 y_bins = np.arange(ymin, ymax + ystep, ystep) 291 291 292 292 # get the limits of q values … … 300 300 output.data = data 301 301 if (err_data == 1).all(): 302 output.err_data = n umpy.sqrt(numpy.abs(data))302 output.err_data = np.sqrt(np.abs(data)) 303 303 output.err_data[output.err_data == 0.0] = 1.0 304 304 else: … … 335 335 # tranfer the comp. to cartesian coord. for newer version. 336 336 if ver != 1: 337 diag = n umpy.sqrt(qx_data * qx_data + qy_data * qy_data)337 diag = np.sqrt(qx_data * qx_data + qy_data * qy_data) 338 338 cos_th = qx_data / diag 339 339 sin_th = qy_data / diag 340 output.dqx_data = n umpy.sqrt((dqx_data * cos_th) * \340 output.dqx_data = np.sqrt((dqx_data * cos_th) * \ 341 341 (dqx_data * cos_th) \ 342 342 + (dqy_data * sin_th) * \ 343 343 (dqy_data * sin_th)) 344 output.dqy_data = n umpy.sqrt((dqx_data * sin_th) * \344 output.dqy_data = np.sqrt((dqx_data * sin_th) * \ 345 345 (dqx_data * sin_th) \ 346 346 + (dqy_data * cos_th) * \ -
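The `ver != 1` branch above projects the polar resolution components (`dqx_data`, `dqy_data`) onto cartesian axes using the direction cosines of each q-point. A small sketch of that projection with invented sample values:

```python
import numpy as np

# Made-up sample points: one on the qx axis, one at 45 degrees
qx_data = np.array([1.0, 1.0])
qy_data = np.array([0.0, 1.0])
dqx_data = np.array([0.1, 0.1])
dqy_data = np.array([0.2, 0.2])

# Same projection the diff applies for newer file versions
diag = np.sqrt(qx_data * qx_data + qy_data * qy_data)
cos_th = qx_data / diag
sin_th = qy_data / diag
dqx_cart = np.sqrt((dqx_data * cos_th) ** 2 + (dqy_data * sin_th) ** 2)
dqy_cart = np.sqrt((dqx_data * sin_th) ** 2 + (dqy_data * cos_th) ** 2)
```

On the qx axis the projection is the identity; at 45 degrees both components mix symmetrically.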
src/sas/sascalc/dataloader/readers/sesans_reader.py
r7caf3e5 r9a5097c 6 6 Jurrian Bakker 7 7 """ 8 import numpy 8 import numpy as np 9 9 import os 10 10 from sas.sascalc.dataloader.data_info import Data1D … … 60 60 buff = input_f.read() 61 61 lines = buff.splitlines() 62 x = n umpy.zeros(0)63 y = n umpy.zeros(0)64 dy = n umpy.zeros(0)65 lam = n umpy.zeros(0)66 dlam = n umpy.zeros(0)67 dx = n umpy.zeros(0)62 x = np.zeros(0) 63 y = np.zeros(0) 64 dy = np.zeros(0) 65 lam = np.zeros(0) 66 dlam = np.zeros(0) 67 dx = np.zeros(0) 68 68 69 69 #temp. space to sort data 70 tx = n umpy.zeros(0)71 ty = n umpy.zeros(0)72 tdy = n umpy.zeros(0)73 tlam = n umpy.zeros(0)74 tdlam = n umpy.zeros(0)75 tdx = n umpy.zeros(0)70 tx = np.zeros(0) 71 ty = np.zeros(0) 72 tdy = np.zeros(0) 73 tlam = np.zeros(0) 74 tdlam = np.zeros(0) 75 tdx = np.zeros(0) 76 76 output = Data1D(x=x, y=y, lam=lam, dy=dy, dx=dx, dlam=dlam, isSesans=True) 77 77 self.filename = output.filename = basename … … 128 128 129 129 x,y,lam,dy,dx,dlam = [ 130 numpy.asarray(v, 'double')130 np.asarray(v, 'double') 131 131 for v in (x,y,lam,dy,dx,dlam) 132 132 ] -
src/sas/sascalc/dataloader/readers/tiff_reader.py
rb699768 r9a5097c 13 13 import logging 14 14 import os 15 import numpy 15 import numpy as np 16 16 from sas.sascalc.dataloader.data_info import Data2D 17 17 from sas.sascalc.dataloader.manipulations import reader2D_converter … … 56 56 57 57 # Initiazed the output data object 58 output.data = n umpy.zeros([im.size[0], im.size[1]])59 output.err_data = n umpy.zeros([im.size[0], im.size[1]])60 output.mask = n umpy.ones([im.size[0], im.size[1]], dtype=bool)58 output.data = np.zeros([im.size[0], im.size[1]]) 59 output.err_data = np.zeros([im.size[0], im.size[1]]) 60 output.mask = np.ones([im.size[0], im.size[1]], dtype=bool) 61 61 62 62 # Initialize … … 94 94 output.x_bins = x_vals 95 95 output.y_bins = y_vals 96 output.qx_data = n umpy.array(x_vals)97 output.qy_data = n umpy.array(y_vals)96 output.qx_data = np.array(x_vals) 97 output.qy_data = np.array(y_vals) 98 98 output.xmin = 0 99 99 output.xmax = im.size[0] - 1 -
src/sas/sascalc/fit/AbstractFitEngine.py
ra9f579c r9a5097c 4 4 import sys 5 5 import math 6 import numpy 6 import numpy as np 7 7 8 8 from sas.sascalc.dataloader.data_info import Data1D … … 162 162 # constant, or dy data 163 163 if dy is None or dy == [] or dy.all() == 0: 164 self.dy = n umpy.ones(len(y))164 self.dy = np.ones(len(y)) 165 165 else: 166 self.dy = n umpy.asarray(dy).copy()166 self.dy = np.asarray(dy).copy() 167 167 168 168 ## Min Q-value 169 169 #Skip the Q=0 point, especially when y(q=0)=None at x[0]. 170 170 if min(self.x) == 0.0 and self.x[0] == 0 and\ 171 not n umpy.isfinite(self.y[0]):171 not np.isfinite(self.y[0]): 172 172 self.qmin = min(self.x[self.x != 0]) 173 173 else: … … 188 188 # Skip Q=0 point, (especially for y(q=0)=None at x[0]). 189 189 # ToDo: Find better way to do it. 190 if qmin == 0.0 and not n umpy.isfinite(self.y[qmin]):190 if qmin == 0.0 and not np.isfinite(self.y[qmin]): 191 191 self.qmin = min(self.x[self.x != 0]) 192 192 elif qmin != None: … … 239 239 """ 240 240 # Compute theory data f(x) 241 fx = n umpy.zeros(len(self.x))241 fx = np.zeros(len(self.x)) 242 242 fx[self.idx_unsmeared] = fn(self.x[self.idx_unsmeared]) 243 243 … … 247 247 self._last_unsmeared_bin) 248 248 ## Sanity check 249 if n umpy.size(self.dy) != numpy.size(fx):249 if np.size(self.dy) != np.size(fx): 250 250 msg = "FitData1D: invalid error array " 251 msg += "%d <> %d" % (n umpy.shape(self.dy), numpy.size(fx))251 msg += "%d <> %d" % (np.shape(self.dy), np.size(fx)) 252 252 raise RuntimeError, msg 253 253 return (self.y[self.idx] - fx[self.idx]) / self.dy[self.idx], fx[self.idx] … … 300 300 ## new error image for fitting purpose 301 301 if self.err_data == None or self.err_data == []: 302 self.res_err_data = n umpy.ones(len(self.data))302 self.res_err_data = np.ones(len(self.data)) 303 303 else: 304 304 self.res_err_data = copy.deepcopy(self.err_data) 305 305 #self.res_err_data[self.res_err_data==0]=1 306 306 307 self.radius = n umpy.sqrt(self.qx_data**2 + self.qy_data**2)307 self.radius = 
np.sqrt(self.qx_data**2 + self.qy_data**2) 308 308 309 309 # Note: mask = True: for MASK while mask = False for NOT to mask … … 311 311 (self.radius <= self.qmax)) 312 312 self.idx = (self.idx) & (self.mask) 313 self.idx = (self.idx) & (n umpy.isfinite(self.data))314 self.num_points = n umpy.sum(self.idx)313 self.idx = (self.idx) & (np.isfinite(self.data)) 314 self.num_points = np.sum(self.idx) 315 315 316 316 def set_smearer(self, smearer): … … 334 334 if qmax != None: 335 335 self.qmax = qmax 336 self.radius = n umpy.sqrt(self.qx_data**2 + self.qy_data**2)336 self.radius = np.sqrt(self.qx_data**2 + self.qy_data**2) 337 337 self.idx = ((self.qmin <= self.radius) &\ 338 338 (self.radius <= self.qmax)) 339 339 self.idx = (self.idx) & (self.mask) 340 self.idx = (self.idx) & (n umpy.isfinite(self.data))340 self.idx = (self.idx) & (np.isfinite(self.data)) 341 341 self.idx = (self.idx) & (self.res_err_data != 0) 342 342 … … 351 351 Number of measurement points in data set after masking, etc. 352 352 """ 353 return n umpy.sum(self.idx)353 return np.sum(self.idx) 354 354 355 355 def residuals(self, fn): -
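The fit-range logic in `FitData2D` above chains boolean masks (q-range, user mask, finiteness) and counts the surviving points with a boolean sum. A sketch of the same chain on a toy four-point data set (all values invented for illustration):

```python
import numpy as np

# Illustrative 2D point set
qx = np.array([0.1, 0.5, 1.0, 2.0])
qy = np.array([0.0, 0.5, 0.0, 0.0])
data = np.array([1.0, 2.0, np.nan, 4.0])
mask = np.array([True, True, True, False])
qmin, qmax = 0.05, 1.5

# Same chained boolean logic as set_fit_range() in the diff
radius = np.sqrt(qx ** 2 + qy ** 2)
idx = (qmin <= radius) & (radius <= qmax)
idx = idx & mask
idx = idx & np.isfinite(data)
num_points = np.sum(idx)   # summing booleans counts the points kept
```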
src/sas/sascalc/fit/BumpsFitting.py
r345e7e4 r9a5097c 6 6 import traceback 7 7 8 import numpy 8 import numpy as np 9 9 10 10 from bumps import fitters … … 97 97 try: 98 98 p = history.population_values[0] 99 n,p = len(p), n umpy.sort(p)99 n,p = len(p), np.sort(p) 100 100 QI,Qmid, = int(0.2*n),int(0.5*n) 101 101 self.convergence.append((best, p[0],p[QI],p[Qmid],p[-1-QI],p[-1])) … … 194 194 195 195 def numpoints(self): 196 return n umpy.sum(self.data.idx) # number of fitted points196 return np.sum(self.data.idx) # number of fitted points 197 197 198 198 def nllf(self): 199 return 0.5*n umpy.sum(self.residuals()**2)199 return 0.5*np.sum(self.residuals()**2) 200 200 201 201 def theory(self): … … 295 295 if R.success: 296 296 if result['stderr'] is None: 297 R.stderr = n umpy.NaN*numpy.ones(len(param_list))297 R.stderr = np.NaN*np.ones(len(param_list)) 298 298 else: 299 R.stderr = n umpy.hstack((result['stderr'][fitted_index],300 numpy.NaN*numpy.ones(len(fitness.computed_pars))))301 R.pvec = n umpy.hstack((result['value'][fitted_index],299 R.stderr = np.hstack((result['stderr'][fitted_index], 300 np.NaN*np.ones(len(fitness.computed_pars)))) 301 R.pvec = np.hstack((result['value'][fitted_index], 302 302 [p.value for p in fitness.computed_pars])) 303 R.fitness = n umpy.sum(R.residuals**2)/(fitness.numpoints() - len(fitted_index))303 R.fitness = np.sum(R.residuals**2)/(fitness.numpoints() - len(fitted_index)) 304 304 else: 305 R.stderr = n umpy.NaN*numpy.ones(len(param_list))306 R.pvec = n umpy.asarray( [p.value for p in fitness.fitted_pars+fitness.computed_pars])307 R.fitness = n umpy.NaN305 R.stderr = np.NaN*np.ones(len(param_list)) 306 R.pvec = np.asarray( [p.value for p in fitness.fitted_pars+fitness.computed_pars]) 307 R.fitness = np.NaN 308 308 R.convergence = result['convergence'] 309 309 if result['uncertainty'] is not None: … … 336 336 max_step = steps + options.get('burn', 0) 337 337 pars = [p.name for p in problem._parameters] 338 #x0 = n umpy.asarray([p.value for p in problem._parameters])338 #x0 
= np.asarray([p.value for p in problem._parameters]) 339 339 options['monitors'] = [ 340 340 BumpsMonitor(handler, max_step, pars, problem.dof), … … 351 351 errors = [] 352 352 except Exception as exc: 353 best, fbest = None, n umpy.NaN354 errors = [str(exc), traceback. traceback.format_exc()]353 best, fbest = None, np.NaN 354 errors = [str(exc), traceback.format_exc()] 355 355 finally: 356 356 mapper.stop_mapper(fitdriver.mapper) … … 358 358 359 359 convergence_list = options['monitors'][-1].convergence 360 convergence = (2*n umpy.asarray(convergence_list)/problem.dof361 if convergence_list else n umpy.empty((0,1),'d'))360 convergence = (2*np.asarray(convergence_list)/problem.dof 361 if convergence_list else np.empty((0,1),'d')) 362 362 363 363 success = best is not None -
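The monitor above summarizes each fit step by order statistics of the sorted population of fit values: the minimum, the 20% and 50% points from each end, and the maximum. A sketch of that bookkeeping with a made-up population and best value:

```python
import numpy as np

# A made-up population of goodness-of-fit values for one step
population = np.array([5.0, 1.0, 3.0, 2.0, 4.0, 9.0, 7.0, 8.0, 6.0, 0.5])
best = 0.4  # hypothetical best value seen so far

# Same pattern as the monitor in the diff: sort, then pick the
# 20% / 50% order statistics from both ends
n, p = len(population), np.sort(population)
QI, Qmid = int(0.2 * n), int(0.5 * n)
step_summary = (best, p[0], p[QI], p[Qmid], p[-1 - QI], p[-1])
```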
src/sas/sascalc/fit/Loader.py
rb699768 r9a5097c 2 2 #import wx 3 3 #import string 4 import numpy 4 import numpy as np 5 5 6 6 class Load: … … 52 52 self.y.append(y) 53 53 self.dy.append(dy) 54 self.dx = n umpy.zeros(len(self.x))54 self.dx = np.zeros(len(self.x)) 55 55 except: 56 56 print "READ ERROR", line -
src/sas/sascalc/fit/MultiplicationModel.py
r68669da r9a5097c 1 1 import copy 2 2 3 import numpy 3 import numpy as np 4 4 5 5 from sas.sascalc.calculator.BaseComponent import BaseComponent … … 52 52 ## Parameter details [units, min, max] 53 53 self._set_details() 54 self.details['scale_factor'] = ['', 0.0, n umpy.inf]55 self.details['background'] = ['',-n umpy.inf,numpy.inf]54 self.details['scale_factor'] = ['', 0.0, np.inf] 55 self.details['background'] = ['',-np.inf,np.inf] 56 56 57 57 #list of parameter that can be fitted -
src/sas/sascalc/fit/expression.py
rb699768 r9a5097c 271 271 272 272 def test_deps(): 273 import numpy 273 import numpy as np 274 274 275 275 # Null case … … 279 279 _check("test1",[(2,7),(1,5),(1,4),(2,1),(3,1),(5,6)]) 280 280 _check("test1 renumbered",[(6,1),(7,3),(7,4),(6,7),(5,7),(3,2)]) 281 _check("test1 numpy",n umpy.array([(2,7),(1,5),(1,4),(2,1),(3,1),(5,6)]))281 _check("test1 numpy",np.array([(2,7),(1,5),(1,4),(2,1),(3,1),(5,6)])) 282 282 283 283 # No dependencies … … 291 291 292 292 # large test for gross speed check 293 A = n umpy.random.randint(4000,size=(1000,2))293 A = np.random.randint(4000,size=(1000,2)) 294 294 A[:,1] += 4000 # Avoid cycles 295 295 _check("test-large",A) … … 297 297 # depth tests 298 298 k = 200 299 A = n umpy.array([range(0,k),range(1,k+1)]).T299 A = np.array([range(0,k),range(1,k+1)]).T 300 300 _check("depth-1",A) 301 301 302 A = n umpy.array([range(1,k+1),range(0,k)]).T302 A = np.array([range(1,k+1),range(0,k)]).T 303 303 _check("depth-2",A) 304 304 -
src/sas/sascalc/invariant/invariant.py
rb699768 r9a5097c 17 17 """ 18 18 import math 19 import numpy 19 import numpy as np 20 20 21 21 from sas.sascalc.dataloader.data_info import Data1D as LoaderData1D … … 50 50 dy = data.dy 51 51 else: 52 dy = n umpy.ones(len(data.y))52 dy = np.ones(len(data.y)) 53 53 54 54 # Transform the data … … 63 63 64 64 # Create Data1D object 65 x_out = n umpy.asarray(x_out)66 y_out = n umpy.asarray(y_out)67 dy_out = n umpy.asarray(dy_out)65 x_out = np.asarray(x_out) 66 y_out = np.asarray(y_out) 67 dy_out = np.asarray(dy_out) 68 68 linear_data = LoaderData1D(x=x_out, y=y_out, dy=dy_out) 69 69 … … 158 158 :param x: array of q-values 159 159 """ 160 p1 = n umpy.array([self.dscale * math.exp(-((self.radius * q) ** 2 / 3)) \160 p1 = np.array([self.dscale * math.exp(-((self.radius * q) ** 2 / 3)) \ 161 161 for q in x]) 162 p2 = n umpy.array([self.scale * math.exp(-((self.radius * q) ** 2 / 3))\162 p2 = np.array([self.scale * math.exp(-((self.radius * q) ** 2 / 3))\ 163 163 * (-(q ** 2 / 3)) * 2 * self.radius * self.dradius for q in x]) 164 164 diq2 = p1 * p1 + p2 * p2 165 return n umpy.array([math.sqrt(err) for err in diq2])165 return np.array([math.sqrt(err) for err in diq2]) 166 166 167 167 def _guinier(self, x): … … 182 182 msg = "Rg expected positive value, but got %s" % self.radius 183 183 raise ValueError(msg) 184 value = n umpy.array([math.exp(-((self.radius * i) ** 2 / 3)) for i in x])184 value = np.array([math.exp(-((self.radius * i) ** 2 / 3)) for i in x]) 185 185 return self.scale * value 186 186 … … 232 232 :param x: array of q-values 233 233 """ 234 p1 = n umpy.array([self.dscale * math.pow(q, -self.power) for q in x])235 p2 = n umpy.array([self.scale * self.power * math.pow(q, -self.power - 1)\234 p1 = np.array([self.dscale * math.pow(q, -self.power) for q in x]) 235 p2 = np.array([self.scale * self.power * math.pow(q, -self.power - 1)\ 236 236 * self.dpower for q in x]) 237 237 diq2 = p1 * p1 + p2 * p2 238 return n umpy.array([math.sqrt(err) for err in diq2])238 
return np.array([math.sqrt(err) for err in diq2]) 239 239 240 240 def _power_law(self, x): … … 259 259 raise ValueError(msg) 260 260 261 value = n umpy.array([math.pow(i, -self.power) for i in x])261 value = np.array([math.pow(i, -self.power) for i in x]) 262 262 return self.scale * value 263 263 … … 304 304 idx = (self.data.x >= qmin) & (self.data.x <= qmax) 305 305 306 fx = n umpy.zeros(len(self.data.x))306 fx = np.zeros(len(self.data.x)) 307 307 308 308 # Uncertainty 309 if type(self.data.dy) == n umpy.ndarray and \309 if type(self.data.dy) == np.ndarray and \ 310 310 len(self.data.dy) == len(self.data.x) and \ 311 numpy.all(self.data.dy > 0):311 np.all(self.data.dy > 0): 312 312 sigma = self.data.dy 313 313 else: 314 sigma = n umpy.ones(len(self.data.x))314 sigma = np.ones(len(self.data.x)) 315 315 316 316 # Compute theory data f(x) … … 332 332 sigma2 = linearized_data.dy * linearized_data.dy 333 333 a = -(power) 334 b = (n umpy.sum(linearized_data.y / sigma2) \335 - a * n umpy.sum(linearized_data.x / sigma2)) / numpy.sum(1.0 / sigma2)334 b = (np.sum(linearized_data.y / sigma2) \ 335 - a * np.sum(linearized_data.x / sigma2)) / np.sum(1.0 / sigma2) 336 336 337 337 338 338 deltas = linearized_data.x * a + \ 339 numpy.ones(len(linearized_data.x)) * b - linearized_data.y340 residuals = n umpy.sum(deltas * deltas / sigma2)341 342 err = math.fabs(residuals) / n umpy.sum(1.0 / sigma2)339 np.ones(len(linearized_data.x)) * b - linearized_data.y 340 residuals = np.sum(deltas * deltas / sigma2) 341 342 err = math.fabs(residuals) / np.sum(1.0 / sigma2) 343 343 return [a, b], [0, math.sqrt(err)] 344 344 else: 345 A = n umpy.vstack([linearized_data.x / linearized_data.dy, 1.0 / linearized_data.dy]).T346 (p, residuals, _, _) = n umpy.linalg.lstsq(A, linearized_data.y / linearized_data.dy)345 A = np.vstack([linearized_data.x / linearized_data.dy, 1.0 / linearized_data.dy]).T 346 (p, residuals, _, _) = np.linalg.lstsq(A, linearized_data.y / linearized_data.dy) 347 347 348 348 # 
Get the covariance matrix, defined as inv_cov = a_transposed * a 349 err = n umpy.zeros(2)349 err = np.zeros(2) 350 350 try: 351 inv_cov = n umpy.dot(A.transpose(), A)352 cov = n umpy.linalg.pinv(inv_cov)351 inv_cov = np.dot(A.transpose(), A) 352 cov = np.linalg.pinv(inv_cov) 353 353 err_matrix = math.fabs(residuals) * cov 354 354 err = [math.sqrt(err_matrix[0][0]), math.sqrt(err_matrix[1][1])] … … 434 434 if new_data.dy is None or len(new_data.x) != len(new_data.dy) or \ 435 435 (min(new_data.dy) == 0 and max(new_data.dy) == 0): 436 new_data.dy = n umpy.ones(len(new_data.x))436 new_data.dy = np.ones(len(new_data.x)) 437 437 return new_data 438 438 … … 571 571 """ 572 572 #create new Data1D to compute the invariant 573 q = n umpy.linspace(start=q_start,573 q = np.linspace(start=q_start, 574 574 stop=q_end, 575 575 num=npts, … … 580 580 result_data = LoaderData1D(x=q, y=iq, dy=diq) 581 581 if self._smeared != None: 582 result_data.dxl = self._smeared * n umpy.ones(len(q))582 result_data.dxl = self._smeared * np.ones(len(q)) 583 583 return result_data 584 584 … … 691 691 692 692 if q_start >= q_end: 693 return n umpy.zeros(0), numpy.zeros(0)693 return np.zeros(0), np.zeros(0) 694 694 695 695 return self._get_extrapolated_data(\ … … 719 719 720 720 if q_start >= q_end: 721 return n umpy.zeros(0), numpy.zeros(0)721 return np.zeros(0), np.zeros(0) 722 722 723 723 return self._get_extrapolated_data(\ -
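The `else` branch of the linearized fit above solves a weighted least-squares problem (rows scaled by 1/dy) and derives parameter uncertainties from the pseudo-inverse of the normal matrix. A sketch of the same construction on noise-free synthetic data; note the `rcond=None` keyword is the modern numpy signature, which the Python 2 era code omits:

```python
import numpy as np

# Synthetic noise-free data y = 2x + 1 with unit uncertainties
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
dy = np.ones_like(y)

# Same weighted design matrix as the diff: each row is [x_i/dy_i, 1/dy_i]
A = np.vstack([x / dy, 1.0 / dy]).T
p, residuals, _, _ = np.linalg.lstsq(A, y / dy, rcond=None)
slope, intercept = p

# Covariance from the normal matrix, as in the diff: inv_cov = A^T A
inv_cov = np.dot(A.T, A)
cov = np.linalg.pinv(inv_cov)
```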
src/sas/sascalc/pr/fit/AbstractFitEngine.py
rfc18690 r9a5097c 4 4 import sys 5 5 import math 6 import numpy 6 import numpy as np 7 7 8 8 from sas.sascalc.dataloader.data_info import Data1D … … 162 162 # constant, or dy data 163 163 if dy is None or dy == [] or dy.all() == 0: 164 self.dy = n umpy.ones(len(y))164 self.dy = np.ones(len(y)) 165 165 else: 166 self.dy = n umpy.asarray(dy).copy()166 self.dy = np.asarray(dy).copy() 167 167 168 168 ## Min Q-value 169 169 #Skip the Q=0 point, especially when y(q=0)=None at x[0]. 170 170 if min(self.x) == 0.0 and self.x[0] == 0 and\ 171 not n umpy.isfinite(self.y[0]):171 not np.isfinite(self.y[0]): 172 172 self.qmin = min(self.x[self.x != 0]) 173 173 else: … … 188 188 # Skip Q=0 point, (especially for y(q=0)=None at x[0]). 189 189 # ToDo: Find better way to do it. 190 if qmin == 0.0 and not n umpy.isfinite(self.y[qmin]):190 if qmin == 0.0 and not np.isfinite(self.y[qmin]): 191 191 self.qmin = min(self.x[self.x != 0]) 192 192 elif qmin != None: … … 239 239 """ 240 240 # Compute theory data f(x) 241 fx = n umpy.zeros(len(self.x))241 fx = np.zeros(len(self.x)) 242 242 fx[self.idx_unsmeared] = fn(self.x[self.idx_unsmeared]) 243 243 … … 247 247 self._last_unsmeared_bin) 248 248 ## Sanity check 249 if n umpy.size(self.dy) != numpy.size(fx):249 if np.size(self.dy) != np.size(fx): 250 250 msg = "FitData1D: invalid error array " 251 msg += "%d <> %d" % (n umpy.shape(self.dy), numpy.size(fx))251 msg += "%d <> %d" % (np.shape(self.dy), np.size(fx)) 252 252 raise RuntimeError, msg 253 253 return (self.y[self.idx] - fx[self.idx]) / self.dy[self.idx], fx[self.idx] … … 300 300 ## new error image for fitting purpose 301 301 if self.err_data == None or self.err_data == []: 302 self.res_err_data = n umpy.ones(len(self.data))302 self.res_err_data = np.ones(len(self.data)) 303 303 else: 304 304 self.res_err_data = copy.deepcopy(self.err_data) 305 305 #self.res_err_data[self.res_err_data==0]=1 306 306 307 self.radius = n umpy.sqrt(self.qx_data**2 + self.qy_data**2)307 self.radius = 
np.sqrt(self.qx_data**2 + self.qy_data**2) 308 308 309 309 # Note: mask = True: for MASK while mask = False for NOT to mask … … 311 311 (self.radius <= self.qmax)) 312 312 self.idx = (self.idx) & (self.mask) 313 self.idx = (self.idx) & (n umpy.isfinite(self.data))314 self.num_points = n umpy.sum(self.idx)313 self.idx = (self.idx) & (np.isfinite(self.data)) 314 self.num_points = np.sum(self.idx) 315 315 316 316 def set_smearer(self, smearer): … … 334 334 if qmax != None: 335 335 self.qmax = qmax 336 self.radius = n umpy.sqrt(self.qx_data**2 + self.qy_data**2)336 self.radius = np.sqrt(self.qx_data**2 + self.qy_data**2) 337 337 self.idx = ((self.qmin <= self.radius) &\ 338 338 (self.radius <= self.qmax)) 339 339 self.idx = (self.idx) & (self.mask) 340 self.idx = (self.idx) & (n umpy.isfinite(self.data))340 self.idx = (self.idx) & (np.isfinite(self.data)) 341 341 self.idx = (self.idx) & (self.res_err_data != 0) 342 342 … … 351 351 Number of measurement points in data set after masking, etc. 352 352 """ 353 return n umpy.sum(self.idx)353 return np.sum(self.idx) 354 354 355 355 def residuals(self, fn): -
src/sas/sascalc/pr/fit/BumpsFitting.py
rb699768 r9a5097c 5 5 from datetime import timedelta, datetime 6 6 7 import numpy 7 import numpy as np 8 8 9 9 from bumps import fitters … … 96 96 try: 97 97 p = history.population_values[0] 98 n,p = len(p), n umpy.sort(p)98 n,p = len(p), np.sort(p) 99 99 QI,Qmid, = int(0.2*n),int(0.5*n) 100 100 self.convergence.append((best, p[0],p[QI],p[Qmid],p[-1-QI],p[-1])) … … 193 193 194 194 def numpoints(self): 195 return n umpy.sum(self.data.idx) # number of fitted points195 return np.sum(self.data.idx) # number of fitted points 196 196 197 197 def nllf(self): 198 return 0.5*n umpy.sum(self.residuals()**2)198 return 0.5*np.sum(self.residuals()**2) 199 199 200 200 def theory(self): … … 293 293 R.success = result['success'] 294 294 if R.success: 295 R.stderr = n umpy.hstack((result['stderr'][fitted_index],296 numpy.NaN*numpy.ones(len(fitness.computed_pars))))297 R.pvec = n umpy.hstack((result['value'][fitted_index],295 R.stderr = np.hstack((result['stderr'][fitted_index], 296 np.NaN*np.ones(len(fitness.computed_pars)))) 297 R.pvec = np.hstack((result['value'][fitted_index], 298 298 [p.value for p in fitness.computed_pars])) 299 R.fitness = n umpy.sum(R.residuals**2)/(fitness.numpoints() - len(fitted_index))299 R.fitness = np.sum(R.residuals**2)/(fitness.numpoints() - len(fitted_index)) 300 300 else: 301 R.stderr = n umpy.NaN*numpy.ones(len(param_list))302 R.pvec = n umpy.asarray( [p.value for p in fitness.fitted_pars+fitness.computed_pars])303 R.fitness = n umpy.NaN301 R.stderr = np.NaN*np.ones(len(param_list)) 302 R.pvec = np.asarray( [p.value for p in fitness.fitted_pars+fitness.computed_pars]) 303 R.fitness = np.NaN 304 304 R.convergence = result['convergence'] 305 305 if result['uncertainty'] is not None: … … 331 331 max_step = steps + options.get('burn', 0) 332 332 pars = [p.name for p in problem._parameters] 333 #x0 = n umpy.asarray([p.value for p in problem._parameters])333 #x0 = np.asarray([p.value for p in problem._parameters]) 334 334 options['monitors'] = [ 335 335 
BumpsMonitor(handler, max_step, pars, problem.dof), … … 352 352 353 353 convergence_list = options['monitors'][-1].convergence 354 convergence = (2*n umpy.asarray(convergence_list)/problem.dof355 if convergence_list else n umpy.empty((0,1),'d'))354 convergence = (2*np.asarray(convergence_list)/problem.dof 355 if convergence_list else np.empty((0,1),'d')) 356 356 357 357 success = best is not None -
src/sas/sascalc/pr/fit/Loader.py
rb699768 r9a5097c 2 2 #import wx 3 3 #import string 4 import numpy 4 import numpy as np 5 5 6 6 class Load: … … 52 52 self.y.append(y) 53 53 self.dy.append(dy) 54 self.dx = n umpy.zeros(len(self.x))54 self.dx = np.zeros(len(self.x)) 55 55 except: 56 56 print "READ ERROR", line -
src/sas/sascalc/pr/fit/expression.py
rb699768 r9a5097c 271 271 272 272 def test_deps(): 273 import numpy 273 import numpy as np 274 274 275 275 # Null case … … 279 279 _check("test1",[(2,7),(1,5),(1,4),(2,1),(3,1),(5,6)]) 280 280 _check("test1 renumbered",[(6,1),(7,3),(7,4),(6,7),(5,7),(3,2)]) 281 _check("test1 numpy",n umpy.array([(2,7),(1,5),(1,4),(2,1),(3,1),(5,6)]))281 _check("test1 numpy",np.array([(2,7),(1,5),(1,4),(2,1),(3,1),(5,6)])) 282 282 283 283 # No dependencies … … 291 291 292 292 # large test for gross speed check 293 A = n umpy.random.randint(4000,size=(1000,2))293 A = np.random.randint(4000,size=(1000,2)) 294 294 A[:,1] += 4000 # Avoid cycles 295 295 _check("test-large",A) … … 297 297 # depth tests 298 298 k = 200 299 A = n umpy.array([range(0,k),range(1,k+1)]).T299 A = np.array([range(0,k),range(1,k+1)]).T 300 300 _check("depth-1",A) 301 301 302 A = n umpy.array([range(1,k+1),range(0,k)]).T302 A = np.array([range(1,k+1),range(0,k)]).T 303 303 _check("depth-2",A) 304 304 -
src/sas/sascalc/pr/invertor.py
r2c60f304 r9a5097c 7 7 """ 8 8 9 import numpy 9 import numpy as np 10 10 import sys 11 11 import math … … 189 189 #import numpy 190 190 if name == 'x': 191 out = n umpy.ones(self.get_nx())191 out = np.ones(self.get_nx()) 192 192 self.get_x(out) 193 193 return out 194 194 elif name == 'y': 195 out = n umpy.ones(self.get_ny())195 out = np.ones(self.get_ny()) 196 196 self.get_y(out) 197 197 return out 198 198 elif name == 'err': 199 out = n umpy.ones(self.get_nerr())199 out = np.ones(self.get_nerr()) 200 200 self.get_err(out) 201 201 return out … … 325 325 raise RuntimeError, msg 326 326 327 p = n umpy.ones(nfunc)327 p = np.ones(nfunc) 328 328 t_0 = time.time() 329 329 out, cov_x, _, _, _ = optimize.leastsq(self.residuals, p, full_output=1) … … 341 341 342 342 if cov_x is None: 343 cov_x = n umpy.ones([nfunc, nfunc])343 cov_x = np.ones([nfunc, nfunc]) 344 344 cov_x *= math.fabs(chisqr) 345 345 return out, cov_x … … 358 358 raise RuntimeError, msg 359 359 360 p = n umpy.ones(nfunc)360 p = np.ones(nfunc) 361 361 t_0 = time.time() 362 362 out, cov_x, _, _, _ = optimize.leastsq(self.pr_residuals, p, full_output=1) … … 435 435 """ 436 436 # Note: To make sure an array is contiguous: 437 # blah = n umpy.ascontiguousarray(blah_original)437 # blah = np.ascontiguousarray(blah_original) 438 438 # ... 
before passing it to C 439 439 … … 456 456 nfunc += 1 457 457 458 a = n umpy.zeros([npts + nq, nfunc])459 b = n umpy.zeros(npts + nq)460 err = n umpy.zeros([nfunc, nfunc])458 a = np.zeros([npts + nq, nfunc]) 459 b = np.zeros(npts + nq) 460 err = np.zeros([nfunc, nfunc]) 461 461 462 462 # Construct the a matrix and b vector that represent the problem … … 476 476 self.chi2 = chi2 477 477 478 inv_cov = n umpy.zeros([nfunc, nfunc])478 inv_cov = np.zeros([nfunc, nfunc]) 479 479 # Get the covariance matrix, defined as inv_cov = a_transposed * a 480 480 self._get_invcov_matrix(nfunc, nr, a, inv_cov) … … 490 490 491 491 try: 492 cov = n umpy.linalg.pinv(inv_cov)492 cov = np.linalg.pinv(inv_cov) 493 493 err = math.fabs(chi2 / float(npts - nfunc)) * cov 494 494 except: … … 505 505 self.background = c[0] 506 506 507 err_0 = n umpy.zeros([nfunc, nfunc])508 c_0 = n umpy.zeros(nfunc)507 err_0 = np.zeros([nfunc, nfunc]) 508 c_0 = np.zeros(nfunc) 509 509 510 510 for i in range(nfunc_0): … … 662 662 str(self.cov[i][i]))) 663 663 file.write("<r> <Pr> <dPr>\n") 664 r = n umpy.arange(0.0, self.d_max, self.d_max / npts)664 r = np.arange(0.0, self.d_max, self.d_max / npts) 665 665 666 666 for r_i in r: … … 694 694 toks = line.split('=') 695 695 self.nfunc = int(toks[1]) 696 self.out = n umpy.zeros(self.nfunc)697 self.cov = n umpy.zeros([self.nfunc, self.nfunc])696 self.out = np.zeros(self.nfunc) 697 self.cov = np.zeros([self.nfunc, self.nfunc]) 698 698 elif line.startswith('#alpha='): 699 699 toks = line.split('=') -
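The comment above notes that arrays must be made contiguous (`blah = np.ascontiguousarray(blah_original)`) before being handed to the C extension. A sketch of why a strided view fails that requirement and how the copy fixes it (array values are arbitrary):

```python
import numpy as np

# A strided view of a buffer is not C-contiguous, so C code expecting
# a flat buffer cannot consume it directly.
base = np.arange(10, dtype=np.float64)
view = base[::2]                        # every other element: strided view
contig = np.ascontiguousarray(view)     # contiguous copy, same values
```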
src/sas/sascalc/pr/num_term.py
rb699768 r9a5097c 1 1 import math 2 import numpy 2 import numpy as np 3 3 import copy 4 4 import sys … … 152 152 def load(path): 153 153 # Read the data from the data file 154 data_x = n umpy.zeros(0)155 data_y = n umpy.zeros(0)156 data_err = n umpy.zeros(0)154 data_x = np.zeros(0) 155 data_y = np.zeros(0) 156 data_err = np.zeros(0) 157 157 scale = None 158 158 min_err = 0.0 … … 176 176 #err = 0 177 177 178 data_x = n umpy.append(data_x, test_x)179 data_y = n umpy.append(data_y, test_y)180 data_err = n umpy.append(data_err, err)178 data_x = np.append(data_x, test_x) 179 data_y = np.append(data_y, test_y) 180 data_err = np.append(data_err, err) 181 181 except: 182 182 logging.error(sys.exc_value) -
src/sas/sasgui/guiframe/aboutbox.py
r49e000b r1779e72 118 118 self.bitmap_button_ansto = wx.BitmapButton(self, -1, wx.NullBitmap) 119 119 self.bitmap_button_tudelft = wx.BitmapButton(self, -1, wx.NullBitmap) 120 self.bitmap_button_dls = wx.BitmapButton(self, -1, wx.NullBitmap) 120 121 121 122 self.static_line_3 = wx.StaticLine(self, -1) … … 137 138 self.Bind(wx.EVT_BUTTON, self.onAnstoLogo, self.bitmap_button_ansto) 138 139 self.Bind(wx.EVT_BUTTON, self.onTudelftLogo, self.bitmap_button_tudelft) 140 self.Bind(wx.EVT_BUTTON, self.onDlsLogo, self.bitmap_button_dls) 139 141 # end wxGlade 140 142 # fill in acknowledgements … … 229 231 logo = wx.Bitmap(image) 230 232 self.bitmap_button_tudelft.SetBitmapLabel(logo) 233 234 image = file_dir + "/images/dls_logo.png" 235 if os.path.isfile(config._dls_logo): 236 image = config._dls_logo 237 logo = wx.Bitmap(image) 238 self.bitmap_button_dls.SetBitmapLabel(logo) 231 239 232 240 # resize dialog window to fit version number nicely … … 260 268 self.bitmap_button_ansto.SetSize(self.bitmap_button_ansto.GetBestSize()) 261 269 self.bitmap_button_tudelft.SetSize(self.bitmap_button_tudelft.GetBestSize()) 270 self.bitmap_button_dls.SetSize(self.bitmap_button_dls.GetBestSize()) 262 271 # end wxGlade 263 272 … … 325 334 sizer_logos.Add(self.bitmap_button_tudelft, 0, 326 335 wx.LEFT|wx.ADJUST_MINSIZE, 2) 336 sizer_logos.Add(self.bitmap_button_dls, 0, 337 wx.LEFT|wx.ADJUST_MINSIZE, 2) 327 338 328 339 sizer_logos.Add((10, 50), 0, wx.ADJUST_MINSIZE, 0) … … 423 434 event.Skip() 424 435 436 def onDlsLogo(self, event): 437 """ 438 """ 439 # wxGlade: DialogAbout.<event_handler> 440 launchBrowser(config._dls_url) 441 event.Skip() 442 425 443 # end of class DialogAbout 426 444 -
src/sas/sasgui/guiframe/acknowledgebox.py

rc1fdf84 r74c8cd0 — switches the read-only acknowledgement text to an auto-sizing ExpandoTextCtrl, adds a separate citation box fed from config._acknowledgement_citation, drops the literal tab prefixes from the numbered bullets, and interleaves the two text boxes with the bullet list:

+from wx.lib.expando import ExpandoTextCtrl
 …
-self.ack = wx.TextCtrl(self, style=wx.TE_LEFT|wx.TE_MULTILINE|wx.TE_BESTWRAP|wx.TE_READONLY|wx.TE_NO_VSCROLL)
+self.ack = ExpandoTextCtrl(self, style=wx.TE_LEFT|wx.TE_MULTILINE|wx.TE_BESTWRAP|wx.TE_READONLY|wx.TE_NO_VSCROLL)
 self.ack.SetValue(config._acknowledgement_publications)
-self.ack.SetMinSize((-1, 55))
+#self.ack.SetMinSize((-1, 55))
+self.citation = ExpandoTextCtrl(self, style=wx.TE_LEFT|wx.TE_MULTILINE|wx.TE_BESTWRAP|wx.TE_READONLY|wx.TE_NO_VSCROLL)
+self.citation.SetValue(config._acknowledgement_citation)
 …
-self.list1 = wx.StaticText(self, -1, " \t(1) " + items[0])
-self.list2 = wx.StaticText(self, -1, " \t(2) " + items[1])
-self.list3 = wx.StaticText(self, -1, " \t(3) " + items[2])
-self.list4 = wx.StaticText(self, -1, " \t(4) " + items[3])
+self.list1 = wx.StaticText(self, -1, "(1) " + items[0])
+self.list2 = wx.StaticText(self, -1, "(2) " + items[1])
+self.list3 = wx.StaticText(self, -1, "(3) " + items[2])
+self.list4 = wx.StaticText(self, -1, "(4) " + items[3])
 …
 #Increased size of box from (525, 225), SMK, 04/10/16
-self.SetSize((600, 300))
+self.SetClientSize((600, 320))
 …
 sizer_titles.Add(self.list1, 0, wx.ALL|wx.EXPAND, 5)
+sizer_titles.Add(self.ack, 0, wx.ALL|wx.EXPAND, 5)
 sizer_titles.Add(self.list2, 0, wx.ALL|wx.EXPAND, 5)
+sizer_titles.Add(self.citation, 0, wx.ALL|wx.EXPAND, 5)
 sizer_titles.Add(self.list3, 0, wx.ALL|wx.EXPAND, 5)
+#sizer_titles.Add(self.static_line, 0, wx.ALL|wx.EXPAND, 0)
 sizer_titles.Add(self.list4, 0, wx.ALL|wx.EXPAND, 5)
-sizer_titles.Add(self.static_line, 0, wx.ALL|wx.EXPAND, 0)
-sizer_titles.Add(self.ack, 0, wx.ALL|wx.EXPAND, 5)
 sizer_main.Add(sizer_titles, -1, wx.ALL|wx.EXPAND, 5)
src/sas/sasgui/guiframe/config.py

rd85c194 r1779e72 — promotes the template configuration to the real SasView configuration: the application name, version, and build now come from sas.sasview instead of "DummyView"/0.0.0, debug events are off by default, the acknowledgement text gains a citation and updated funding credits, all logo paths are built with os.path.join from an icon_path resolved at import time, ORNL/ANSTO/TU Delft/DLS entries are added, and the GUI defaults (window size, plugin extensions, splash screen, menus) are set for the full application. Key hunks:

-__appname__ = "DummyView"
-__version__ = '0.0.0'
-__build__ = '1'
+__appname__ = "SasView"
+__version__ = sas.sasview.__version__
+__build__ = sas.sasview.__build__
 …
-__EVT_DEBUG__ = True
+__EVT_DEBUG__ = False
 …
+_acknowledgement_citation = \
+'''M. Doucet et al. SasView Version 4.1, Zenodo, 10.5281/zenodo.438138'''
 …
+icon_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "images"))
+logging.info("icon path: %s" % icon_path)
+media_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "media"))
+test_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "test"))
+
+_nist_logo = os.path.join(icon_path, "nist_logo.png")
+_umd_logo = os.path.join(icon_path, "umd_logo.png")
 (…and likewise for the sns, ornl, isis, ess, ill, ansto, tudelft, nsf, danse, and institution logos, with matching _ornl_url, _ansto_url, _tudelft_url, and _dls_url entries)
 …
-_copyright = "(c) 2008, University of Tennessee"
+_copyright = "(c) 2009 - 2017, UTK, UMD, NIST, ORNL, ISIS, ESS, ILL, ANSTO, TU Delft, and DLS"
+marketplace_url = "http://marketplace.sasview.org/"
 …
+APPLICATION_WLIST = 'SasView files (*.svs)|*.svs'
+APPLICATION_STATE_EXTENSION = '.svs'
+GUIFRAME_WIDTH = 1150
+GUIFRAME_HEIGHT = 840
+PLUGIN_STATE_EXTENSIONS = ['.fitv', '.inv', '.prv', '.crf']
+PLUGINS_WLIST = ['Fitting files (*.fitv)|*.fitv',
+                 'Invariant files (*.inv)|*.inv',
+                 'P(r) files (*.prv)|*.prv',
+                 'Corfunc files (*.crf)|*.crf']
 …
+SPLASH_SCREEN_PATH = os.path.join(icon_path, "SVwelcome_mini.png")
+TUTORIAL_PATH = os.path.join(media_path, "Tutorial.pdf")
+DEFAULT_STYLE = GUIFRAME.MULTIPLE_APPLICATIONS|GUIFRAME.MANAGER_ON\
+                    |GUIFRAME.CALCULATOR_ON|GUIFRAME.TOOLBAR_ON
 …
+# Time out for updating sasview
+UPDATE_TIMEOUT = 2
+
+#OpenCL option
+SAS_OPENCL = None
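The icon_path construction above — resolve resource files relative to the module rather than the working directory — generalizes to a small helper. This is a sketch, not SasView code; the helper name and the module location stand in for `__file__`:

```python
import os

def resource_path(module_file, *parts):
    """Mirror of the config.py idiom:
    os.path.abspath(os.path.join(os.path.dirname(__file__), "images"))."""
    return os.path.abspath(os.path.join(os.path.dirname(module_file), *parts))

# Hypothetical module location standing in for __file__:
icon_path = resource_path(os.path.join(os.sep, "opt", "sasview", "config.py"), "images")
nist_logo = os.path.join(icon_path, "nist_logo.png")
```

Building paths this way keeps the logos loadable no matter where the application is launched from.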
src/sas/sasgui/guiframe/dataFitting.py

r68adf86 r9a5097c — the same numpy-alias cleanup as the other modules: `import numpy` becomes `import numpy as np`, and every `numpy.zeros`, `numpy.append`, `numpy.argsort`, `numpy.size`, and `numpy.ones` call in the Data1D/Data2D arithmetic helpers (`__add__`, `__sub__`, `__or__`, and friends) is rewritten with the `np.` prefix. Representative hunk from the Data1D merge logic:

-        result.x = numpy.append(self.x, other.x)
+        result.x = np.append(self.x, other.x)
         #argsorting
-        ind = numpy.argsort(result.x)
+        ind = np.argsort(result.x)
         result.x = result.x[ind]
-        result.y = numpy.append(self.y, other.y)
+        result.y = np.append(self.y, other.y)
         result.y = result.y[ind]
-        result.lam = numpy.append(self.lam, other.lam)
+        result.lam = np.append(self.lam, other.lam)
         result.lam = result.lam[ind]
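The merge logic in those operators — concatenate two datasets, then reorder every column by the sorted x values — reduces to this sketch (the `merge_sorted` helper and its inputs are illustrative):

```python
import numpy as np

def merge_sorted(x1, y1, x2, y2):
    """Concatenate two (x, y) datasets, then reorder every column by x —
    the np.append / np.argsort pattern used in the Data1D operators."""
    x = np.append(x1, x2)
    ind = np.argsort(x)      # permutation that sorts the combined x
    y = np.append(y1, y2)
    return x[ind], y[ind]    # apply the same permutation to each column

mx, my = merge_sorted([0.1, 0.3], [1.0, 3.0], [0.2, 0.4], [2.0, 4.0])
```

Applying the single `argsort` permutation to x, y, dy, dx, and the resolution columns keeps all of them aligned after the merge.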
src/sas/sasgui/guiframe/data_processor.py

r468c253 r9a5097c — numpy-alias cleanup in the grid/batch panel:

         # When inputs are from an external file
         return inputs, outputs
-    inds = numpy.lexsort((to_be_sort, to_be_sort))
+    inds = np.lexsort((to_be_sort, to_be_sort))
     for key in outputs.keys():
         key_list = outputs[key]
 …
     if dy == None:
-        dy = numpy.zeros(len(y))
+        dy = np.zeros(len(y))
     #plotting
     new_plot = Data1D(x=x, y=y, dy=dy)
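The `np.lexsort` call in that hunk is worth a note: `lexsort` sorts by its *last* key first, and passing the same column twice, as data_processor.py does, is equivalent to a plain stable argsort of that column. A minimal sketch (values are illustrative):

```python
import numpy as np

# np.lexsort sorts by the last key first; passing the same column twice,
# as data_processor.py does, reduces to a stable argsort of that column.
to_be_sort = np.array([3.0, 1.0, 2.0])
inds = np.lexsort((to_be_sort, to_be_sort))
sorted_col = to_be_sort[inds]
```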
src/sas/sasgui/guiframe/local_perspectives/plotting/Plotter1D.py

r29e872e r9a5097c — numpy-alias cleanup:

 import sys
 import math
-import numpy
+import numpy as np
 import logging
 from sas.sasgui.plottools.PlotPanel import PlotPanel
 …
     :Param value: float
     """
-    idx = (numpy.abs(array - value)).argmin()
+    idx = (np.abs(array - value)).argmin()
     return int(idx) # array.flat[idx]
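The one-liner in that helper is the standard nearest-neighbour lookup: subtract, take absolute differences, and `argmin` gives the index of the closest element. A standalone sketch (array values are illustrative):

```python
import numpy as np

def nearest_index(array, value):
    """Index of the element closest to value, as the Plotter1D helper does."""
    return int((np.abs(array - value)).argmin())

qs = np.array([0.01, 0.05, 0.1, 0.5])
idx = nearest_index(qs, 0.09)   # 0.1 is the closest q value
```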
src/sas/sasgui/guiframe/local_perspectives/plotting/Plotter2D.py

rb2b36932 r9a5097c — numpy-alias cleanup:

 import sys
 import math
-import numpy
+import numpy as np
 import logging
 from sas.sasgui.plottools.PlotPanel import PlotPanel
 …
     """
     # Find the best number of bins
-    npt = math.sqrt(len(self.data2D.data[numpy.isfinite(self.data2D.data)]))
+    npt = math.sqrt(len(self.data2D.data[np.isfinite(self.data2D.data)]))
     npt = math.floor(npt)
     from sas.sascalc.dataloader.manipulations import CircularAverage
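That bin-count heuristic — take the square root of the number of finite data points — can be tried in isolation (the data array here is illustrative):

```python
import math
import numpy as np

# Pick the number of annular bins from the count of finite (non-NaN,
# non-inf) points, as Plotter2D does before circular averaging.
data = np.array([1.0, 2.0, np.nan, 4.0, np.inf, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0])
finite = data[np.isfinite(data)]            # drops the NaN and the inf
npt = int(math.floor(math.sqrt(len(finite))))
```

Masking with `np.isfinite` first matters: NaN or inf pixels would otherwise inflate the point count and skew the binning.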
src/sas/sasgui/guiframe/local_perspectives/plotting/boxSlicer.py

rd85c194 r9a5097c — numpy-alias cleanup:

 import wx
 import math
-import numpy
+import numpy as np
 from sas.sasgui.guiframe.events import NewPlotEvent
 from sas.sasgui.guiframe.events import StatusEvent
 …
     ## Reset x, y- coordinates if sent as parameters
     if x != None:
-        self.x = numpy.sign(self.x) * math.fabs(x)
+        self.x = np.sign(self.x) * math.fabs(x)
     if y != None:
-        self.y = numpy.sign(self.y) * math.fabs(y)
+        self.y = np.sign(self.y) * math.fabs(y)
     ## Draw lines and markers
     self.inner_marker.set(xdata=[0], ydata=[self.y])

(the same two-line substitution appears again in the second slicer class, followed by `self.inner_marker.set(xdata=[self.x], ydata=[0])`)
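The `np.sign(x) * math.fabs(x)` idiom in those hunks updates a slicer coordinate's magnitude while preserving which side of the axis it sits on. A minimal sketch (function and values are illustrative):

```python
import math
import numpy as np

def keep_sign(current, new_value):
    """Take the magnitude of new_value but keep current's sign —
    the np.sign(x) * math.fabs(x) idiom used when the slicer is moved."""
    return np.sign(current) * math.fabs(new_value)

a = keep_sign(-0.2, 0.5)   # a marker on the negative axis stays negative
b = keep_sign(0.2, -0.5)   # magnitude is taken regardless of input sign
```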
src/sas/sasgui/guiframe/local_perspectives/plotting/masking.py

rd85c194 r9a5097c — numpy-alias cleanup:

 import math
 import copy
-import numpy
+import numpy as np
 from sas.sasgui.plottools.PlotPanel import PlotPanel
 from sas.sasgui.plottools.plottables import Graph
 …
     self.subplot.set_ylim(self.data.ymin, self.data.ymax)
     self.subplot.set_xlim(self.data.xmin, self.data.xmax)
-    mask = numpy.ones(len(self.data.mask), dtype=bool)
+    mask = np.ones(len(self.data.mask), dtype=bool)
     self.data.mask = mask
     # update mask plot
 …
     self.mask = mask
     # make temperary data to plot
-    temp_mask = numpy.zeros(len(mask))
+    temp_mask = np.zeros(len(mask))
     temp_data = copy.deepcopy(self.data)
     # temp_data default is None
src/sas/sasgui/perspectives/calculator/data_operator.py

r61780e3 r9a5097c — numpy-alias cleanup:

 import sys
 import time
-import numpy
+import numpy as np
 from sas.sascalc.dataloader.data_info import Data1D
 from sas.sasgui.plottools.PlotPanel import PlotPanel
 …
         theory, _ = theory_list.values()[0]
         dnames.append(theory.name)
-    ind = numpy.argsort(dnames)
+    ind = np.argsort(dnames)
     if len(ind) > 0:
-        val_list = numpy.array(self._data.values())[ind]
+        val_list = np.array(self._data.values())[ind]
         for datastate in val_list:
             data = datastate.data
src/sas/sasgui/perspectives/calculator/gen_scatter_panel.py

r0f7c930 r9a5097c — numpy-alias cleanup throughout the generic scattering calculator: `import numpy` becomes `import numpy as np`, and the `numpy.` prefix is replaced with `np.` in the SLD plotting code (`fabs`, `any`, `ones`), the magnetic-arrow construction (`column_stack`), the averaging loop (`empty`, `append`, `zeros`), `nan_to_num`, the default 1-D and 2-D data builders (`linspace`, `tile`, `sqrt`, `ones`, `zeros`), the SLD statistics (`min`, `max`, `mean`), and the input validation (`isfinite`). Representative hunk from `_create_default_2d_data()`:

-        x = numpy.linspace(start=xmin, stop=xmax, num=qstep, endpoint=True)
-        y = numpy.linspace(start=ymin, stop=ymax, num=qstep, endpoint=True)
+        x = np.linspace(start=xmin, stop=xmax, num=qstep, endpoint=True)
+        y = np.linspace(start=ymin, stop=ymax, num=qstep, endpoint=True)
         ## use data info instead
-        new_x = numpy.tile(x, (len(y), 1))
-        new_y = numpy.tile(y, (len(x), 1))
+        new_x = np.tile(x, (len(y), 1))
+        new_y = np.tile(y, (len(x), 1))
         new_y = new_y.swapaxes(0, 1)
         # all data reuire now in 1d array
         qx_data = new_x.flatten()
         qy_data = new_y.flatten()
-        q_data = numpy.sqrt(qx_data * qx_data + qy_data * qy_data)
+        q_data = np.sqrt(qx_data * qx_data + qy_data * qy_data)
         # set all True (standing for unmasked) as default
-        mask = numpy.ones(len(qx_data), dtype=bool)
+        mask = np.ones(len(qx_data), dtype=bool)
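The 2-D grid construction in that hunk — expand two 1-D q axes into a full grid with `np.tile`, flatten, and compute per-point |q| plus an all-True mask — runs standalone as follows (the small `qstep` and q range are illustrative):

```python
import numpy as np

# Build the flattened 2-D q grid the way _create_default_2d_data() does:
# 1-D axes -> np.tile into 2-D -> flatten -> per-point |q| and a True mask.
qstep = 3
x = np.linspace(start=-0.3, stop=0.3, num=qstep, endpoint=True)
y = np.linspace(start=-0.3, stop=0.3, num=qstep, endpoint=True)
new_x = np.tile(x, (len(y), 1))                # each row repeats x
new_y = np.tile(y, (len(x), 1)).swapaxes(0, 1) # each column repeats y
qx_data = new_x.flatten()
qy_data = new_y.flatten()
q_data = np.sqrt(qx_data * qx_data + qy_data * qy_data)
mask = np.ones(len(qx_data), dtype=bool)       # all points unmasked by default
```

The tile/swapaxes pair is equivalent to `np.meshgrid(x, y)`; the code keeps everything as flat 1-D arrays because the downstream calculators expect that layout.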
src/sas/sasgui/perspectives/fitting/basepage.py

r7a5aedd red2276f — besides the numpy-alias cleanup (`import numpy` → `import numpy as np`, with `np.` substituted in the q-range setup, the linspace/logspace default-data builders, the mask and index computations, and the min/max validation), this file carries two substantive changes. First, pinhole smearing is now stored as a single percentage of the first q value instead of a dx_min/dx_max pair, and old saved states are converted on restore:

         # pinhole smear
-        self.dx_min = None
-        self.dx_max = None
+        self.dx_percent = None
 …
-        self.state.dx_max = copy.deepcopy(self.dx_max)
-        self.state.dx_min = copy.deepcopy(self.dx_min)
+        self.state.dx_percent = copy.deepcopy(self.dx_percent)
         self.state.dxl = copy.deepcopy(self.dxl)
         self.state.dxw = copy.deepcopy(self.dxw)
 …
         if self.pinhole_smearer.GetValue():
-            self.dx_min = state.dx_min
-            self.dx_max = state.dx_max
-            if self.dx_min is not None:
-                self.smear_pinhole_min.SetValue(str(self.dx_min))
-            if self.dx_max is not None:
-                self.smear_pinhole_max.SetValue(str(self.dx_max))
+            self.dx_percent = state.dx_percent
+            if self.dx_percent is not None:
+                if state.dx_old:
+                    self.dx_percent = 100 * (self.dx_percent / self.data.x[0])
+                self.smear_pinhole_percent.SetValue("%.2f" % self.dx_percent)
             self.onPinholeSmear(event=None)

Second, the Windows-only time delay that had been inserted after model selection to paper over a transient compilation error is removed:

-        # Time delay has been introduced to prevent _handle error
-        # on Windows
-        # This part of code is executed when model is selected and
-        # it's parameters are changed (with respect to previously
-        # selected model). There are two Iq evaluations occuring one
-        # after another and therefore there may be compilation error
-        # if model is calculated for the first time.
-        # This seems to be Windows only issue - haven't tested on Linux
-        # though. The proper solution (other than time delay) requires
-        # more fundemental code refatoring
-        # Wojtek P. Nov 7, 2016
-        if not ON_MAC:
-            time.sleep(0.1)
         self.Refresh()
2613 2597 """ 2614 self._sleep4sec()2615 2598 self.Layout() 2616 2599 return 2617 2618 def _sleep4sec(self):2619 """2620 sleep for 1 sec only applied on Mac2621 Note: This 1sec helps for Mac not to crash on self.2622 Layout after self._draw_model2623 """2624 if ON_MAC:2625 time.sleep(1)2626 2600 2627 2601 def _find_polyfunc_selection(self, disp_func=None): … … 2657 2631 self.qmin_x = data_min 2658 2632 self.qmax_x = math.sqrt(x * x + y * y) 2659 # self.data.mask = n umpy.ones(len(self.data.data),dtype=bool)2633 # self.data.mask = np.ones(len(self.data.data),dtype=bool) 2660 2634 # check smearing 2661 2635 if not self.disable_smearer.GetValue(): … … 3369 3343 3370 3344 if value[1] == 'array': 3371 pd_vals = n umpy.array(value[2])3372 pd_weights = n umpy.array(value[3])3345 pd_vals = np.array(value[2]) 3346 pd_weights = np.array(value[3]) 3373 3347 if len(pd_vals) == 0 or len(pd_vals) != len(pd_weights): 3374 3348 msg = ("bad array distribution parameters for %s" -
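The log-spaced q generation in basepage.py above clamps the exponents before calling `np.logspace`, guarding against a non-positive qmin or an absurdly large qmax. A standalone sketch of that construction (the helper name is illustrative, not from the source):

```python
import numpy as np

def make_log_q(qmin, qmax, npts):
    # Clamp the exponents as basepage.py does: fall back to 10^-10 / 10^10
    # when the requested range would break log10.
    lo = np.log10(qmin) if qmin >= 1.e-10 else -10.0
    hi = np.log10(qmax) if qmax <= 1.e10 else 10.0
    return np.logspace(start=lo, stop=hi, num=npts, endpoint=True, base=10.0)

q = make_log_q(1e-3, 1.0, 50)
```

Note that `np.logspace` takes exponents, not q values, which is why the clamping happens in log10 space.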
TabularUnified src/sas/sasgui/perspectives/fitting/fitpage.py ¶
r8c0d9eb red2276f 6 6 import wx 7 7 import wx.lib.newevent 8 import numpy 8 import numpy as np 9 9 import copy 10 10 import math … … 29 29 _BOX_WIDTH = 76 30 30 _DATA_BOX_WIDTH = 300 31 SMEAR_SIZE_L = 0.0032 31 SMEAR_SIZE_H = 0.00 33 32 CUSTOM_MODEL = 'Plugin Models' … … 210 209 "Please enter only the value of interest to customize smearing..." 211 210 smear_message_new_psmear = \ 212 "Please enter both; the dQ will be generated by interpolation..."211 "Please enter a fixed percentage to be applied to all Q values..." 213 212 smear_message_2d_x_title = "<dQp>[1/A]:" 214 213 smear_message_2d_y_title = "<dQs>[1/A]:" 215 smear_message_pinhole_min_title = "dQ_low[1/A]:" 216 smear_message_pinhole_max_title = "dQ_high[1/A]:" 214 smear_message_pinhole_percent_title = "dQ[%]:" 217 215 smear_message_slit_height_title = "Slit height[1/A]:" 218 216 smear_message_slit_width_title = "Slit width[1/A]:" … … 307 305 308 306 # textcntrl for custom resolution 309 self.smear_pinhole_max = ModelTextCtrl(self, wx.ID_ANY, 310 size=(_BOX_WIDTH - 25, 20), 311 style=wx.TE_PROCESS_ENTER, 312 text_enter_callback=self.onPinholeSmear) 313 self.smear_pinhole_min = ModelTextCtrl(self, wx.ID_ANY, 314 size=(_BOX_WIDTH - 25, 20), 315 style=wx.TE_PROCESS_ENTER, 316 text_enter_callback=self.onPinholeSmear) 307 self.smear_pinhole_percent = ModelTextCtrl(self, wx.ID_ANY, 308 size=(_BOX_WIDTH - 25, 20), 309 style=wx.TE_PROCESS_ENTER, 310 text_enter_callback= 311 self.onPinholeSmear) 317 312 self.smear_slit_height = ModelTextCtrl(self, wx.ID_ANY, 318 313 size=(_BOX_WIDTH - 25, 20), … … 333 328 334 329 # set default values for smear 335 self.smear_pinhole_max.SetValue(str(self.dx_max)) 336 self.smear_pinhole_min.SetValue(str(self.dx_min)) 330 self.smear_pinhole_percent.SetValue(str(self.dx_percent)) 337 331 self.smear_slit_height.SetValue(str(self.dxl)) 338 332 self.smear_slit_width.SetValue(str(self.dxw)) … … 426 420 self.smear_description_2d_y.SetToolTipString( 427 421 " dQs(perpendicular) in q_phi 
direction.") 428 self.smear_description_pin_min = wx.StaticText(self, wx.ID_ANY, 429 smear_message_pinhole_min_title, style=wx.ALIGN_LEFT) 430 self.smear_description_pin_max = wx.StaticText(self, wx.ID_ANY, 431 smear_message_pinhole_max_title, style=wx.ALIGN_LEFT) 422 self.smear_description_pin_percent = wx.StaticText(self, wx.ID_ANY, 423 smear_message_pinhole_percent_title, 424 style=wx.ALIGN_LEFT) 432 425 self.smear_description_slit_height = wx.StaticText(self, wx.ID_ANY, 433 426 smear_message_slit_height_title, style=wx.ALIGN_LEFT) … … 453 446 self.sizer_new_smear.Add((15, -1)) 454 447 self.sizer_new_smear.Add(self.smear_description_2d_x, 0, wx.CENTER, 10) 455 self.sizer_new_smear.Add(self.smear_description_pin_min,456 0, wx.CENTER, 10)457 448 self.sizer_new_smear.Add(self.smear_description_slit_height, 458 449 0, wx.CENTER, 10) 459 450 460 self.sizer_new_smear.Add(self.smear_pinhole_min, 0, wx.CENTER, 10)461 451 self.sizer_new_smear.Add(self.smear_slit_height, 0, wx.CENTER, 10) 462 452 self.sizer_new_smear.Add(self.smear_data_left, 0, wx.CENTER, 10) … … 464 454 self.sizer_new_smear.Add(self.smear_description_2d_y, 465 455 0, wx.CENTER, 10) 466 self.sizer_new_smear.Add(self.smear_description_pin_ max,456 self.sizer_new_smear.Add(self.smear_description_pin_percent, 467 457 0, wx.CENTER, 10) 468 458 self.sizer_new_smear.Add(self.smear_description_slit_width, 469 459 0, wx.CENTER, 10) 470 460 471 self.sizer_new_smear.Add(self.smear_pinhole_ max, 0, wx.CENTER, 10)461 self.sizer_new_smear.Add(self.smear_pinhole_percent, 0, wx.CENTER, 10) 472 462 self.sizer_new_smear.Add(self.smear_slit_width, 0, wx.CENTER, 10) 473 463 self.sizer_new_smear.Add(self.smear_data_right, 0, wx.CENTER, 10) … … 1125 1115 if item.GetValue(): 1126 1116 if button_list.index(item) == 0: 1127 flag = 0 # dy = n umpy.ones_like(dy_data)1117 flag = 0 # dy = np.ones_like(dy_data) 1128 1118 elif button_list.index(item) == 1: 1129 1119 flag = 1 # dy = dy_data 1130 1120 elif button_list.index(item) == 2: 
1131 flag = 2 # dy = n umpy.sqrt(numpy.abs(data))1121 flag = 2 # dy = np.sqrt(np.abs(data)) 1132 1122 elif button_list.index(item) == 3: 1133 flag = 3 # dy = n umpy.abs(data)1123 flag = 3 # dy = np.abs(data) 1134 1124 break 1135 1125 return flag … … 1432 1422 key = event.GetKeyCode() 1433 1423 length = len(self.data.x) 1434 indx = (n umpy.abs(self.data.x - x_data)).argmin()1424 indx = (np.abs(self.data.x - x_data)).argmin() 1435 1425 # return array.flat[idx] 1436 1426 if key == wx.WXK_PAGEUP or key == wx.WXK_NUMPAD_PAGEUP: … … 1487 1477 self.enable2D: 1488 1478 # set mask 1489 radius = n umpy.sqrt(self.data.qx_data * self.data.qx_data +1479 radius = np.sqrt(self.data.qx_data * self.data.qx_data + 1490 1480 self.data.qy_data * self.data.qy_data) 1491 1481 index_data = ((self.qmin_x <= radius) & (radius <= self.qmax_x)) 1492 1482 index_data = (index_data) & (self.data.mask) 1493 index_data = (index_data) & (n umpy.isfinite(self.data.data))1483 index_data = (index_data) & (np.isfinite(self.data.data)) 1494 1484 if len(index_data[index_data]) < 10: 1495 1485 msg = "Cannot Plot :No or too little npts in" … … 1581 1571 if self.dxw is None: 1582 1572 self.dxw = "" 1583 if self.dx_min is None: 1584 self.dx_min = SMEAR_SIZE_L 1585 if self.dx_max is None: 1586 self.dx_max = SMEAR_SIZE_H 1573 if self.dx_percent is None: 1574 self.dx_percent = SMEAR_SIZE_H 1587 1575 1588 1576 def _get_smear_info(self): … … 1610 1598 and data.dqx_data.any() != 0: 1611 1599 self.smear_type = "Pinhole2d" 1612 self.dq_l = format_number(n umpy.average(data.dqx_data))1613 self.dq_r = format_number(n umpy.average(data.dqy_data))1600 self.dq_l = format_number(np.average(data.dqx_data)) 1601 self.dq_r = format_number(np.average(data.dqy_data)) 1614 1602 return 1615 1603 else: 1616 1604 return 1617 1605 # check if it is pinhole smear and get min max if it is. 
1618 if data.dx is not None and n umpy.any(data.dx):1606 if data.dx is not None and np.any(data.dx): 1619 1607 self.smear_type = "Pinhole" 1620 1608 self.dq_l = data.dx[0] … … 1624 1612 elif data.dxl is not None or data.dxw is not None: 1625 1613 self.smear_type = "Slit" 1626 if data.dxl is not None and n umpy.all(data.dxl, 0):1614 if data.dxl is not None and np.all(data.dxl, 0): 1627 1615 self.dq_l = data.dxl[0] 1628 if data.dxw is not None and n umpy.all(data.dxw, 0):1616 if data.dxw is not None and np.all(data.dxw, 0): 1629 1617 self.dq_r = data.dxw[0] 1630 1618 # return self.smear_type,self.dq_l,self.dq_r … … 1646 1634 self.smear_description_2d_y.Show(True) 1647 1635 if self.pinhole_smearer.GetValue(): 1648 self.smear_pinhole_min.Show(True) 1649 self.smear_pinhole_max.Show(True) 1636 self.smear_pinhole_percent.Show(True) 1650 1637 # smear from data 1651 1638 elif self.enable_smearer.GetValue(): … … 1658 1645 self.smear_description_slit_width.Show(True) 1659 1646 elif self.smear_type == 'Pinhole': 1660 self.smear_description_pin_min.Show(True) 1661 self.smear_description_pin_max.Show(True) 1647 self.smear_description_pin_percent.Show(True) 1662 1648 self.smear_description_smear_type.Show(True) 1663 1649 self.smear_description_type.Show(True) … … 1668 1654 if self.smear_type == 'Pinhole': 1669 1655 self.smear_message_new_p.Show(True) 1670 self.smear_description_pin_min.Show(True) 1671 self.smear_description_pin_max.Show(True) 1672 1673 self.smear_pinhole_min.Show(True) 1674 self.smear_pinhole_max.Show(True) 1656 self.smear_description_pin_percent.Show(True) 1657 1658 self.smear_pinhole_percent.Show(True) 1675 1659 # custom slit smear 1676 1660 elif self.slit_smearer.GetValue(): … … 1697 1681 self.smear_data_left.Hide() 1698 1682 self.smear_data_right.Hide() 1699 self.smear_description_pin_min.Hide() 1700 self.smear_pinhole_min.Hide() 1701 self.smear_description_pin_max.Hide() 1702 self.smear_pinhole_max.Hide() 1683 self.smear_description_pin_percent.Hide() 1684 
self.smear_pinhole_percent.Hide() 1703 1685 self.smear_description_slit_height.Hide() 1704 1686 self.smear_slit_height.Hide() … … 1826 1808 if not flag: 1827 1809 self.onSmear(None) 1828 1829 def _mac_sleep(self, sec=0.2):1830 """1831 Give sleep to MAC1832 """1833 if self.is_mac:1834 time.sleep(sec)1835 1810 1836 1811 def get_view_mode(self): … … 1939 1914 self.default_mask = copy.deepcopy(self.data.mask) 1940 1915 if self.data.err_data is not None \ 1941 and n umpy.any(self.data.err_data):1916 and np.any(self.data.err_data): 1942 1917 di_flag = True 1943 1918 if self.data.dqx_data is not None \ 1944 and n umpy.any(self.data.dqx_data):1919 and np.any(self.data.dqx_data): 1945 1920 dq_flag = True 1946 1921 else: 1947 1922 self.slit_smearer.Enable(True) 1948 1923 self.pinhole_smearer.Enable(True) 1949 if self.data.dy is not None and n umpy.any(self.data.dy):1924 if self.data.dy is not None and np.any(self.data.dy): 1950 1925 di_flag = True 1951 if self.data.dx is not None and n umpy.any(self.data.dx):1926 if self.data.dx is not None and np.any(self.data.dx): 1952 1927 dq_flag = True 1953 elif self.data.dxl is not None and n umpy.any(self.data.dxl):1928 elif self.data.dxl is not None and np.any(self.data.dxl): 1954 1929 dq_flag = True 1955 1930 … … 2085 2060 if self.data.__class__.__name__ == "Data2D" or \ 2086 2061 self.enable2D: 2087 radius = n umpy.sqrt(self.data.qx_data * self.data.qx_data +2062 radius = np.sqrt(self.data.qx_data * self.data.qx_data + 2088 2063 self.data.qy_data * self.data.qy_data) 2089 2064 index_data = (self.qmin_x <= radius) & (radius <= self.qmax_x) 2090 2065 index_data = (index_data) & (self.data.mask) 2091 index_data = (index_data) & (n umpy.isfinite(self.data.data))2066 index_data = (index_data) & (np.isfinite(self.data.data)) 2092 2067 npts2fit = len(self.data.data[index_data]) 2093 2068 else: … … 2122 2097 # make sure stop button to fit button all the time 2123 2098 self._on_fit_complete() 2124 if out is None or not n 
umpy.isfinite(chisqr):2099 if out is None or not np.isfinite(chisqr): 2125 2100 raise ValueError, "Fit error occured..." 2126 2101 … … 2133 2108 2134 2109 # Check if chi2 is finite 2135 if chisqr is not None and n umpy.isfinite(chisqr):2110 if chisqr is not None and np.isfinite(chisqr): 2136 2111 # format chi2 2137 2112 chi2 = format_number(chisqr, True) … … 2185 2160 2186 2161 if cov[ind] is not None: 2187 if n umpy.isfinite(float(cov[ind])):2162 if np.isfinite(float(cov[ind])): 2188 2163 val_err = format_number(cov[ind], True) 2189 2164 item[4].SetForegroundColour(wx.BLACK) … … 2206 2181 self.save_current_state() 2207 2182 2208 if not self.is_mac:2209 self.Layout()2210 self.Refresh()2211 self._mac_sleep(0.1)2212 2183 # plot model ( when drawing, do not update chisqr value again) 2213 2184 self._draw_model(update_chisqr=False, source='fit') … … 2249 2220 # event case of radio button 2250 2221 if tcrtl.GetValue(): 2251 self.dx_min = 0.0 2252 self.dx_max = 0.0 2222 self.dx_percent = 0.0 2253 2223 is_new_pinhole = True 2254 2224 else: … … 2287 2257 """ 2288 2258 # get the values 2289 pin_min = self.smear_pinhole_min.GetValue() 2290 pin_max = self.smear_pinhole_max.GetValue() 2291 2292 # Check changes in slit width 2259 pin_percent = self.smear_pinhole_percent.GetValue() 2260 2261 # Check changes in slit heigth 2293 2262 try: 2294 dx_ min = float(pin_min)2263 dx_percent = float(pin_percent) 2295 2264 except: 2296 2265 return True 2297 if self.dx_min != dx_min: 2298 return True 2299 2300 # Check changes in slit heigth 2301 try: 2302 dx_max = float(pin_max) 2303 except: 2304 return True 2305 if self.dx_max != dx_max: 2266 if self.dx_percent != dx_percent: 2306 2267 return True 2307 2268 return False … … 2319 2280 self.smear_type = 'Pinhole2d' 2320 2281 len_data = len(data.data) 2321 data.dqx_data = n umpy.zeros(len_data)2322 data.dqy_data = n umpy.zeros(len_data)2282 data.dqx_data = np.zeros(len_data) 2283 data.dqy_data = np.zeros(len_data) 2323 2284 else: 2324 2285 
self.smear_type = 'Pinhole' 2325 2286 len_data = len(data.x) 2326 data.dx = n umpy.zeros(len_data)2287 data.dx = np.zeros(len_data) 2327 2288 data.dxl = None 2328 2289 data.dxw = None 2329 2290 msg = None 2330 2291 2331 get_pin_min = self.smear_pinhole_min 2332 get_pin_max = self.smear_pinhole_max 2333 2334 if not check_float(get_pin_min): 2335 get_pin_min.SetBackgroundColour("pink") 2336 msg = "Model Error:wrong value entered!!!" 2337 elif not check_float(get_pin_max): 2338 get_pin_max.SetBackgroundColour("pink") 2292 get_pin_percent = self.smear_pinhole_percent 2293 2294 if not check_float(get_pin_percent): 2295 get_pin_percent.SetBackgroundColour("pink") 2339 2296 msg = "Model Error:wrong value entered!!!" 2340 2297 else: 2341 2298 if len_data < 2: 2342 2299 len_data = 2 2343 self.dx_min = float(get_pin_min.GetValue()) 2344 self.dx_max = float(get_pin_max.GetValue()) 2345 if self.dx_min < 0: 2346 get_pin_min.SetBackgroundColour("pink") 2300 self.dx_percent = float(get_pin_percent.GetValue()) 2301 if self.dx_percent < 0: 2302 get_pin_percent.SetBackgroundColour("pink") 2347 2303 msg = "Model Error:This value can not be negative!!!" 2348 elif self.dx_max < 0: 2349 get_pin_max.SetBackgroundColour("pink") 2350 msg = "Model Error:This value can not be negative!!!" 
2351 elif self.dx_min is not None and self.dx_max is not None: 2304 elif self.dx_percent is not None: 2305 percent = self.dx_percent/100 2352 2306 if self._is_2D(): 2353 data.dqx_data[data.dqx_data == 0] = self.dx_min 2354 data.dqy_data[data.dqy_data == 0] = self.dx_max 2355 elif self.dx_min == self.dx_max: 2356 data.dx[data.dx == 0] = self.dx_min 2307 data.dqx_data[data.dqx_data == 0] = percent * data.qx_data 2308 data.dqy_data[data.dqy_data == 0] = percent * data.qy_data 2357 2309 else: 2358 step = (self.dx_max - self.dx_min) / (len_data - 1) 2359 data.dx = numpy.arange(self.dx_min, 2360 self.dx_max + step / 1.1, 2361 step) 2362 elif self.dx_min is not None: 2363 if self._is_2D(): 2364 data.dqx_data[data.dqx_data == 0] = self.dx_min 2365 else: 2366 data.dx[data.dx == 0] = self.dx_min 2367 elif self.dx_max is not None: 2368 if self._is_2D(): 2369 data.dqy_data[data.dqy_data == 0] = self.dx_max 2370 else: 2371 data.dx[data.dx == 0] = self.dx_max 2310 data.dx = percent * data.x 2372 2311 self.current_smearer = smear_selection(data, self.model) 2373 2312 # 2D need to set accuracy … … 2379 2318 wx.PostEvent(self._manager.parent, StatusEvent(status=msg)) 2380 2319 else: 2381 get_pin_min.SetBackgroundColour("white") 2382 get_pin_max.SetBackgroundColour("white") 2320 get_pin_percent.SetBackgroundColour("white") 2383 2321 # set smearing value whether or not the data contain the smearing info 2384 2322 … … 2520 2458 try: 2521 2459 self.dxl = float(self.smear_slit_height.GetValue()) 2522 data.dxl = self.dxl * n umpy.ones(data_len)2460 data.dxl = self.dxl * np.ones(data_len) 2523 2461 self.smear_slit_height.SetBackgroundColour(wx.WHITE) 2524 2462 except: 2525 2463 self.dxl = None 2526 data.dxl = n umpy.zeros(data_len)2464 data.dxl = np.zeros(data_len) 2527 2465 if self.smear_slit_height.GetValue().lstrip().rstrip() != "": 2528 2466 self.smear_slit_height.SetBackgroundColour("pink") … … 2533 2471 self.dxw = float(self.smear_slit_width.GetValue()) 2534 2472 
self.smear_slit_width.SetBackgroundColour(wx.WHITE) 2535 data.dxw = self.dxw * n umpy.ones(data_len)2473 data.dxw = self.dxw * np.ones(data_len) 2536 2474 except: 2537 2475 self.dxw = None 2538 data.dxw = n umpy.zeros(data_len)2476 data.dxw = np.zeros(data_len) 2539 2477 if self.smear_slit_width.GetValue().lstrip().rstrip() != "": 2540 2478 self.smear_slit_width.SetBackgroundColour("pink") … … 2663 2601 if event is None: 2664 2602 output = "-" 2665 elif not n umpy.isfinite(event.output):2603 elif not np.isfinite(event.output): 2666 2604 output = "-" 2667 2605 else: -
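The fitpage.py change above replaces the `dx_min`/`dx_max` interpolation with a single `dx_percent`, so every missing dQ becomes a fixed fraction of its Q value. A minimal sketch of the 1D fill-in step (the helper name is hypothetical; the original works in place on a Data1D object):

```python
import numpy as np

def fill_pinhole_dx(q, dx_percent):
    # dQ = (dx_percent / 100) * Q for every point, as in onPinholeSmear.
    return (dx_percent / 100.0) * np.asarray(q, dtype=float)

dx = fill_pinhole_dx([0.1, 0.2, 0.4], 5.0)  # 5% resolution
```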
TabularUnified src/sas/sasgui/perspectives/fitting/fitting.py ¶
rddbac66 red2276f 16 16 import wx 17 17 import logging 18 import numpy 18 import numpy as np 19 19 import time 20 20 from copy import deepcopy … … 876 876 qmin=qmin, qmax=qmax, weight=weight) 877 877 878 def _mac_sleep(self, sec=0.2):879 """880 Give sleep to MAC881 """882 if ON_MAC:883 time.sleep(sec)884 885 878 def draw_model(self, model, page_id, data=None, smearer=None, 886 879 enable1D=True, enable2D=False, … … 1030 1023 manager=self, 1031 1024 improvement_delta=0.1) 1032 self._mac_sleep(0.2)1033 1025 1034 1026 # batch fit … … 1270 1262 :param elapsed: time spent at the fitting level 1271 1263 """ 1272 self._mac_sleep(0.2)1273 1264 uid = page_id[0] 1274 1265 if uid in self.fit_thread_list.keys(): … … 1332 1323 new_theory = copy_data.data 1333 1324 new_theory[res.index] = res.theory 1334 new_theory[res.index == False] = n umpy.nan1325 new_theory[res.index == False] = np.nan 1335 1326 correct_result = True 1336 1327 #get all fittable parameters of the current model … … 1341 1332 param_list.remove(param) 1342 1333 if not correct_result or res.fitness is None or \ 1343 not n umpy.isfinite(res.fitness) or \1344 numpy.any(res.pvec == None) or not \1345 numpy.all(numpy.isfinite(res.pvec)):1334 not np.isfinite(res.fitness) or \ 1335 np.any(res.pvec == None) or not \ 1336 np.all(np.isfinite(res.pvec)): 1346 1337 data_name = str(None) 1347 1338 if data is not None: … … 1352 1343 msg += "Data %s and Model %s did not fit.\n" % (data_name, 1353 1344 model_name) 1354 ERROR = n umpy.NAN1345 ERROR = np.NAN 1355 1346 cell = BatchCell() 1356 1347 cell.label = res.fitness … … 1366 1357 batch_inputs["error on %s" % str(param)].append(ERROR) 1367 1358 else: 1368 # TODO: Why sometimes res.pvec comes with n umpy.float64?1359 # TODO: Why sometimes res.pvec comes with np.float64? 
1369 1360 # probably from scipy lmfit 1370 if res.pvec.__class__ == n umpy.float64:1361 if res.pvec.__class__ == np.float64: 1371 1362 res.pvec = [res.pvec] 1372 1363 … … 1520 1511 page_id = [] 1521 1512 ## fit more than 1 model at the same time 1522 self._mac_sleep(0.2)1523 1513 try: 1524 1514 index = 0 … … 1533 1523 fit_msg = res.mesg 1534 1524 if res.fitness is None or \ 1535 not n umpy.isfinite(res.fitness) or \1536 numpy.any(res.pvec == None) or \1537 not n umpy.all(numpy.isfinite(res.pvec)):1525 not np.isfinite(res.fitness) or \ 1526 np.any(res.pvec == None) or \ 1527 not np.all(np.isfinite(res.pvec)): 1538 1528 fit_msg += "\nFitting did not converge!!!" 1539 1529 wx.CallAfter(self._update_fit_button, page_id) 1540 1530 else: 1541 1531 #set the panel when fit result are float not list 1542 if res.pvec.__class__ == n umpy.float64:1532 if res.pvec.__class__ == np.float64: 1543 1533 pvec = [res.pvec] 1544 1534 else: 1545 1535 pvec = res.pvec 1546 if res.stderr.__class__ == n umpy.float64:1536 if res.stderr.__class__ == np.float64: 1547 1537 stderr = [res.stderr] 1548 1538 else: … … 1692 1682 if dy is None: 1693 1683 new_plot.is_data = False 1694 new_plot.dy = n umpy.zeros(len(y))1684 new_plot.dy = np.zeros(len(y)) 1695 1685 # If this is a theory curve, pick the proper symbol to make it a curve 1696 1686 new_plot.symbol = GUIFRAME_ID.CURVE_SYMBOL_NUM … … 1741 1731 """ 1742 1732 try: 1743 n umpy.nan_to_num(y)1733 np.nan_to_num(y) 1744 1734 new_plot = self.create_theory_1D(x, y, page_id, model, data, state, 1745 1735 data_description=model.name, … … 1755 1745 data_id="Data " + data.name + " unsmeared", 1756 1746 dy=unsmeared_error) 1757 1758 if sq_model is not None and pq_model is not None: 1759 self.create_theory_1D(x, sq_model, page_id, model, data, state, 1760 data_description=model.name + " S(q)", 1761 data_id=str(page_id) + " " + data.name + " S(q)") 1762 self.create_theory_1D(x, pq_model, page_id, model, data, state, 1763 data_description=model.name + " 
P(q)", 1764 data_id=str(page_id) + " " + data.name + " P(q)") 1765 1747 # Comment this out until we can get P*S models with correctly populated parameters 1748 #if sq_model is not None and pq_model is not None: 1749 # self.create_theory_1D(x, sq_model, page_id, model, data, state, 1750 # data_description=model.name + " S(q)", 1751 # data_id=str(page_id) + " " + data.name + " S(q)") 1752 # self.create_theory_1D(x, pq_model, page_id, model, data, state, 1753 # data_description=model.name + " P(q)", 1754 # data_id=str(page_id) + " " + data.name + " P(q)") 1766 1755 1767 1756 current_pg = self.fit_panel.get_page_by_id(page_id) … … 1826 1815 that can be plot. 1827 1816 """ 1828 n umpy.nan_to_num(image)1817 np.nan_to_num(image) 1829 1818 new_plot = Data2D(image=image, err_image=data.err_data) 1830 1819 new_plot.name = model.name + '2d' … … 2018 2007 if data_copy.__class__.__name__ == "Data2D": 2019 2008 if index == None: 2020 index = n umpy.ones(len(data_copy.data), dtype=bool)2009 index = np.ones(len(data_copy.data), dtype=bool) 2021 2010 if weight != None: 2022 2011 data_copy.err_data = weight 2023 2012 # get rid of zero error points 2024 2013 index = index & (data_copy.err_data != 0) 2025 index = index & (n umpy.isfinite(data_copy.data))2014 index = index & (np.isfinite(data_copy.data)) 2026 2015 fn = data_copy.data[index] 2027 2016 theory_data = self.page_finder[page_id].get_theory_data(fid=data_copy.id) … … 2033 2022 # 1 d theory from model_thread is only in the range of index 2034 2023 if index == None: 2035 index = n umpy.ones(len(data_copy.y), dtype=bool)2024 index = np.ones(len(data_copy.y), dtype=bool) 2036 2025 if weight != None: 2037 2026 data_copy.dy = weight 2038 2027 if data_copy.dy == None or data_copy.dy == []: 2039 dy = n umpy.ones(len(data_copy.y))2028 dy = np.ones(len(data_copy.y)) 2040 2029 else: 2041 2030 ## Set consistently w/AbstractFitengine: … … 2058 2047 return 2059 2048 2060 residuals = res[n umpy.isfinite(res)]2049 residuals = 
res[np.isfinite(res)] 2061 2050 # get chisqr only w/finite 2062 chisqr = n umpy.average(residuals * residuals)2051 chisqr = np.average(residuals * residuals) 2063 2052 2064 2053 self._plot_residuals(page_id=page_id, data=data_copy, … … 2097 2086 residuals.qy_data = data_copy.qy_data 2098 2087 residuals.q_data = data_copy.q_data 2099 residuals.err_data = n umpy.ones(len(residuals.data))2088 residuals.err_data = np.ones(len(residuals.data)) 2100 2089 residuals.xmin = min(residuals.qx_data) 2101 2090 residuals.xmax = max(residuals.qx_data) … … 2111 2100 # 1 d theory from model_thread is only in the range of index 2112 2101 if data_copy.dy == None or data_copy.dy == []: 2113 dy = n umpy.ones(len(data_copy.y))2102 dy = np.ones(len(data_copy.y)) 2114 2103 else: 2115 2104 if weight == None: 2116 dy = n umpy.ones(len(data_copy.y))2105 dy = np.ones(len(data_copy.y)) 2117 2106 ## Set consitently w/AbstractFitengine: 2118 2107 ## But this should be corrected later. … … 2133 2122 residuals.y = (fn - gn[index]) / en 2134 2123 residuals.x = data_copy.x[index] 2135 residuals.dy = n umpy.ones(len(residuals.y))2124 residuals.dy = np.ones(len(residuals.y)) 2136 2125 residuals.dx = None 2137 2126 residuals.dxl = None -
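fitting.py computes chi² by averaging the squared residuals over finite points only, dropping NaN/inf entries first. Condensed to a free function with illustrative names:

```python
import numpy as np

def chisqr_from_residuals(y, theory, dy):
    res = (y - theory) / dy
    res = res[np.isfinite(res)]   # keep only finite residuals, as in fitting.py
    return np.average(res * res)

chi2 = chisqr_from_residuals(np.array([1.0, 2.0, np.nan]),
                             np.array([1.5, 1.0, 2.0]),
                             np.array([0.5, 1.0, 1.0]))
```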
TabularUnified src/sas/sasgui/perspectives/fitting/media/plugin.rst ¶
r5295cf5 r984f3fc 364 364 - the limits will show up as the default limits for the fit making it easy, 365 365 for example, to force the radius to always be greater than zero. 366 367 - these are hard limits defining the valid range of parameter values; 368 polydispersity distributions will be truncated at the limits. 366 369 367 370 - **"type"** can be one of: "", "sld", "volume", or "orientation".
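In a plugin model the limits described above live in the model's parameter table; the new doc text clarifies that they are hard limits which also truncate polydispersity distributions. A minimal illustrative row in the sasmodels-style table layout (the values are made up, not from this changeset):

```python
import numpy as np

#   name      units   default  [lower, upper]  type      description
parameters = [
    ["radius", "Ang",  50.0,   [0.0, np.inf],  "volume", "sphere radius"],
]

lower, upper = parameters[0][3]
```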
TabularUnified src/sas/sasgui/perspectives/fitting/model_thread.py ¶
rc1c9929 r9a5097c 4 4 5 5 import time 6 import numpy 6 import numpy as np 7 7 import math 8 8 from sas.sascalc.data_util.calcthread import CalcThread … … 68 68 69 69 # Define matrix where data will be plotted 70 radius = n umpy.sqrt((self.data.qx_data * self.data.qx_data) + \70 radius = np.sqrt((self.data.qx_data * self.data.qx_data) + \ 71 71 (self.data.qy_data * self.data.qy_data)) 72 72 … … 75 75 index_model = (self.qmin <= radius) & (radius <= self.qmax) 76 76 index_model = index_model & self.data.mask 77 index_model = index_model & n umpy.isfinite(self.data.data)77 index_model = index_model & np.isfinite(self.data.data) 78 78 79 79 if self.smearer is not None: … … 91 91 self.data.qy_data[index_model] 92 92 ]) 93 output = n umpy.zeros(len(self.data.qx_data))93 output = np.zeros(len(self.data.qx_data)) 94 94 # output default is None 95 95 # This method is to distinguish between masked … … 163 163 """ 164 164 self.starttime = time.time() 165 output = n umpy.zeros((len(self.data.x)))165 output = np.zeros((len(self.data.x))) 166 166 index = (self.qmin <= self.data.x) & (self.data.x <= self.qmax) 167 167 … … 175 175 self.qmax) 176 176 mask = self.data.x[first_bin:last_bin+1] 177 unsmeared_output = n umpy.zeros((len(self.data.x)))177 unsmeared_output = np.zeros((len(self.data.x))) 178 178 unsmeared_output[first_bin:last_bin+1] = self.model.evalDistribution(mask) 179 179 self.smearer.model = self.model … … 183 183 # Check that the arrays are compatible. If we only have a model but no data, 184 184 # the length of data.y will be zero. 
185 if isinstance(self.data.y, n umpy.ndarray) and output.shape == self.data.y.shape:186 unsmeared_data = n umpy.zeros((len(self.data.x)))187 unsmeared_error = n umpy.zeros((len(self.data.x)))185 if isinstance(self.data.y, np.ndarray) and output.shape == self.data.y.shape: 186 unsmeared_data = np.zeros((len(self.data.x))) 187 unsmeared_error = np.zeros((len(self.data.x))) 188 188 unsmeared_data[first_bin:last_bin+1] = self.data.y[first_bin:last_bin+1]\ 189 189 * unsmeared_output[first_bin:last_bin+1]\ … … 209 209 210 210 if p_model is not None and s_model is not None: 211 sq_values = n umpy.zeros((len(self.data.x)))212 pq_values = n umpy.zeros((len(self.data.x)))211 sq_values = np.zeros((len(self.data.x))) 212 pq_values = np.zeros((len(self.data.x))) 213 213 sq_values[index] = s_model.evalDistribution(self.data.x[index]) 214 214 pq_values[index] = p_model.evalDistribution(self.data.x[index]) -
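model_thread.py builds its 2D evaluation index by combining three masks: the q-range cut, the user-supplied mask, and a finite-data check. A condensed sketch of that combination (function name is illustrative):

```python
import numpy as np

def eval_index(qx, qy, qmin, qmax, mask, data):
    # Points are kept only if qmin <= |q| <= qmax, unmasked, and finite.
    radius = np.sqrt(qx * qx + qy * qy)
    index = (qmin <= radius) & (radius <= qmax)
    return index & mask & np.isfinite(data)

idx = eval_index(np.array([0.0, 0.3, 0.4]),
                 np.array([0.4, 0.4, 0.0]),
                 0.1, 0.45,
                 np.array([True, True, True]),
                 np.array([1.0, np.nan, 2.0]))
```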
TabularUnified src/sas/sasgui/perspectives/fitting/pagestate.py ¶
r71601312 red2276f 18 18 import copy 19 19 import logging 20 import numpy 20 import numpy as np 21 21 import traceback 22 22 … … 74 74 ["dq_l", "dq_l", "float"], 75 75 ["dq_r", "dq_r", "float"], 76 ["dx_max", "dx_max", "float"], 77 ["dx_min", "dx_min", "float"], 76 ["dx_percent", "dx_percent", "float"], 78 77 ["dxl", "dxl", "float"], 79 78 ["dxw", "dxw", "float"]] … … 215 214 self.dq_l = None 216 215 self.dq_r = None 217 self.dx_ max= None218 self.dx_ min = None216 self.dx_percent = None 217 self.dx_old = False 219 218 self.dxl = None 220 219 self.dxw = None … … 343 342 obj.dq_l = copy.deepcopy(self.dq_l) 344 343 obj.dq_r = copy.deepcopy(self.dq_r) 345 obj.dx_ max = copy.deepcopy(self.dx_max)346 obj.dx_ min = copy.deepcopy(self.dx_min)344 obj.dx_percent = copy.deepcopy(self.dx_percent) 345 obj.dx_old = copy.deepcopy(self.dx_old) 347 346 obj.dxl = copy.deepcopy(self.dxl) 348 347 obj.dxw = copy.deepcopy(self.dxw) … … 411 410 for fittable, name, value, _, uncert, lower, upper, units in params: 412 411 if not value: 413 value = n umpy.nan412 value = np.nan 414 413 if not uncert or uncert[1] == '' or uncert[1] == 'None': 415 414 uncert[0] = False 416 uncert[1] = n umpy.nan415 uncert[1] = np.nan 417 416 if not upper or upper[1] == '' or upper[1] == 'None': 418 417 upper[0] = False 419 upper[1] = n umpy.nan418 upper[1] = np.nan 420 419 if not lower or lower[1] == '' or lower[1] == 'None': 421 420 lower[0] = False 422 lower[1] = n umpy.nan421 lower[1] = np.nan 423 422 if is_string: 424 423 p[name] = str(value) … … 450 449 lower = params.get(name + ".lower", '-inf') 451 450 units = params.get(name + ".units") 452 if std is not None and std is not n umpy.nan:451 if std is not None and std is not np.nan: 453 452 std = [True, str(std)] 454 453 else: 455 454 std = [False, ''] 456 if lower is not None and lower is not n umpy.nan:455 if lower is not None and lower is not np.nan: 457 456 lower = [True, str(lower)] 458 457 else: 459 458 lower = [True, '-inf'] 460 if upper is not 
None and upper is not n umpy.nan:459 if upper is not None and upper is not np.nan: 461 460 upper = [True, str(upper)] 462 461 else: … … 562 561 rep += "dq_l : %s\n" % self.dq_l 563 562 rep += "dq_r : %s\n" % self.dq_r 564 rep += "dx_max : %s\n" % str(self.dx_max) 565 rep += "dx_min : %s\n" % str(self.dx_min) 563 rep += "dx_percent : %s\n" % str(self.dx_percent) 566 564 rep += "dxl : %s\n" % str(self.dxl) 567 565 rep += "dxw : %s\n" % str(self.dxw) … … 821 819 822 820 attr = newdoc.createAttribute("version") 823 import sasview821 from sas import sasview 824 822 attr.nodeValue = sasview.__version__ 825 823 # attr.nodeValue = '1.0' … … 1048 1046 setattr(self, item[0], parse_entry_helper(node, item)) 1049 1047 1048 dx_old_node = get_content('ns:%s' % 'dx_min', entry) 1050 1049 for item in LIST_OF_STATE_ATTRIBUTES: 1051 node = get_content('ns:%s' % item[0], entry) 1052 setattr(self, item[0], parse_entry_helper(node, item)) 1050 if item[0] == "dx_percent" and dx_old_node is not None: 1051 dxmin = ["dx_min", "dx_min", "float"] 1052 setattr(self, item[0], parse_entry_helper(dx_old_node, 1053 dxmin)) 1054 self.dx_old = True 1055 else: 1056 node = get_content('ns:%s' % item[0], entry) 1057 setattr(self, item[0], parse_entry_helper(node, item)) 1053 1058 1054 1059 for item in LIST_OF_STATE_PARAMETERS: … … 1095 1100 % (line, tagname, name)) 1096 1101 logging.error(msg + traceback.format_exc()) 1097 dic[name] = n umpy.array(value_list)1102 dic[name] = np.array(value_list) 1098 1103 setattr(self, varname, dic) 1099 1104 -
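The pagestate.py changes above keep old project files loadable: if the XML still contains the legacy `dx_min` node (an absolute dQ), it is read into `dx_percent` and flagged with `dx_old`, and basepage.py later rescales it against the first Q value of the data. A sketch of that conversion (the function and its argument names are illustrative):

```python
def restore_dx_percent(state_value, dx_old, first_q):
    # Old save files stored an absolute dQ; express it as a percentage
    # of the first Q point, mirroring basepage.py's dx_old branch.
    if dx_old:
        return 100.0 * (state_value / first_q)
    return state_value

pct = restore_dx_percent(0.001, True, 0.01)   # legacy absolute dQ of 0.001
```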
TabularUnified src/sas/sasgui/perspectives/fitting/utils.py ¶
rd85c194 r9a5097c 2 2 Module contains functions frequently used in this package 3 3 """ 4 import numpy 4 import numpy as np 5 5 6 6 … … 19 19 data = data.y 20 20 if flag == 0: 21 weight = n umpy.ones_like(data)21 weight = np.ones_like(data) 22 22 elif flag == 1: 23 23 weight = dy_data 24 24 elif flag == 2: 25 weight = n umpy.sqrt(numpy.abs(data))25 weight = np.sqrt(np.abs(data)) 26 26 elif flag == 3: 27 weight = n umpy.abs(data)27 weight = np.abs(data) 28 28 return weight -
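The `utils.get_weight` function touched above selects fit weights by an integer flag. A self-contained sketch of the same four options (this mirrors the patched code but is not the sasview module itself):

```python
import numpy as np

def get_weight(data, dy_data, flag=0):
    """Return fit weights for intensity values, mirroring the four
    weighting options in the patched utils.get_weight above."""
    if flag == 0:                      # no weighting
        return np.ones_like(data)
    elif flag == 1:                    # experimental uncertainty dI
        return dy_data
    elif flag == 2:                    # Poisson-like: sqrt(|I|)
        return np.sqrt(np.abs(data))
    elif flag == 3:                    # |I|
        return np.abs(data)
    raise ValueError("unknown weighting flag: %r" % flag)
```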
TabularUnified src/sas/sasgui/perspectives/pr/explore_dialog.py ¶
rd85c194 r9a5097c 19 19 20 20 import wx 21 import numpy 21 import numpy as np 22 22 import logging 23 23 import sys … … 65 65 66 66 step = (self.max - self.min) / (self.npts - 1) 67 self.x = n umpy.arange(self.min, self.max + step * 0.01, step)68 dx = n umpy.zeros(len(self.x))69 y = n umpy.ones(len(self.x))70 dy = n umpy.zeros(len(self.x))67 self.x = np.arange(self.min, self.max + step * 0.01, step) 68 dx = np.zeros(len(self.x)) 69 y = np.ones(len(self.x)) 70 dy = np.zeros(len(self.x)) 71 71 72 72 # Plot area -
TabularUnified src/sas/sasgui/perspectives/pr/pr.py ¶
ra69a967 r9a5097c 21 21 import time 22 22 import math 23 import numpy 23 import numpy as np 24 24 import pylab 25 25 from sas.sasgui.guiframe.gui_manager import MDIFrame … … 207 207 r = pylab.arange(0.01, d_max, d_max / 51.0) 208 208 M = len(r) 209 y = n umpy.zeros(M)210 pr_err = n umpy.zeros(M)209 y = np.zeros(M) 210 pr_err = np.zeros(M) 211 211 212 212 total = 0.0 … … 253 253 """ 254 254 # Show P(r) 255 y_true = n umpy.zeros(len(x))255 y_true = np.zeros(len(x)) 256 256 257 257 sum_true = 0.0 … … 307 307 308 308 x = pylab.arange(minq, maxq, maxq / 301.0) 309 y = n umpy.zeros(len(x))310 err = n umpy.zeros(len(x))309 y = np.zeros(len(x)) 310 err = np.zeros(len(x)) 311 311 for i in range(len(x)): 312 312 value = pr.iq(out, x[i]) … … 337 337 if pr.slit_width > 0 or pr.slit_height > 0: 338 338 x = pylab.arange(minq, maxq, maxq / 301.0) 339 y = n umpy.zeros(len(x))340 err = n umpy.zeros(len(x))339 y = np.zeros(len(x)) 340 err = np.zeros(len(x)) 341 341 for i in range(len(x)): 342 342 value = pr.iq_smeared(out, x[i]) … … 382 382 x = pylab.arange(0.0, pr.d_max, pr.d_max / self._pr_npts) 383 383 384 y = n umpy.zeros(len(x))385 dy = n umpy.zeros(len(x))386 y_true = n umpy.zeros(len(x))384 y = np.zeros(len(x)) 385 dy = np.zeros(len(x)) 386 y_true = np.zeros(len(x)) 387 387 388 388 total = 0.0 389 389 pmax = 0.0 390 cov2 = n umpy.ascontiguousarray(cov)390 cov2 = np.ascontiguousarray(cov) 391 391 392 392 for i in range(len(x)): … … 480 480 """ 481 481 # Read the data from the data file 482 data_x = n umpy.zeros(0)483 data_y = n umpy.zeros(0)484 data_err = n umpy.zeros(0)482 data_x = np.zeros(0) 483 data_y = np.zeros(0) 484 data_err = np.zeros(0) 485 485 scale = None 486 486 min_err = 0.0 … … 504 504 #err = 0 505 505 506 data_x = n umpy.append(data_x, x)507 data_y = n umpy.append(data_y, y)508 data_err = n umpy.append(data_err, err)506 data_x = np.append(data_x, x) 507 data_y = np.append(data_y, y) 508 data_err = np.append(data_err, err) 509 509 except: 510 510 
logging.error(sys.exc_value) … … 528 528 """ 529 529 # Read the data from the data file 530 data_x = n umpy.zeros(0)531 data_y = n umpy.zeros(0)532 data_err = n umpy.zeros(0)530 data_x = np.zeros(0) 531 data_y = np.zeros(0) 532 data_err = np.zeros(0) 533 533 scale = None 534 534 min_err = 0.0 … … 555 555 #err = 0 556 556 557 data_x = n umpy.append(data_x, x)558 data_y = n umpy.append(data_y, y)559 data_err = n umpy.append(data_err, err)557 data_x = np.append(data_x, x) 558 data_y = np.append(data_y, y) 559 data_err = np.append(data_err, err) 560 560 except: 561 561 logging.error(sys.exc_value) … … 640 640 # Now replot the original added data 641 641 for plot in self._added_plots: 642 self._added_plots[plot].y = n umpy.copy(self._default_Iq[plot])642 self._added_plots[plot].y = np.copy(self._default_Iq[plot]) 643 643 wx.PostEvent(self.parent, 644 644 NewPlotEvent(plot=self._added_plots[plot], … … 664 664 # Now scale the added plots too 665 665 for plot in self._added_plots: 666 total = n umpy.sum(self._added_plots[plot].y)666 total = np.sum(self._added_plots[plot].y) 667 667 npts = len(self._added_plots[plot].x) 668 668 total *= self._added_plots[plot].x[npts - 1] / npts … … 814 814 # Save Pr invertor 815 815 self.pr = pr 816 cov = n umpy.ascontiguousarray(cov)816 cov = np.ascontiguousarray(cov) 817 817 818 818 # Show result on control panel … … 982 982 all_zeros = True 983 983 if err == None: 984 err = n umpy.zeros(len(pr.y))984 err = np.zeros(len(pr.y)) 985 985 else: 986 986 for i in range(len(err)): … … 1088 1088 # If we have not errors, add statistical errors 1089 1089 if y is not None: 1090 if err == None or n umpy.all(err) == 0:1091 err = n umpy.zeros(len(y))1090 if err == None or np.all(err) == 0: 1091 err = np.zeros(len(y)) 1092 1092 scale = None 1093 1093 min_err = 0.0 -
TabularUnified src/sas/sasgui/perspectives/simulation/simulation.py ¶
rd85c194 r9a5097c 10 10 import wx 11 11 import os 12 import numpy 12 import numpy as np 13 13 import time 14 14 import logging … … 46 46 def compute(self): 47 47 x = self.x 48 output = n umpy.zeros(len(x))49 error = n umpy.zeros(len(x))48 output = np.zeros(len(x)) 49 error = np.zeros(len(x)) 50 50 51 51 self.starttime = time.time() … … 123 123 # Q-values for plotting simulated I(Q) 124 124 step = (self.q_max-self.q_min)/(self.q_npts-1) 125 self.x = n umpy.arange(self.q_min, self.q_max+step*0.01, step)125 self.x = np.arange(self.q_min, self.q_max+step*0.01, step) 126 126 127 127 # Set the list of panels that are part of the simulation perspective … … 187 187 # Q-values for plotting simulated I(Q) 188 188 step = (self.q_max-self.q_min)/(self.q_npts-1) 189 self.x = n umpy.arange(self.q_min, self.q_max+step*0.01, step)189 self.x = np.arange(self.q_min, self.q_max+step*0.01, step) 190 190 191 191 # Compute the simulated I(Q) -
TabularUnified src/sas/sasgui/plottools/PlotPanel.py ¶
r198fa76 r9a5097c 29 29 DEFAULT_CMAP = pylab.cm.jet 30 30 import copy 31 import numpy 31 import numpy as np 32 32 33 33 from sas.sasgui.guiframe.events import StatusEvent … … 1452 1452 if self.zmin_2D <= 0 and len(output[output > 0]) > 0: 1453 1453 zmin_temp = self.zmin_2D 1454 output[output > 0] = n umpy.log10(output[output > 0])1454 output[output > 0] = np.log10(output[output > 0]) 1455 1455 #In log scale Negative values are not correct in general 1456 #output[output<=0] = math.log(n umpy.min(output[output>0]))1456 #output[output<=0] = math.log(np.min(output[output>0])) 1457 1457 elif self.zmin_2D <= 0: 1458 1458 zmin_temp = self.zmin_2D 1459 output[output > 0] = n umpy.zeros(len(output))1459 output[output > 0] = np.zeros(len(output)) 1460 1460 output[output <= 0] = -32 1461 1461 else: 1462 1462 zmin_temp = self.zmin_2D 1463 output[output > 0] = n umpy.log10(output[output > 0])1463 output[output > 0] = np.log10(output[output > 0]) 1464 1464 #In log scale Negative values are not correct in general 1465 #output[output<=0] = math.log(n umpy.min(output[output>0]))1465 #output[output<=0] = math.log(np.min(output[output>0])) 1466 1466 except: 1467 1467 #Too many problems in 2D plot with scale … … 1492 1492 X = self.x_bins[0:-1] 1493 1493 Y = self.y_bins[0:-1] 1494 X, Y = n umpy.meshgrid(X, Y)1494 X, Y = np.meshgrid(X, Y) 1495 1495 1496 1496 try: … … 1555 1555 # 1d array to use for weighting the data point averaging 1556 1556 #when they fall into a same bin. 
1557 weights_data = n umpy.ones([self.data.size])1557 weights_data = np.ones([self.data.size]) 1558 1558 # get histogram of ones w/len(data); this will provide 1559 1559 #the weights of data on each bins 1560 weights, xedges, yedges = n umpy.histogram2d(x=self.qy_data,1560 weights, xedges, yedges = np.histogram2d(x=self.qy_data, 1561 1561 y=self.qx_data, 1562 1562 bins=[self.y_bins, self.x_bins], 1563 1563 weights=weights_data) 1564 1564 # get histogram of data, all points into a bin in a way of summing 1565 image, xedges, yedges = n umpy.histogram2d(x=self.qy_data,1565 image, xedges, yedges = np.histogram2d(x=self.qy_data, 1566 1566 y=self.qx_data, 1567 1567 bins=[self.y_bins, self.x_bins], … … 1581 1581 # do while loop until all vacant bins are filled up up 1582 1582 #to loop = max_loop 1583 while not(n umpy.isfinite(image[weights == 0])).all():1583 while not(np.isfinite(image[weights == 0])).all(): 1584 1584 if loop >= max_loop: # this protects never-ending loop 1585 1585 break … … 1630 1630 1631 1631 # store x and y bin centers in q space 1632 x_bins = n umpy.linspace(xmin, xmax, npix_x)1633 y_bins = n umpy.linspace(ymin, ymax, npix_y)1632 x_bins = np.linspace(xmin, xmax, npix_x) 1633 y_bins = np.linspace(ymin, ymax, npix_y) 1634 1634 1635 1635 #set x_bins and y_bins … … 1650 1650 """ 1651 1651 # No image matrix given 1652 if image == None or n umpy.ndim(image) != 2 \1653 or n umpy.isfinite(image).all() \1652 if image == None or np.ndim(image) != 2 \ 1653 or np.isfinite(image).all() \ 1654 1654 or weights == None: 1655 1655 return image … … 1657 1657 len_y = len(image) 1658 1658 len_x = len(image[1]) 1659 temp_image = n umpy.zeros([len_y, len_x])1660 weit = n umpy.zeros([len_y, len_x])1659 temp_image = np.zeros([len_y, len_x]) 1660 weit = np.zeros([len_y, len_x]) 1661 1661 # do for-loop for all pixels 1662 1662 for n_y in range(len(image)): 1663 1663 for n_x in range(len(image[1])): 1664 1664 # find only null pixels 1665 if weights[n_y][n_x] > 0 or n 
umpy.isfinite(image[n_y][n_x]):1665 if weights[n_y][n_x] > 0 or np.isfinite(image[n_y][n_x]): 1666 1666 continue 1667 1667 else: 1668 1668 # find 4 nearest neighbors 1669 1669 # check where or not it is at the corner 1670 if n_y != 0 and n umpy.isfinite(image[n_y - 1][n_x]):1670 if n_y != 0 and np.isfinite(image[n_y - 1][n_x]): 1671 1671 temp_image[n_y][n_x] += image[n_y - 1][n_x] 1672 1672 weit[n_y][n_x] += 1 1673 if n_x != 0 and n umpy.isfinite(image[n_y][n_x - 1]):1673 if n_x != 0 and np.isfinite(image[n_y][n_x - 1]): 1674 1674 temp_image[n_y][n_x] += image[n_y][n_x - 1] 1675 1675 weit[n_y][n_x] += 1 1676 if n_y != len_y - 1 and n umpy.isfinite(image[n_y + 1][n_x]):1676 if n_y != len_y - 1 and np.isfinite(image[n_y + 1][n_x]): 1677 1677 temp_image[n_y][n_x] += image[n_y + 1][n_x] 1678 1678 weit[n_y][n_x] += 1 1679 if n_x != len_x - 1 and n umpy.isfinite(image[n_y][n_x + 1]):1679 if n_x != len_x - 1 and np.isfinite(image[n_y][n_x + 1]): 1680 1680 temp_image[n_y][n_x] += image[n_y][n_x + 1] 1681 1681 weit[n_y][n_x] += 1 1682 1682 # go 4 next nearest neighbors when no non-zero 1683 1683 # neighbor exists 1684 if n_y != 0 and n_x != 0 and \1685 numpy.isfinite(image[n_y - 1][n_x - 1]):1684 if n_y != 0 and n_x != 0 and \ 1685 np.isfinite(image[n_y - 1][n_x - 1]): 1686 1686 temp_image[n_y][n_x] += image[n_y - 1][n_x - 1] 1687 1687 weit[n_y][n_x] += 1 1688 1688 if n_y != len_y - 1 and n_x != 0 and \ 1689 numpy.isfinite(image[n_y + 1][n_x - 1]):1689 np.isfinite(image[n_y + 1][n_x - 1]): 1690 1690 temp_image[n_y][n_x] += image[n_y + 1][n_x - 1] 1691 1691 weit[n_y][n_x] += 1 1692 1692 if n_y != len_y and n_x != len_x - 1 and \ 1693 numpy.isfinite(image[n_y - 1][n_x + 1]):1693 np.isfinite(image[n_y - 1][n_x + 1]): 1694 1694 temp_image[n_y][n_x] += image[n_y - 1][n_x + 1] 1695 1695 weit[n_y][n_x] += 1 1696 1696 if n_y != len_y - 1 and n_x != len_x - 1 and \ 1697 numpy.isfinite(image[n_y + 1][n_x + 1]):1697 np.isfinite(image[n_y + 1][n_x + 1]): 1698 1698 temp_image[n_y][n_x] 
+= image[n_y + 1][n_x + 1] 1699 1699 weit[n_y][n_x] += 1 -
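The PlotPanel changes above use a pair of `np.histogram2d` calls to average scattered (qx, qy) points into a regular image: one histogram of ones gives the per-bin counts, a second weighted by the data gives the per-bin sums, and dividing yields the mean. A compact sketch of that idiom (function name and argument order are illustrative):

```python
import numpy as np

def rebin2d(qx, qy, data, x_bins, y_bins):
    """Average scattered points into a (y, x) grid, as PlotPanel does:
    sum data per bin, count points per bin, divide for the mean."""
    counts, _, _ = np.histogram2d(qy, qx, bins=[y_bins, x_bins],
                                  weights=np.ones_like(data))
    totals, _, _ = np.histogram2d(qy, qx, bins=[y_bins, x_bins],
                                  weights=data)
    with np.errstate(invalid="ignore"):
        image = totals / counts      # NaN where a bin received no points
    return image, counts
```

The NaN-filled vacant bins are what the nearest-neighbor fill loop in the patch then repairs, averaging the finite neighbors of each empty pixel.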
TabularUnified src/sas/sasgui/plottools/fitDialog.py ¶
rdd5bf63 r9a5097c 2 2 from plottables import Theory1D 3 3 import math 4 import numpy 4 import numpy as np 5 5 import fittings 6 6 import transform … … 482 482 483 483 if self.xLabel.lower() == "log10(x)": 484 tempdy = n umpy.asarray(tempdy)484 tempdy = np.asarray(tempdy) 485 485 tempdy[tempdy == 0] = 1 486 486 chisqr, out, cov = fittings.sasfit(self.model, … … 491 491 math.log10(xmax)) 492 492 else: 493 tempdy = n umpy.asarray(tempdy)493 tempdy = np.asarray(tempdy) 494 494 tempdy[tempdy == 0] = 1 495 495 chisqr, out, cov = fittings.sasfit(self.model, … … 572 572 if self.rg_on: 573 573 if self.Rg_tctr.IsShown(): 574 rg = n umpy.sqrt(-3 * float(cstA))574 rg = np.sqrt(-3 * float(cstA)) 575 575 value = format_number(rg) 576 576 self.Rg_tctr.SetValue(value) 577 577 if self.I0_tctr.IsShown(): 578 val = n umpy.exp(cstB)578 val = np.exp(cstB) 579 579 self.I0_tctr.SetValue(format_number(val)) 580 580 if self.Rgerr_tctr.IsShown(): … … 585 585 self.Rgerr_tctr.SetValue(value) 586 586 if self.I0err_tctr.IsShown(): 587 val = n umpy.abs(numpy.exp(cstB) * errB)587 val = np.abs(np.exp(cstB) * errB) 588 588 self.I0err_tctr.SetValue(format_number(val)) 589 589 if self.Diameter_tctr.IsShown(): 590 rg = n umpy.sqrt(-2 * float(cstA))591 _diam = 4 * n umpy.sqrt(-float(cstA))590 rg = np.sqrt(-2 * float(cstA)) 591 _diam = 4 * np.sqrt(-float(cstA)) 592 592 value = format_number(_diam) 593 593 self.Diameter_tctr.SetValue(value) -
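The fitDialog code above recovers Guinier parameters from a linear fit: with ln I(q) = B + A·q², the radius of gyration is Rg = √(−3A) and the forward scattering is I(0) = exp(B). A minimal numpy sketch of that extraction (pure numpy, not the sasview fit machinery):

```python
import numpy as np

def guinier_fit(q, intensity):
    """Fit ln(I) = B + A*q^2 and return (Rg, I0), as in the Guinier
    branch of fitDialog above: Rg = sqrt(-3A), I0 = exp(B)."""
    a, b = np.polyfit(q ** 2, np.log(intensity), 1)   # slope A, intercept B
    return np.sqrt(-3.0 * a), np.exp(b)
```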
TabularUnified src/sas/sasgui/plottools/plottables.py ¶
ra9f579c r9a5097c 43 43 # Support for ancient python versions 44 44 import copy 45 import numpy 45 import numpy as np 46 46 import sys 47 47 import logging … … 706 706 self.dy = None 707 707 if not has_err_x: 708 dx = n umpy.zeros(len(x))708 dx = np.zeros(len(x)) 709 709 if not has_err_y: 710 dy = n umpy.zeros(len(y))710 dy = np.zeros(len(y)) 711 711 for i in range(len(x)): 712 712 try: … … 796 796 tempdy = [] 797 797 if self.dx == None: 798 self.dx = n umpy.zeros(len(self.x))798 self.dx = np.zeros(len(self.x)) 799 799 if self.dy == None: 800 self.dy = n umpy.zeros(len(self.y))800 self.dy = np.zeros(len(self.y)) 801 801 if self.xLabel == "log10(x)": 802 802 for i in range(len(self.x)): … … 826 826 tempdy = [] 827 827 if self.dx == None: 828 self.dx = n umpy.zeros(len(self.x))828 self.dx = np.zeros(len(self.x)) 829 829 if self.dy == None: 830 self.dy = n umpy.zeros(len(self.y))830 self.dy = np.zeros(len(self.y)) 831 831 if self.yLabel == "log10(y)": 832 832 for i in range(len(self.x)): … … 859 859 tempdy = [] 860 860 if self.dx == None: 861 self.dx = n umpy.zeros(len(self.x))861 self.dx = np.zeros(len(self.x)) 862 862 if self.dy == None: 863 self.dy = n umpy.zeros(len(self.y))863 self.dy = np.zeros(len(self.y)) 864 864 if xmin != None and xmax != None: 865 865 for i in range(len(self.x)): … … 1228 1228 1229 1229 def sample_graph(): 1230 import numpy as n x1230 import numpy as np 1231 1231 1232 1232 # Construct a simple graph 1233 1233 if False: 1234 x = n x.array([1, 2, 3, 4, 5, 6], 'd')1235 y = n x.array([4, 5, 6, 5, 4, 5], 'd')1236 dy = n x.array([0.2, 0.3, 0.1, 0.2, 0.9, 0.3])1234 x = np.array([1, 2, 3, 4, 5, 6], 'd') 1235 y = np.array([4, 5, 6, 5, 4, 5], 'd') 1236 dy = np.array([0.2, 0.3, 0.1, 0.2, 0.9, 0.3]) 1237 1237 else: 1238 x = n x.linspace(0, 1., 10000)1239 y = n x.sin(2 * nx.pi * x * 2.8)1240 dy = n x.sqrt(100 * nx.abs(y)) / 1001238 x = np.linspace(0, 1., 10000) 1239 y = np.sin(2 * np.pi * x * 2.8) 1240 dy = np.sqrt(100 * np.abs(y)) / 100 1241 1241 data 
= Data1D(x, y, dy=dy) 1242 1242 data.xaxis('distance', 'm') -
TabularUnified test/corfunc/test/utest_corfunc.py ¶
racefa2b r253eb6c6 8 8 from sas.sascalc.corfunc.corfunc_calculator import CorfuncCalculator 9 9 from sas.sascalc.dataloader.data_info import Data1D 10 import matplotlib.pyplot as plt 10 11 11 12 12 class TestCalculator(unittest.TestCase): … … 69 69 self.assertLess(abs(params['max']-75), 2.5) # L_p ~= 75 70 70 71 72 71 # Ensure tests are ran in correct order; 73 72 # Each test depends on the one before it -
TabularUnified test/pr_inversion/test/utest_invertor.py ¶
rb699768 r9a5097c 568 568 569 569 def load(path = "sphere_60_q0_2.txt"): 570 import numpy, math, sys 570 import numpy as np 571 import math 572 import sys 571 573 # Read the data from the data file 572 data_x = n umpy.zeros(0)573 data_y = n umpy.zeros(0)574 data_err = n umpy.zeros(0)574 data_x = np.zeros(0) 575 data_y = np.zeros(0) 576 data_err = np.zeros(0) 575 577 scale = None 576 578 if not path == None: … … 589 591 scale = 0.15*math.sqrt(y) 590 592 err = scale*math.sqrt(y) 591 data_x = n umpy.append(data_x, x)592 data_y = n umpy.append(data_y, y)593 data_err = n umpy.append(data_err, err)593 data_x = np.append(data_x, x) 594 data_y = np.append(data_y, y) 595 data_err = np.append(data_err, err) 594 596 except: 595 597 pass -
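The test helper `load()` above reads two-column (q, I) data line by line, appending to numpy arrays and attaching a sqrt-based statistical error. A vectorized sketch of the same idea using `np.loadtxt` (the 15% error scaling here is illustrative, approximating what the test does rather than reproducing it exactly):

```python
import numpy as np

def load(path):
    """Read two-column (q, I) data and attach a simple sqrt-based
    statistical error, as a vectorized alternative to the per-line
    append loop in the test helper above."""
    data = np.loadtxt(path)
    q, intensity = data[:, 0], data[:, 1]
    err = 0.15 * np.sqrt(intensity)    # illustrative error estimate
    return q, intensity, err
```

Appending to numpy arrays in a loop is O(n²) in copies; reading the whole file once and slicing columns is both shorter and faster.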
TabularUnified test/sascalculator/test/utest_sas_gen.py ¶
ref908db r9a5097c 8 8 from sas.sascalc.calculator import sas_gen 9 9 10 import numpy11 12 import os.path13 10 14 11 class sas_gen_test(unittest.TestCase): … … 51 48 self.assertEqual(output.pos_z[0], 0.0) 52 49 50 53 51 if __name__ == '__main__': 54 52 unittest.main() 55 53 -
TabularUnified test/sasdataloader/plugins/test_reader.py ¶
rb699768 r9a5097c 8 8 copyright 2008, University of Tennessee 9 9 """ 10 import numpy, os 10 import os 11 import numpy as np 11 12 from sas.sascalc.dataloader.data_info import Data1D 12 13 … … 40 41 buff = input_f.read() 41 42 lines = buff.split('\n') 42 x = n umpy.zeros(0)43 y = n umpy.zeros(0)44 dy = n umpy.zeros(0)43 x = np.zeros(0) 44 y = np.zeros(0) 45 dy = np.zeros(0) 45 46 output = Data1D(x, y, dy=dy) 46 47 self.filename = output.filename = basename 47 48 48 49 for line in lines: 49 x = n umpy.append(x, float(line))50 x = np.append(x, float(line)) 50 51 51 52 output.x = x -
TabularUnified test/sasdataloader/test/utest_abs_reader.py ¶
r5f26aa4 rdd11014 4 4 5 5 import unittest 6 import numpy, math 7 from sas.sascalc.dataloader.loader import Loader 6 import math 7 import numpy as np 8 from sas.sascalc.dataloader.loader import Loader 9 from sas.sascalc.dataloader.readers.IgorReader import Reader as IgorReader 8 10 from sas.sascalc.dataloader.data_info import Data1D 9 11 … … 86 88 87 89 def setUp(self): 88 self.data = Loader().load("MAR07232_rest.ASC") 89 90 # the IgorReader should be able to read this filetype 91 # if it can't, stop here. 92 reader = IgorReader() 93 self.data = reader.read("MAR07232_rest.ASC") 94 90 95 def test_igor_checkdata(self): 91 96 """ … … 108 113 109 114 self.assertEqual(self.data.detector[0].beam_center_unit, 'mm') 110 center_x = (68.76 -1)*5.0111 center_y = (62.47 -1)*5.0115 center_x = (68.76 - 1)*5.0 116 center_y = (62.47 - 1)*5.0 112 117 self.assertEqual(self.data.detector[0].beam_center.x, center_x) 113 118 self.assertEqual(self.data.detector[0].beam_center.y, center_y) 114 119 115 120 self.assertEqual(self.data.I_unit, '1/cm') 116 self.assertEqual(self.data.data[0], 0.279783) 117 self.assertEqual(self.data.data[1], 0.28951) 118 self.assertEqual(self.data.data[2], 0.167634) 119 121 # 3 points should be suffcient to check that the data is in column 122 # major order. 
123 np.testing.assert_almost_equal(self.data.data[0:3], 124 [0.279783, 0.28951, 0.167634]) 125 np.testing.assert_almost_equal(self.data.qx_data[0:3], 126 [-0.01849072, -0.01821785, -0.01794498]) 127 np.testing.assert_almost_equal(self.data.qy_data[0:3], 128 [-0.01677435, -0.01677435, -0.01677435]) 129 130 def test_generic_loader(self): 131 # the generic loader should direct the file to IgorReader as well 132 data = Loader().load("MAR07232_rest.ASC") 133 self.assertEqual(data.meta_data['loader'], "IGOR 2D") 134 135 120 136 class danse_reader(unittest.TestCase): 121 137 … … 313 329 from sas.sascalc.dataloader.readers.cansas_reader import Reader 314 330 r = Reader() 315 x = n umpy.ones(5)316 y = n umpy.ones(5)317 dy = n umpy.ones(5)331 x = np.ones(5) 332 y = np.ones(5) 333 dy = np.ones(5) 318 334 319 335 filename = "write_test.xml" -
TabularUnified test/sasdataloader/test/utest_averaging.py ¶
rb699768 r9a5097c 1 1 2 2 import unittest 3 import math 3 4 4 5 from sas.sascalc.dataloader.loader import Loader 5 6 from sas.sascalc.dataloader.manipulations import Ring, CircularAverage, SectorPhi, get_q,reader2D_converter 6 7 import os.path 8 import numpy, math 7 8 import numpy as np 9 9 import sas.sascalc.dataloader.data_info as data_info 10 10 … … 18 18 should return the predefined height of the distribution (1.0). 19 19 """ 20 x_0 = n umpy.ones([100,100])21 dx_0 = n umpy.ones([100,100])20 x_0 = np.ones([100,100]) 21 dx_0 = np.ones([100,100]) 22 22 23 23 self.data = data_info.Data2D(data=x_0, err_data=dx_0) … … 42 42 43 43 self.qstep = len(x_0) 44 x= n umpy.linspace(start= -1*self.qmax,44 x= np.linspace(start= -1*self.qmax, 45 45 stop= self.qmax, 46 46 num= self.qstep, 47 47 endpoint=True ) 48 y = n umpy.linspace(start= -1*self.qmax,48 y = np.linspace(start= -1*self.qmax, 49 49 stop= self.qmax, 50 50 num= self.qstep, -
TabularUnified test/sasdataloader/test/utest_cansas.py ¶
r1686a333 r9a5097c 15 15 import pylint as pylint 16 16 import unittest 17 import numpy 17 import numpy as np 18 18 import logging 19 19 import warnings -
TabularUnified test/sasguiframe/test/utest_manipulations.py ¶
rd85c194 r9a5097c 5 5 6 6 import unittest 7 import numpy, math 7 import math 8 import numpy as np 8 9 from sas.sascalc.dataloader.loader import Loader 9 10 from sas.sasgui.guiframe.dataFitting import Data1D, Data2D … … 52 53 def setUp(self): 53 54 # Create two data sets to play with 54 x_0 = n umpy.ones(5)55 x_0 = np.ones(5) 55 56 for i in range(5): 56 57 x_0[i] = x_0[i]*(i+1.0) 57 58 58 y_0 = 2.0*n umpy.ones(5)59 dy_0 = 0.5*n umpy.ones(5)59 y_0 = 2.0*np.ones(5) 60 dy_0 = 0.5*np.ones(5) 60 61 self.data = Data1D(x_0, y_0, dy=dy_0) 61 62 62 63 x = self.data.x 63 y = n umpy.ones(5)64 dy = n umpy.ones(5)64 y = np.ones(5) 65 dy = np.ones(5) 65 66 self.data2 = Data1D(x, y, dy=dy) 66 67 … … 155 156 def setUp(self): 156 157 # Create two data sets to play with 157 x_0 = 2.0*n umpy.ones(25)158 dx_0 = 0.5*n umpy.ones(25)159 qx_0 = n umpy.arange(25)160 qy_0 = n umpy.arange(25)161 mask_0 = n umpy.zeros(25)162 dqx_0 = n umpy.arange(25)/100163 dqy_0 = n umpy.arange(25)/100164 q_0 = n umpy.sqrt(qx_0 * qx_0 + qy_0 * qy_0)158 x_0 = 2.0*np.ones(25) 159 dx_0 = 0.5*np.ones(25) 160 qx_0 = np.arange(25) 161 qy_0 = np.arange(25) 162 mask_0 = np.zeros(25) 163 dqx_0 = np.arange(25)/100 164 dqy_0 = np.arange(25)/100 165 q_0 = np.sqrt(qx_0 * qx_0 + qy_0 * qy_0) 165 166 self.data = Data2D(image=x_0, err_image=dx_0, qx_data=qx_0, 166 167 qy_data=qy_0, q_data=q_0, mask=mask_0, 167 168 dqx_data=dqx_0, dqy_data=dqy_0) 168 169 169 y = n umpy.ones(25)170 dy = n umpy.ones(25)171 qx = n umpy.arange(25)172 qy = n umpy.arange(25)173 mask = n umpy.zeros(25)174 q = n umpy.sqrt(qx * qx + qy * qy)170 y = np.ones(25) 171 dy = np.ones(25) 172 qx = np.arange(25) 173 qy = np.arange(25) 174 mask = np.zeros(25) 175 q = np.sqrt(qx * qx + qy * qy) 175 176 self.data2 = Data2D(image=y, err_image=dy, qx_data=qx, qy_data=qy, 176 177 q_data=q, mask=mask) … … 182 183 """ 183 184 # There should be 5 entries in the file 184 self.assertEqual(n umpy.size(self.data.data), 25)185 self.assertEqual(np.size(self.data.data), 25) 
185 186 186 187 for i in range(25): … … 263 264 def setUp(self): 264 265 # Create two data sets to play with 265 x_0 = 2.0*n umpy.ones(25)266 dx_0 = 0.5*n umpy.ones(25)267 qx_0 = n umpy.arange(25)268 qy_0 = n umpy.arange(25)269 mask_0 = n umpy.zeros(25)270 dqx_0 = n umpy.arange(25)/100271 dqy_0 = n umpy.arange(25)/100272 q_0 = n umpy.sqrt(qx_0 * qx_0 + qy_0 * qy_0)266 x_0 = 2.0*np.ones(25) 267 dx_0 = 0.5*np.ones(25) 268 qx_0 = np.arange(25) 269 qy_0 = np.arange(25) 270 mask_0 = np.zeros(25) 271 dqx_0 = np.arange(25)/100 272 dqy_0 = np.arange(25)/100 273 q_0 = np.sqrt(qx_0 * qx_0 + qy_0 * qy_0) 273 274 self.data = Data2D(image=x_0, err_image=dx_0, qx_data=qx_0, 274 275 qy_data=qy_0, q_data=q_0, mask=mask_0, 275 276 dqx_data=dqx_0, dqy_data=dqy_0) 276 277 277 y = n umpy.ones(25)278 dy = n umpy.ones(25)279 qx = n umpy.arange(25)280 qy = n umpy.arange(25)281 mask = n umpy.zeros(25)282 q = n umpy.sqrt(qx * qx + qy * qy)278 y = np.ones(25) 279 dy = np.ones(25) 280 qx = np.arange(25) 281 qy = np.arange(25) 282 mask = np.zeros(25) 283 q = np.sqrt(qx * qx + qy * qy) 283 284 self.data2 = Data2D(image=y, err_image=dy, qx_data=qx, qy_data=qy, 284 285 q_data=q, mask=mask) … … 290 291 """ 291 292 # There should be 5 entries in the file 292 self.assertEqual(n umpy.size(self.data.data), 25)293 self.assertEqual(np.size(self.data.data), 25) 293 294 294 295 for i in range(25): -
TabularUnified test/sasinvariant/test/utest_data_handling.py ¶
rb699768 r9a5097c 9 9 """ 10 10 import unittest 11 import numpy, math 11 import math 12 import numpy as np 12 13 from sas.sascalc.dataloader.loader import Loader 13 14 from sas.sascalc.dataloader.data_info import Data1D … … 20 21 """ 21 22 def setUp(self): 22 x = n umpy.asarray([1.,2.,3.,4.,5.,6.,7.,8.,9.])23 y = n umpy.asarray([1.,2.,3.,4.,5.,6.,7.,8.,9.])23 x = np.asarray([1.,2.,3.,4.,5.,6.,7.,8.,9.]) 24 y = np.asarray([1.,2.,3.,4.,5.,6.,7.,8.,9.]) 24 25 dy = y/10.0 25 26 … … 135 136 136 137 def test_error_treatment(self): 137 x = n umpy.asarray(numpy.asarray([0,1,2,3]))138 y = n umpy.asarray(numpy.asarray([1,1,1,1]))138 x = np.asarray(np.asarray([0,1,2,3])) 139 y = np.asarray(np.asarray([1,1,1,1])) 139 140 140 141 # These are all the values of the dy array that would cause … … 340 341 self.scale = 1.5 341 342 self.rg = 30.0 342 x = n umpy.arange(0.0001, 0.1, 0.0001)343 y = n umpy.asarray([self.scale * math.exp( -(q*self.rg)**2 / 3.0 ) for q in x])343 x = np.arange(0.0001, 0.1, 0.0001) 344 y = np.asarray([self.scale * math.exp( -(q*self.rg)**2 / 3.0 ) for q in x]) 344 345 dy = y*.1 345 346 self.data = Data1D(x=x, y=y, dy=dy) … … 383 384 self.scale = 1.5 384 385 self.m = 3.0 385 x = n umpy.arange(0.0001, 0.1, 0.0001)386 y = n umpy.asarray([self.scale * math.pow(q ,-1.0*self.m) for q in x])386 x = np.arange(0.0001, 0.1, 0.0001) 387 y = np.asarray([self.scale * math.pow(q ,-1.0*self.m) for q in x]) 387 388 dy = y*.1 388 389 self.data = Data1D(x=x, y=y, dy=dy) … … 427 428 that can't be transformed 428 429 """ 429 x = n umpy.asarray(numpy.asarray([0,1,2,3]))430 y = n umpy.asarray(numpy.asarray([1,1,1,1]))430 x = np.asarray(np.asarray([0,1,2,3])) 431 y = np.asarray(np.asarray([1,1,1,1])) 431 432 g = invariant.Guinier() 432 433 data_in = Data1D(x=x, y=y) … … 438 439 439 440 def test_allowed_bins(self): 440 x = n umpy.asarray(numpy.asarray([0,1,2,3]))441 y = n umpy.asarray(numpy.asarray([1,1,1,1]))442 dy = n umpy.asarray(numpy.asarray([1,1,1,1]))441 x = 
np.asarray(np.asarray([0,1,2,3])) 442 y = np.asarray(np.asarray([1,1,1,1])) 443 dy = np.asarray(np.asarray([1,1,1,1])) 443 444 g = invariant.Guinier() 444 445 data = Data1D(x=x, y=y, dy=dy) … … 465 466 self.scale = 1.5 466 467 self.rg = 30.0 467 x = n umpy.arange(0.0001, 0.1, 0.0001)468 y = n umpy.asarray([self.scale * math.exp( -(q*self.rg)**2 / 3.0 ) for q in x])468 x = np.arange(0.0001, 0.1, 0.0001) 469 y = np.asarray([self.scale * math.exp( -(q*self.rg)**2 / 3.0 ) for q in x]) 469 470 dy = y*.1 470 471 self.data = Data1D(x=x, y=y, dy=dy) … … 513 514 self.scale = 1.5 514 515 self.rg = 30.0 515 x = n umpy.arange(0.0001, 0.1, 0.0001)516 y = n umpy.asarray([self.scale * math.exp( -(q*self.rg)**2 / 3.0 ) for q in x])516 x = np.arange(0.0001, 0.1, 0.0001) 517 y = np.asarray([self.scale * math.exp( -(q*self.rg)**2 / 3.0 ) for q in x]) 517 518 dy = y*.1 518 519 self.data = Data1D(x=x, y=y, dy=dy) … … 600 601 self.scale = 1.5 601 602 self.m = 3.0 602 x = n umpy.arange(0.0001, 0.1, 0.0001)603 y = n umpy.asarray([self.scale * math.pow(q ,-1.0*self.m) for q in x])603 x = np.arange(0.0001, 0.1, 0.0001) 604 y = np.asarray([self.scale * math.pow(q ,-1.0*self.m) for q in x]) 604 605 dy = y*.1 605 606 self.data = Data1D(x=x, y=y, dy=dy) -
TabularUnified test/sasinvariant/test/utest_use_cases.py ¶
rb699768 r9a5097c 5 5 #TODO: there's no test for smeared extrapolation 6 6 import unittest 7 import numpy8 7 from sas.sascalc.dataloader.loader import Loader 9 8 -
TabularUnified src/sas/sascalc/data_util/registry.py ¶
rb699768 r270c882b 7 7 """ 8 8 9 import os.path 9 from sas.sascalc.dataloader.loader_exceptions import NoKnownLoaderException 10 10 11 11 12 class ExtensionRegistry(object): … … 61 62 def __init__(self, **kw): 62 63 self.loaders = {} 64 63 65 def __setitem__(self, ext, loader): 64 66 if ext not in self.loaders: 65 67 self.loaders[ext] = [] 66 68 self.loaders[ext].insert(0,loader) 69 67 70 def __getitem__(self, ext): 68 71 return self.loaders[ext] 72 69 73 def __contains__(self, ext): 70 74 return ext in self.loaders 75 71 76 def formats(self): 72 77 """ … … 76 81 names.sort() 77 82 return names 83 78 84 def extensions(self): 79 85 """ … … 83 89 exts.sort() 84 90 return exts 91 85 92 def lookup(self, path): 86 93 """ … … 105 112 # Raise an error if there are no matching extensions 106 113 if len(loaders) == 0: 107 raise ValueError , "Unknown file type for "+path114 raise ValueError("Unknown file type for "+path) 108 115 # All done 109 116 return loaders 117 110 118 def load(self, path, format=None): 111 119 """ … … 117 125 """ 118 126 if format is None: 119 loaders = self.lookup(path) 127 try: 128 loaders = self.lookup(path) 129 except ValueError as e: 130 pass 120 131 else: 121 loaders = self.loaders[format] 132 try: 133 loaders = self.loaders[format] 134 except KeyError as e: 135 pass 122 136 for fn in loaders: 123 137 try: 124 138 return fn(path) 125 except :126 pass # give other loaders a chance to succeed139 except Exception as e: 140 pass # give other loaders a chance to succeed 127 141 # If we get here it is because all loaders failed 128 raise # reraises lastexception142 raise NoKnownLoaderException(e.message) # raise generic exception 129 143 144 145 # TODO: Move this to the unit test folder 130 146 def test(): 131 147 reg = ExtensionRegistry() … … 163 179 try: reg.load('hello.missing') 164 180 except ValueError,msg: 165 assert str(msg)=="Unknown file type for hello.missing",'Message: <%s>'%(msg) 181 assert str(msg)=="Unknown file type for hello.missing",\ 
182 'Message: <%s>'%(msg) 166 183 else: raise AssertError,"No error raised for missing extension" 167 184 assert reg.formats() == ['new_cx'] -
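The `ExtensionRegistry` patched above maps file extensions to ordered lists of loader callables: the most recently registered loader for an extension is tried first, and each loader that fails yields to the next. A toy stand-in for that pattern (not the sasview class itself):

```python
class ExtensionRegistry(object):
    """Minimal sketch of the extension-to-loaders registry above."""

    def __init__(self):
        self.loaders = {}            # extension -> list of loader callables

    def __setitem__(self, ext, loader):
        # newest registration wins: insert at the front of the list
        self.loaders.setdefault(ext, []).insert(0, loader)

    def lookup(self, path):
        matches = [fn for ext, fns in self.loaders.items()
                   if path.endswith(ext) for fn in fns]
        if not matches:
            raise ValueError("Unknown file type for " + path)
        return matches

    def load(self, path):
        error = None
        for fn in self.lookup(path):
            try:
                return fn(path)
            except Exception as e:
                error = e            # give other loaders a chance to succeed
        raise error                  # all loaders failed: re-raise the last
```

Catching specific exception types with `as e` (rather than a bare `except:` and bare `raise`) is exactly the cleanup the changeset applies to the real class.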
TabularUnified src/sas/sascalc/dataloader/loader.py ¶
rb699768 r270c882b 1 1 """ 2 2 File handler to support different file extensions. 3 Uses reflectomet ry'sregistry utility.3 Uses reflectometer registry utility. 4 4 5 5 The default readers are found in the 'readers' sub-module … … 29 29 # Default readers are defined in the readers sub-module 30 30 import readers 31 from loader_exceptions import NoKnownLoaderException, FileContentsException 31 32 from readers import ascii_reader 32 33 from readers import cansas_reader 34 from readers import cansas_reader_HDF5 35 33 36 34 37 class Registry(ExtensionRegistry): … … 37 40 Readers and writers are supported. 38 41 """ 39 40 42 def __init__(self): 41 43 super(Registry, self).__init__() 42 44 43 # #Writers45 # Writers 44 46 self.writers = {} 45 47 46 # #List of wildcards48 # List of wildcards 47 49 self.wildcards = ['All (*.*)|*.*'] 48 50 49 # #Creation time, for testing51 # Creation time, for testing 50 52 self._created = time.time() 51 53 … … 61 63 of a particular reader 62 64 63 Defaults to the ascii (multi-column) reader 64 if no reader was registered for the file's 65 extension. 65 Defaults to the ascii (multi-column), cansas XML, and cansas NeXuS 66 readers if no reader was registered for the file's extension. 66 67 """ 67 68 try: 68 69 return super(Registry, self).load(path, format=format) 69 except: 70 try: 71 # No reader was found. Default to the ascii reader. 
72 ascii_loader = ascii_reader.Reader() 73 return ascii_loader.read(path) 74 except: 75 cansas_loader = cansas_reader.Reader() 76 return cansas_loader.read(path) 70 except NoKnownLoaderException as e: 71 pass # try the ASCII reader 72 except FileContentsException as e: 73 pass 74 try: 75 ascii_loader = ascii_reader.Reader() 76 return ascii_loader.read(path) 77 except FileContentsException: 78 pass # try the cansas XML reader 79 try: 80 cansas_loader = cansas_reader.Reader() 81 return cansas_loader.read(path) 82 except FileContentsException: 83 pass # try the cansas NeXuS reader 84 try: 85 cansas_nexus_loader = cansas_reader_HDF5.Reader() 86 return cansas_nexus_loader.read(path) 87 except FileContentsException: 88 # No known reader available. Give up and throw an error 89 msg = "\n\tUnknown data format: %s.\n\tThe file is not a " % path 90 msg += "known format that can be loaded by SasView." 91 raise NoKnownLoaderException(msg) 77 92 78 93 def find_plugins(self, dir):
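`Registry.load` above is rewritten as a fallback chain: registered readers first, then the ASCII, CanSAS XML, and CanSAS NeXuS readers in turn, raising one generic `NoKnownLoaderException` only when every reader has failed. A sketch of that chain with placeholder reader callables (the exception classes mimic the new `loader_exceptions` module):

```python
class FileContentsException(Exception):
    """Raised by a reader when the file is not in its format."""

class NoKnownLoaderException(Exception):
    """Raised when no reader could load the file."""

def load_with_fallbacks(path, readers):
    """Try each reader in order, as the patched Registry.load does;
    readers is an ordered list of callables raising
    FileContentsException on a format mismatch."""
    for read in readers:
        try:
            return read(path)
        except FileContentsException:
            continue                 # try the next reader in the chain
    msg = ("\n\tUnknown data format: %s.\n\tThe file is not a "
           "known format that can be loaded by SasView." % path)
    raise NoKnownLoaderException(msg)
```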