Changeset 41917b8 in sasmodels

- Timestamp: Nov 27, 2017 3:04:54 PM (7 years ago)
- Parents: fa70e04 (diff), 1b5e020 (diff)
- git-author: Jose Borreguero <borreguero@…> (11/27/17 15:04:54)
- git-committer: GitHub <noreply@…> (11/27/17 15:04:54)
- Files: 7 added, 5 deleted, 107 edited

Note: this is a merge changeset; the changes displayed below correspond to
the merge itself. Use the (diff) links above to see all the changes relative
to each parent.
.travis.yml
rb419c2d → rce8c388

     language: python
    -
     sudo: false
     matrix:
       include:
       …(linux PY=2.7 / PY=3.6 and osx PY=2.7 / PY=3.5 entries unchanged)…
    -
    -# whitelist
     branches:
       only:
       - master
     addons:
       apt:
    -    packages:
    -    opencl-headers
    +    packages: opencl-headers
     before_install:
    +- if [[ $encrypted_cb04388797b6_iv ]]; then openssl aes-256-cbc -K $encrypted_cb04388797b6_key -iv $encrypted_cb04388797b6_iv -in .travis/travis_rsa.enc -out .travis/travis_rsa -d; fi;
     - echo $TRAVIS_OS_NAME
    -- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
    -    wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
    -  fi;
    -- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
    -    wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O miniconda.sh;
    -  fi;
    +- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; fi;
    +- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O miniconda.sh; fi;
     - bash miniconda.sh -b -p $HOME/miniconda
     - export PATH="$HOME/miniconda/bin:$PATH"
     - hash -r
     - conda update --yes conda
    -# Useful for debugging any issues with conda
     - conda info -a
     - conda install --yes python=$PY numpy scipy cython mako cffi
    -# Not testing with opencl below, so don't need to install it
    -#- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
    -#    pip install pyopencl;
    -#  fi;
     install:
     - pip install bumps
     - pip install unittest-xml-reporting
     script:
     - python --version
     - python -m sasmodels.model_test -v dll all
    +before_deploy:
    +- echo -e "Host danse.chem.utk.edu\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
    +deploy:
    +  skip_cleanup: true
    +  provider: script
    +  script: "/bin/sh -ex ./deploy.sh"
    +  on:
    +    branch: master
     notifications:
       slack:
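The new before_install step decrypts a deploy key using a raw AES key/IV
pair supplied by Travis. The pattern can be sketched as a round trip; the
key and IV below are generated on the spot for illustration, standing in
for the encrypted_* environment variables that Travis injects:

```shell
# Round trip of the Travis-style secret-file pattern: encrypt a file with a
# raw hex key/iv pair, then decrypt it the same way the CI step does.
KEY=$(openssl rand -hex 32)   # stand-in for $encrypted_..._key
IV=$(openssl rand -hex 16)    # stand-in for $encrypted_..._iv
printf 'fake deploy key\n' > travis_rsa.orig
openssl aes-256-cbc -K "$KEY" -iv "$IV" -in travis_rsa.orig -out travis_rsa.enc
openssl aes-256-cbc -K "$KEY" -iv "$IV" -in travis_rsa.enc -out travis_rsa -d
cmp travis_rsa.orig travis_rsa
```

Because -K/-iv supply the raw key material directly, no passphrase or key
derivation is involved; the decrypted file is byte-identical to the input.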
doc/conf.py
r8ae8532 → r30b60d2

     LATEX_PREAMBLE=r"""
    +\newcommand{\lt}{<}
    +\newcommand{\gt}{>}
     \renewcommand{\AA}{\text{\r{A}}} % Allow \AA in math mode
     \usepackage[utf8]{inputenc} % Allow unicode symbols in text
doc/genmodel.py
r745b7bb → rb866abf

     import sys, os, math, re
     import numpy as np
    +import matplotlib
    +matplotlib.use('Agg')
     import matplotlib.pyplot as plt
     sys.path.insert(0, os.path.abspath('..'))
doc/guide/index.rst
r2e66ef5 → rc0d7ab3

     intro.rst
     install.rst
    +gpu_setup.rst
     pd/polydispersity.rst
     resolution.rst
doc/guide/install.rst
rf8a2baa → rc0d7ab3

     This will allow you to edit the files in the package and have the changes
     show up immediately in python the next time you load your program.

Removed from install.rst (this material presumably moves to the new
gpu_setup.rst added to the guide index in this changeset):

    OpenCL Installation
    *******************

    *Warning! GPU devices do not in general offer the same level of memory
    protection as CPU devices. If your code attempts to write outside
    allocated memory buffers, unpredictable behaviour may result (e.g., your
    video display may freeze, or your system may crash). Do not install
    OpenCL drivers without first checking for known issues (e.g., some
    computer manufacturers install modified graphics drivers, so replacing
    these may not be a good idea!). If in doubt, seek advice from an IT
    professional before proceeding further.*

    Check if you have OpenCL already installed
    ==========================================

    **Windows**

    The following instructions are based on
    http://web.engr.oregonstate.edu/~mjb/cs475/DoIHaveOpenCL.pdf

    * Go to: Start -> Control Panel -> System & Security -> Administrative Tools
    * Double-click on Computer Management
    * Click on Device Manager
    * Click open Display Adapters
    * Right-click on the available adapter and select Properties
    * Click on Driver
    * Go to Driver Details
    * Scroll down and see if OpenCL is installed (look for OpenCL*.dll files)

    **Mac OSX**

    For OS X operating systems higher than 10.6, OpenCL is shipped along
    with the system. However, OpenCL has had a rocky history on Macs. Apple
    provide a useful compatibility table at
    https://support.apple.com/en-us/HT202823

    Installation
    ============

    **Windows**

    Depending on the graphics card in your system, drivers can be obtained
    from different sources:

    * NVIDIA: https://developer.nvidia.com/opencl
    * AMD: http://developer.amd.com/tools-and-sdks/opencl-zone/

    **Mac OSX**

    N/A. You cannot download OpenCL driver updates for your Mac. They are
    packaged with the normal quarterly OS X updates from Apple.

    .. note::
        Intel provides OpenCL drivers for Intel processors at
        https://software.intel.com/en-us/articles/opencl-drivers
        These can sometimes make use of special vector instructions across
        multiple processors, so it is worth installing if the GPU does not
        support double precision. You can install this driver alongside the
        GPU driver for NVIDIA or AMD.

    GPU Selection
    *************

    sasmodels evaluations can run on your graphics card (GPU) or on the
    processor (CPU). In general, calculations performed on the GPU will run
    faster.

    To run on the GPU, your computer must have OpenCL drivers installed.
    For information about OpenCL installation see the
    :ref:`opencl-installation` guidance.

    Where the model is evaluated is a little bit complicated. If the model
    has the line *single=False* then it requires double precision. If the
    GPU is single precision only, then it will try running via OpenCL on the
    CPU. If the OpenCL driver is not available for the CPU then it will run
    as a normal program on the CPU.

    For models with a large number of parameters or with a lot of code, the
    GPU may be too small to run the program effectively. In this case, you
    should try simplifying the model, maybe breaking it into several
    different modules so that you don't need *IF* statements in your code.
    If it is still too big, you can set *opencl=False* in the model file and
    the model will only run as a normal program on the CPU. This will not
    usually be necessary.

    Device Selection
    ================

    If you have multiple GPU devices you can tell SasView which device to
    use. By default, SasView looks for one GPU and one CPU device from the
    available OpenCL platforms.

    SasView prefers AMD or NVIDIA drivers for the GPU, and prefers Intel or
    Apple drivers for the CPU. Both GPU and CPU are included on the
    assumption that the CPU is always available and supports double
    precision.

    The device order is important: the GPU is checked before the CPU on the
    assumption that it will be faster. By examining ~/sasview.log you can
    see which device SasView chose to run the model.

    **If you don't want to use OpenCL, you can set** *SAS_OPENCL=None*
    **in your environment settings, and it will only use normal programs.**

    If you want to use one of the other devices, you can run the following
    from the python console in SasView::

        import pyopencl as cl
        cl.create_some_context()

    This will provide a menu of the different OpenCL drivers available.
    When one is selected, it will say "set PYOPENCL_CTX=...". Use that
    value as the value of *SAS_OPENCL*.

    Compiler Selection
    ==================

    For models run as normal programs, you may need to specify a compiler.
    This is done using the SAS_COMPILER environment variable. Set it to
    *tinycc* for the tinycc compiler, *msvc* for the Microsoft Visual C
    compiler, or *mingw* for the MinGW compiler. TinyCC is provided with
    SasView so that is the default. If you want one of the other compilers,
    be sure to have it available in your *PATH* so SasView can find it!

    *Document History*

    | 2017-05-17 Paul Kienzle
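The environment switches described in the removed Device Selection and
Compiler Selection sections can be illustrated with a hypothetical session
(the variable names SAS_OPENCL and SAS_COMPILER are from the text above;
the values shown are just the documented choices):

```shell
# Disable OpenCL so models run as normal programs, and pick the C compiler
# fallback used to build those programs.
export SAS_OPENCL=None      # use plain CPU code instead of OpenCL
export SAS_COMPILER=tinycc  # or msvc / mingw on Windows
echo "SAS_OPENCL=$SAS_OPENCL SAS_COMPILER=$SAS_COMPILER"
```

SasView and sasmodels read these variables at startup, so they must be set
before the application or script is launched.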
doc/guide/magnetism/magnetism.rst
r990d8df → r1f058ea

     .. figure::
    -    mag_img/mag_vector.bmp
    +    mag_img/mag_vector.png
    …
     .. figure::
    -    mag_img/M_angles_pic.bmp
    +    mag_img/M_angles_pic.png
    …
     M0_sld = $D_M M_0$
    -Up_theta = $\theta_{up}$
    +Up_theta = $\theta_\mathrm{up}$
     M_theta = $\theta_M$
     M_phi = $\phi_M$
    …
     The values of the 'Up_frac_i' and 'Up_frac_f' must be in the range 0 to 1.

    -.. note::
    -    This help document was last changed by Steve King, 02May2015
    +*Document History*

    -* Document History *
    -
    +| 2015-05-02 Steve King
     | 2017-05-08 Paul Kienzle
doc/guide/pd/polydispersity.rst
rf8a2baa → r1f058ea

        \exp\left(-\frac{(x - \bar x)^2}{2\sigma^2}\right)

    -where $\bar x$ is the mean of the distribution and *Norm* is a normalization factor
    -which is determined during the numerical calculation.
    +where $\bar x$ is the mean of the distribution and *Norm* is a normalization
    +factor which is determined during the numerical calculation.
    …
    -The median value for the distribution will be the value given for the respective
    -size parameter, for example, *radius=60*.
    +The median value for the distribution will be the value given for the
    +respective size parameter, for example, *radius=60*.
    …
     Many commercial Dynamic Light Scattering (DLS) instruments produce a size
    -polydispersity parameter, sometimes even given the symbol $p$ This
    +polydispersity parameter, sometimes even given the symbol $p$\ ! This
     parameter is defined as the relative standard deviation coefficient of
     variation of the size distribution and is NOT the same as the polydispersity
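The Gaussian weight function in the hunk above can be sketched numerically;
the *Norm* factor is simply the discrete sum of the raw weights. This is a
standalone illustration, not the sasmodels implementation (the function
name and grid layout are assumptions):

```python
import math

def gaussian_weights(center, sigma, nsigma=3.0, npts=35):
    """Discrete Gaussian polydispersity weights on a uniform grid spanning
    [center - nsigma*sigma, center + nsigma*sigma], normalized so the
    weights sum to one (the *Norm* factor, determined numerically)."""
    xs = [center - nsigma*sigma + 2.0*nsigma*sigma*i/(npts - 1)
          for i in range(npts)]
    raw = [math.exp(-(x - center)**2 / (2.0*sigma**2)) for x in xs]
    norm = sum(raw)
    return xs, [w/norm for w in raw]
```

With *radius=60* and 10% polydispersity this would be called as
``gaussian_weights(60.0, 6.0)``; the weight peaks at the mean value 60.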
doc/guide/plugin.rst
r870a2f4 → r3048ec6

    -- a **short discription**:
    +- a **short description**:
       - this is the **description** string in the *.py* file
       - this is a medium length description which appears when you click
    …
     Models that do not conform to these requirements will *never* be
     incorporated into the built-in library.
    -
    -More complete documentation for the sasmodels package can be found at
    -`<http://www.sasview.org/sasmodels>`_. In particular,
    -`<http://www.sasview.org/sasmodels/api/generate.html#module-sasmodels.generate>`_
    -describes the structure of a model.
    …
     **name = "mymodel"** defines the name of the model that is shown to the user.
    -If it is not provided, it will use the name of the model file, with '_'
    -replaced by spaces and the parts capitalized. So *adsorbed_layer.py* will
    -become *Adsorbed Layer*. The predefined models all use the name of the
    -model file as the name of the model, so the default may be changed.
    +If it is not provided it will use the name of the model file. The name must
    +be a valid variable name, starting with a letter and containing only
    +letters, numbers or underscore. Spaces, dashes, and other symbols are not
    +permitted.
    …
     - **"name"** is the name of the parameter shown on the FitPage.
    +
    +  - the name must be a valid variable name, starting with a letter and
    +    containing only letters, numbers and underscore.
    +
     - parameter names should follow the mathematical convention; e.g.,
       *radius_core* not *core_radius*, or *sld_solvent* not *solvent_sld*.
    …
       dispersion.

    +Some models will have integer parameters, such as the number of pearls in
    +the pearl necklace model, or the number of shells in the multi-layer
    +vesicle model. The optimizers in BUMPS treat all parameters as floating
    +point numbers which can take arbitrary values, even for integer
    +parameters, so your model should round the incoming parameter value to the
    +nearest integer. In C code, you can do this using::
    +
    +    static double
    +    Iq(double q, ..., double fp_n, ...)
    +    {
    +        int n = (int)(fp_n + 0.5);
    +        ...
    +    }
    +
    +and in python::
    +
    +    def Iq(q, ..., n, ...):
    +        n = int(n+0.5)
    +        ...
    +
    +Derivative-based optimizers such as Levenberg-Marquardt will not work for
    +integer parameters since the partial derivative is always zero, but the
    +remaining optimizers (DREAM, differential evolution, Nelder-Mead simplex)
    +will still function.
    …
    -That is, the indivdual models do not need to include polydispersity
    +That is, the individual models do not need to include polydispersity
     calculations, but instead rely on numerical integration to compute the
     appropriately smeared pattern.
    …
    -**Note that the auto generated model file uses** *x* **rather than** *q*.
    +**Note that the auto-generated model file uses** *x* **rather than** *q*.
    …

In the special-functions section, the inline $\text{...}$ wrappers around
function names are replaced with the literal names followed by math, e.g.:

    -    Gamma function $\text{sas_gamma}(x) = \Gamma(x)$.
    +    Gamma function sas_gamma\ $(x) = \Gamma(x)$.

with the same change applied to sas_erf(x) and sas_erfc(x), sas_J0(x),
sas_J1(x), sas_Si(x), sas_3j1x_x(x) (sph_j1c), and sas_2J1x_x(x) (sas_J1c).
For sas_JN:

    -    Bessel function of the first kind and integer order $n$ :
    -    $\text{sas_JN}(n, x)=J_n(x)$ where
    +    Bessel function of the first kind and integer order $n$,
    +    sas_JN\ $(n, x) = J_n(x)$ where
         $J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n\tau - x\sin(\tau))\,d\tau$.
    -    If $n$ = 0 or 1, it uses sas_J0( x) or sas_J1(x), respectively.
    +    If $n$ = 0 or 1, it uses sas_J0($x$) or sas_J1($x$), respectively.

Finally:

    -If your model has 2D orientation alcalculation, then you should also
    +If your model has a 2D orientation calculation, then you should also
     test with::
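The limiting behaviour of sas_3j1x_x described above (value 1 at $x=0$) can
be sketched in plain python; the series branch avoids the 0/0 cancellation
of the closed form near zero. The function name and the series cutoff are
illustrative, not the sasmodels C implementation:

```python
import math

def sph_j1c(x):
    """3 j1(x)/x = 3 (sin x - x cos x) / x^3, with limiting value 1 at x=0.
    For small |x| use the Taylor series 1 - x^2/10 + x^4/280 - ... to avoid
    catastrophic cancellation in sin x - x cos x."""
    if abs(x) < 1e-2:
        x2 = x*x
        return 1.0 - x2/10.0*(1.0 - x2/28.0)
    return 3.0*(math.sin(x) - x*math.cos(x))/x**3
```

The two branches agree to near machine precision at the crossover, and
``sph_j1c(0.0)`` returns exactly 1.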
doc/guide/resolution.rst
rf8a2baa → r1f058ea

     resolution contribution into a model calculation/simulation (which by
     definition will be exact) to make it more representative of what has been
    -measured experimentally - a process called *smearing*. sasmodels does the latter.
    +measured experimentally - a process called *smearing*. Sasmodels does the latter.

     Both smearing and desmearing rely on functions to describe the resolution
    -effect. sasmodels provides three smearing algorithms:
    +effect. Sasmodels provides three smearing algorithms:

     * *Slit Smearing*
    …
     For discrete $q$ values, at the $q$ values of the data points and at the $q$
    -values extended up to $q_N = q_i + \Delta q_v$ the smeared
    +values extended up to $q_N = q_i + \Delta q_u$ the smeared
     intensity can be approximately calculated as
    …
     elliptical Gaussian distribution. The $A$ is a normalization factor.

    -.. figure:: resolution_2d_rotation.gif
    +.. figure:: resolution_2d_rotation.png

         Coordinate axis rotation for 2D resolution calculation.
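The "smeared intensity as a weighted sum over discrete $q$" idea referenced
in this hunk can be sketched generically. This is a toy pinhole-style
Gaussian smearing, not the sasmodels resolution code; the function name and
the simple normalized-sum weighting are assumptions:

```python
import math

def smear_pinhole(q, iq, dq):
    """Generic Gaussian (pinhole) smearing sketch: each smeared point is a
    normalized weighted sum over the unsmeared curve, with weights drawn
    from a Gaussian of width dq[i] centered on q[i]."""
    smeared = []
    for qi, dqi in zip(q, dq):
        w = [math.exp(-0.5*((qj - qi)/dqi)**2) for qj in q]
        norm = sum(w)  # discrete normalization factor
        smeared.append(sum(wi*ii for wi, ii in zip(w, iq))/norm)
    return smeared
```

Because the weights are normalized, a constant curve is unchanged by
smearing, while a sharp peak is broadened and reduced in height.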
doc/guide/scripting.rst
r2e66ef5 → r4aa5dce

     $ bumps example/model.py --preview

    +Note that bumps and sasmodels are included as part of the SasView
    +distribution. On windows, bumps can be called from the cmd prompt
    +as follows::
    +
    +    SasViewCom bumps.cli example/model.py --preview
    +
     Using sasmodels directly
     ========================
    …
     plt.loglog(q, Iq)
     plt.show()
    +
    +On windows, this can be called from the cmd prompt using sasview as::
    +
    +    SasViewCom example/cylinder_eval.py
doc/rst_prolog
ra0fb06a → r30b60d2

     .. Set up some substitutions to make life easier...
    -.. Remove |biggamma|, etc. when they are no longer needed.

Removed the Greek-letter and symbol substitutions that are no longer needed
(|alpha| through |omega|, |biggamma|, |bigdelta|, |bigzeta|, |bigpsi|,
|bigphi|, |bigsigma|, |Gamma|, |Delta|, |Zeta|, |Psi|, |drho|, |P0|, |A2|,
|equiv|, |noteql| and |TM|), keeping the unit substitutions:

     .. |Ang| unicode:: U+212B
     .. |Ang^-1| replace:: |Ang|\ :sup:`-1`
    …
     .. |cm^-3| replace:: cm\ :sup:`-3`
     .. |sr^-1| replace:: sr\ :sup:`-1`
    …
     .. |cdot| unicode:: U+00B7
example/oriented_usans.py
r1cd24b4 → r74b0495

     # Spherical particle data, not ellipsoids
    -sans, usans = load_data('../../sasview/sasview/test/1d_data/latex_smeared.xml')
    +sans, usans = load_data('latex_smeared.xml', index='all')
     usans.qmin, usans.qmax = np.min(usans.x), np.max(usans.x)
     usans.mask = (usans.x < 0.0)
example/simul_fit.py
r1a4d4c0 → r74b0495

Removed: the shebang and coding lines, the "# To Sasview/documents/scripts"
comment, the old data loading from latex_smeared_out_0.txt and
latex_smeared_out_1.txt (with per-dataset qmin/qmax), the old
single-precision sphere setup fitting sld and sld_solvent with a free
per-dataset scale, and the trailing print(problem._parameters).

The script now reads:

    from bumps.names import *
    from sasmodels.core import load_model
    …
    from sasmodels.data import load_data, plot_data

    # latex data, same sample usans and sans
    # particles radius ~2300, uniform dispersity
    datasets = load_data('latex_smeared.xml', index='all')
    #[print(data) for data in datasets]

    # A single sphere model to share between the datasets. We will use
    # FreeVariables below to set the parameters that are independent between
    # the datasets.
    kernel = load_model('sphere')
    pars = dict(scale=0.01, background=0.0, sld=5.0, sld_solvent=0.0,
                radius=1500.,
                #radius_pd=0.1, radius_pd_n=35,
                )
    model = Model(kernel, **pars)

    # radius and polydispersity (if any) are shared
    model.radius.range(0, inf)
    #model.radius_pd.range(0, 1)

    # Contrast and dilution are the same for both measurements, but are not
    # separable with a single measurement (i.e., I(q) ~ F(q) contrast^2 Vf),
    # so fit one of scale, sld or solvent sld. With absolute scaling from
    # data reduction, can use the same parameter for both datasets.
    model.scale.range(0, inf)
    #model.sld.range(-inf, inf)
    #model.sld_solvent.range(-inf, inf)

    # Background is different for sans and usans so set it as a free
    # variable in the model.
    free = FreeVariables(
        names=[data.run[0] for data in datasets],
        background=model.background,
    )
    free.background.range(-inf, inf)

    # Note: can access the parameters for the individual models using
    # free.background[0] and free.background[1], setting constraints or
    # ranges as appropriate.

    # For more complex systems where different datasets require independent
    # models, separate models can be defined, with parameters tied together
    # using constraint expressions. For example, the following could be used
    # to fit data set 1 to spheres and data set 2 to cylinders of the same
    # volume:
    #    model1 = Model(load_model('sphere'))
    #    model2 = Model(load_model('cylinder'))
    #    model1.sld = model2.sld
    #    model1.sld_solvent = model2.sld_solvent
    #    model1.scale = model2.scale
    #    # set cylinders and spheres to the same volume
    #    model1.radius = (3/4*model2.radius**2*model2.length)**(1/3)
    #    model1.background.range(0, 2)
    #    model2.background.range(0, 2)

    # Setup the experiments, sharing the same model across all datasets.
    M = [Experiment(data=data, model=model, name=data.run[0])
         for data in datasets]

    problem = FitProblem(M, freevars=free)
explore/precision.py
r487e695 → r237c9cf

     elif xrange == "linear":
         lin_min, lin_max, lin_steps = 1, 1000, 2000
    +    lin_min, lin_max, lin_steps = 0.001, 2, 2000
     elif xrange == "log":
         log_min, log_max, log_steps = -3, 5, 400
    …
         np_function=lambda x: np.fmod(x, 2*np.pi),
         ocl_function=make_ocl("return fmod(q, 2*M_PI);", "sas_fmod"),
    +)
    +add_function(
    +    name="debye",
    +    mp_function=lambda x: 2*(mp.exp(-x**2) + x**2 - 1)/x**4,
    +    np_function=lambda x: 2*(np.expm1(-x**2) + x**2)/x**4,
    +    ocl_function=make_ocl("""
    +    const double qsq = q*q;
    +    if (qsq < 1.0) { // Pade approximation
    +        const double x = qsq;
    +        if (0) { // 0.36 single
    +            // PadeApproximant[2*Exp[-x^2] + x^2-1)/x^4, {x, 0, 4}]
    +            return (x*x/180. + 1.)/((1./30.*x + 1./3.)*x + 1);
    +        } else if (0) { // 1.0 for single
    +            // PadeApproximant[2*Exp[-x^2] + x^2-1)/x^4, {x, 0, 6}]
    +            const double A1=1./24., A2=1./84, A3=-1./3360;
    +            const double B1=3./8., B2=3./56., B3=1./336.;
    +            return (((A3*x + A2)*x + A1)*x + 1.)/(((B3*x + B2)*x + B1)*x + 1.);
    +        } else if (1) { // 1.0 for single, 0.25 for double
    +            // PadeApproximant[2*Exp[-x^2] + x^2-1)/x^4, {x, 0, 8}]
    +            const double A1=1./15., A2=1./60, A3=0., A4=1./75600.;
    +            const double B1=2./5., B2=1./15., B3=1./180., B4=1./5040.;
    +            return ((((A4*x + A3)*x + A2)*x + A1)*x + 1.)
    +                  /((((B4*x + B3)*x + B2)*x + B1)*x + 1.);
    +        } else { // 1.0 for single, 0.5 for double
    +            // PadeApproximant[2*Exp[-x^2] + x^2-1)/x^4, {x, 0, 8}]
    +            const double A1=1./12., A2=2./99., A3=1./2640., A4=1./23760., A5=-1./1995840.;
    +            const double B1=5./12., B2=5./66., B3=1./132., B4=1./2376., B5=1./95040.;
    +            return (((((A5*x + A4)*x + A3)*x + A2)*x + A1)*x + 1.)
    +                  /(((((B5*x + B4)*x + B3)*x + B2)*x + B1)*x + 1.);
    +        }
    +    } else if (qsq < 1.) { // Taylor series; 0.9 for single, 0.25 for double
    +        const double x = qsq;
    +        const double C0 = +1.;
    +        const double C1 = -1./3.;
    +        const double C2 = +1./12.;
    +        const double C3 = -1./60.;
    +        const double C4 = +1./360.;
    +        const double C5 = -1./2520.;
    +        const double C6 = +1./20160.;
    +        const double C7 = -1./181440.;
    +        //return ((((C5*x + C4)*x + C3)*x + C2)*x + C1)*x + C0;
    +        //return (((((C6*x + C5)*x + C4)*x + C3)*x + C2)*x + C1)*x + C0;
    +        return ((((((C7*x + C6)*x + C5)*x + C4)*x + C3)*x + C2)*x + C1)*x + C0;
    +    } else {
    +        return 2.*(expm1(-qsq) + qsq)/(qsq*qsq);
    +    }
    +    """, "sas_debye"),
     )
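The stable large-argument branch of the new sas_debye kernel (and its
np_function twin) relies on expm1 to tame the cancellation in
$2(e^{-x^2} + x^2 - 1)/x^4$. A minimal python sketch of the same idea,
with a hypothetical function name:

```python
import math

def debye(x):
    """Debye function D(x) = 2 (exp(-x^2) + x^2 - 1) / x^4, limit 1 at 0.
    Writing exp(-x^2) - 1 as expm1(-x^2) keeps the small-x cancellation
    under control without needing the Pade/Taylor branches."""
    qsq = x*x
    if qsq == 0.0:
        return 1.0
    return 2.0*(math.expm1(-qsq) + qsq)/(qsq*qsq)
```

For small $x$ this tends to $1 - x^2/3$, and for $x=1$ it equals
$2e^{-1}$, matching the mp_function reference above.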
sasmodels/bumps_model.py
r3330bb4 → r74b0495

     """
     _cache = None  # type: Dict[str, np.ndarray]
    -    def __init__(self, data, model, cutoff=1e-5):
    +    def __init__(self, data, model, cutoff=1e-5, name=None):
             # type: (Data, Model, float) -> None
             # remember inputs so we can inspect from outside
    +        self.name = data.filename if name is None else name
             self.model = model
             self.cutoff = cutoff
    …
         """
         data, theory, resid = self._data, self.theory(), self.residuals()
    -    plot_theory(data, theory, resid, view, Iq_calc=self.Iq_calc)
    +    # TODO: hack to display oriented usans 2-D pattern
    +    Iq_calc = self.Iq_calc if isinstance(self.Iq_calc, tuple) else None
    +    plot_theory(data, theory, resid, view, Iq_calc=Iq_calc)

     def simulate_data(self, noise=None):
sasmodels/compare.py
r630156b rced5bd2  56 56  57 57 USAGE = """ 58  usage: compare.py model N1 N2 [options...] [key=val] 59   60  Compare the speed and value for a model between the SasView original and the 61  sasmodels rewrite.  58 usage: sascomp model [options...] [key=val]  59   60 Generate and compare SAS models. If a single model is specified it shows  61 a plot of that model. Different models can be compared, or the same model  62 with different parameters. The same model with the same parameters can  63 be compared with different calculation engines to see the effects of precision  64 on the resultant values. 62 65  63 66 model or model1,model2 are the names of the models to compare (see below). 64  N1 is the number of times to run sasmodels (default=1).Â65  N2 is the number times to run sasview (default=1).Â66 67  67 68 Options (* for default): 68 69  69  -plot*/-noplot plots or suppress the plot of the model  70 === data generation ===  71 -data="path" uses q, dq from the data file  72 -noise=0 sets the measurement error dI/I  73 -res=0 sets the resolution width dQ/Q if calculating with resolution 70 74 -lowq*/-midq/-highq/-exq use q values up to 0.05, 0.2, 1.0, 10.0  75 -q=min:max alternative specification of qrange 71 76 -nq=128 sets the number of Q points in the data set  77 -1d*/-2d computes 1d or 2d data 72 78 -zero indicates that q=0 should be included 73  -1d*/-2d computes 1d or 2d data  79   80 === model parameters === 74 81 -preset*/-random[=seed] preset or random parameters  82 -sets=n generates n random datasets with the seed given by -random=seed  83 -pars/-nopars* prints the parameter set or not  84 -default/-demo* use demo vs default parameters  85   86 === calculation options === 75 87 -mono*/-poly force monodisperse or allow polydisperse demo parameters  88 -cutoff=1e-5* cutoff value for including a point in polydispersity 76 89 -magnetic/-nonmagnetic* suppress magnetism 77  -cutoff=1e-5* cutoff value for including a point in polydispersityÂ78  -pars/-nopars* 
prints the parameter set or notÂ79  -abs/-rel* plot relative or absolute errorÂ80  -linear/-log*/-q4 intensity scalingÂ81  -hist/-nohist* plot histogram of relative errorÂ82  -res=0 sets the resolution width dQ/Q if calculating with resolutionÂ83 90 -accuracy=Low accuracy of the resolution calculation Low, Mid, High, Xhigh 84  -edit starts the parameter explorer 85  -default/-demo* use demo vs default parameters 86  -help/-html shows the model docs instead of running the model 87  -title="note" adds note to the plot title, after the model name 88  -data="path" uses q, dq from the data file 89   90  Any two calculation engines can be selected for comparison: 91    91 -neval=1 sets the number of evals for more accurate timing  92   93 === precision options ===  94 -calc=default uses the default calcution precision 92 95 -single/-double/-half/-fast sets an OpenCL calculation engine 93 96 -single!/-double!/-quad! sets an OpenMP calculation engine 94 97 -sasview sets the sasview calculation engine 95 98  96  The default is -single -double. Note that the interpretation of quad 97  precision depends on architecture, and may vary from 64-bit to 128-bit, 98  with 80-bit floats being common (1e-19 precision).  99 === plotting ===  100 -plot*/-noplot plots or suppress the plot of the model  101 -linear/-log*/-q4 intensity scaling on plots  102 -hist/-nohist* plot histogram of relative error  103 -abs/-rel* plot relative or absolute error  104 -title="note" adds note to the plot title, after the model name  105   106 === output options ===  107 -edit starts the parameter explorer  108 -help/-html shows the model docs instead of running the model  109   110 The interpretation of quad precision depends on architecture, and may  111 vary from 64-bit to 128-bit, with 80-bit floats being common (1e-19 precision).  112 On unix and mac you may need single quotes around the DLL computation  113 engines, such as -calc='single!,double!' 
since !, is treated as a history  114 expansion request in the shell. 99 115  100 116 Key=value pairs allow you to set specific values for the model parameters. 101  Key=value1,value2 to compare different values of the same parameter. 102  value can be an expression including other parameters  117 Key=value1,value2 to compare different values of the same parameter. The  118 value can be an expression including other parameters.  119   120 Items later on the command line override those that appear earlier.  121   122 Examples:  123   124 # compare single and double precision calculation for a barbell  125 sascomp barbell -calc=single,double  126   127 # generate 10 random lorentz models, with seed=27  128 sascomp lorentz -sets=10 -random=27  129   130 # compare ellipsoid with R = R_polar = R_equatorial to sphere of radius R  131 sascomp sphere,ellipsoid radius_polar=radius radius_equatorial=radius  132   133 # model timing test requires multiple evals to perform the estimate  134 sascomp pringle -calc=single,double -neval=100,100 -noplot 103 135 """ 104 136  … …  111 143 ------------------- 112 144  113  """ 114  + USAGE)  145 """ + USAGE) 115 146  116 147 kerneldll.ALLOW_SINGLE_PRECISION_DLLS = True … …  264 295  265 296  266  def _randomize_one(model_info, p, v): 297 def _randomize_one(model_info, name, value): 267 298 # type: (ModelInfo, str, float) -> float 268 299 # type: (ModelInfo, str, str) -> str … …  270 301 Randomize a single parameter. 271 302 """ 272  if any(p.endswith(s) for s in ('_pd', '_pd_n', '_pd_nsigma', '_pd_type')): 273  return v 274   275  # Find the parameter definition 276  for par in model_info.parameters.call_parameters: 277  if par.name == p: 278  break 279  else: 280  raise ValueError("unknown parameter %r"%p) 281    303 # Set the amount of polydispersity/angular dispersion, but by default pd_n  304 # is zero so there is no polydispersity. This allows us to turn on/off
This allows us to turn on/off  305 # pd by setting pd_n, and still have randomly generated values  306 if name.endswith('_pd'):  307 par = model_info.parameters[name[:-3]]  308 if par.type == 'orientation':  309 # Let oriention variation peak around 13 degrees; 95% < 42 degrees  310 return 180*np.random.beta(2.5, 20)  311 else:  312 # Let polydispersity peak around 15%; 95% < 0.4; max=100%  313 return np.random.beta(1.5, 7)  314   315 # pd is selected globally rather than per parameter, so set to 0 for no pd  316 # In particular, when multiple pd dimensions, want to decrease the number  317 # of points per dimension for faster computation  318 if name.endswith('_pd_n'):  319 return 0  320   321 # Don't mess with distribution type for now  322 if name.endswith('_pd_type'):  323 return 'gaussian'  324   325 # type-dependent value of number of sigmas; for gaussian use 3.  326 if name.endswith('_pd_nsigma'):  327 return 3.  328   329 # background in the range [0.01, 1]  330 if name == 'background':  331 return 10**np.random.uniform(-2, 0)  332   333 # scale defaults to 0.1% to 30% volume fraction  334 if name == 'scale':  335 return 10**np.random.uniform(-3, -0.5)  336   337 # If it is a list of choices, pick one at random with equal probability  338 # In practice, the model specific random generator will override.  339 par = model_info.parameters[name] 282 340 if len(par.limits) > 2: # choice list 283 341 return np.random.randint(len(par.limits)) 284 342  285  limits = par.limits 286  if not np.isfinite(limits).all(): 287  low, high = parameter_range(p, v) 288  limits = (max(limits[0], low), min(limits[1], high)) 289    343 # If it is a fixed range, pick from it with equal probability.  344 # For logarithmic ranges, the model will have to override.  
 345 if np.isfinite(par.limits).all():  346 return np.random.uniform(*par.limits)  347   348 # If the parameter is marked as an sld use the range of neutron slds  349 # TODO: ought to randomly contrast match a pair of SLDs  350 if par.type == 'sld':  351 return np.random.uniform(-0.5, 12)  352   353 # Limit magnetic SLDs to a smaller range, from zero to iron=5/A^2  354 if par.name.startswith('M0:'):  355 return np.random.uniform(0, 5)  356   357 # Guess at the random length/radius/thickness. In practice, all models  358 # are going to set their own reasonable ranges.  359 if par.type == 'volume':  360 if ('length' in par.name or  361 'radius' in par.name or  362 'thick' in par.name):  363 return 10**np.random.uniform(2, 4)  364   365 # In the absence of any other info, select a value in [0, 2v], or  366 # [-2|v|, 2|v|] if v is negative, or [0, 1] if v is zero. Mostly the  367 # model random parameter generators will override this default.  368 low, high = parameter_range(par.name, value)  369 limits = (max(par.limits[0], low), min(par.limits[1], high)) 290 370 return np.random.uniform(*limits) 291 371  292   293  def randomize_pars(model_info, pars, seed=None): 294  # type: (ModelInfo, ParameterSet, int) -> ParameterSet  372 def _random_pd(model_info, pars):  373 pd = [p for p in model_info.parameters.kernel_parameters if p.polydisperse]  374 pd_volume = []  375 pd_oriented = []  376 for p in pd:  377 if p.type == 'orientation':  378 pd_oriented.append(p.name)  379 elif p.length_control is not None:  380 n = int(pars.get(p.length_control, 1) + 0.5)  381 pd_volume.extend(p.name+str(k+1) for k in range(n))  382 elif p.length > 1:  383 pd_volume.extend(p.name+str(k+1) for k in range(p.length))  384 else:  385 pd_volume.append(p.name)  386 u = np.random.rand()  387 n = len(pd_volume)  388 if u < 0.01 or n < 1:  389 pass # 1% chance of no polydispersity  390 elif u < 0.86 or n < 2:  391 pars[np.random.choice(pd_volume)+"_pd_n"] = 35  392 elif u < 0.99 or n < 3:  393 
choices = np.random.choice(len(pd_volume), size=2)  394 pars[pd_volume[choices[0]]+"_pd_n"] = 25  395 pars[pd_volume[choices[1]]+"_pd_n"] = 10  396 else:  397 choices = np.random.choice(len(pd_volume), size=3)  398 pars[pd_volume[choices[0]]+"_pd_n"] = 25  399 pars[pd_volume[choices[1]]+"_pd_n"] = 10  400 pars[pd_volume[choices[2]]+"_pd_n"] = 5  401 if pd_oriented:  402 pars['theta_pd_n'] = 20  403 if np.random.rand() < 0.1:  404 pars['phi_pd_n'] = 5  405 if np.random.rand() < 0.1:  406 if any(p.name == 'psi' for p in model_info.parameters.kernel_parameters):  407 #print("generating psi_pd_n")  408 pars['psi_pd_n'] = 5  409   410 ## Show selected polydispersity  411 #for name, value in pars.items():  412 # if name.endswith('_pd_n') and value > 0:  413 # print(name, value, pars.get(name[:-5], 0), pars.get(name[:-2], 0))  414   415   416 def randomize_pars(model_info, pars):  417 # type: (ModelInfo, ParameterSet) -> ParameterSet 295 418 """ 296 419 Generate random values for all of the parameters. … …  301 424 :func:`constrain_pars` needs to be called afterward. 302 425 """ 303  with push_seed(seed): 304  # Note: the sort guarantees order of calls to random number generator 305  random_pars = dict((p, _randomize_one(model_info, p, v)) 306  for p, v in sorted(pars.items()))  426 # Note: the sort guarantees order of calls to random number generator  427 random_pars = dict((p, _randomize_one(model_info, p, v))  428 for p, v in sorted(pars.items()))  429 if model_info.random is not None:  430 random_pars.update(model_info.random())  431 _random_pd(model_info, random_pars) 307 432 return random_pars  433  308 434  309 435 def constrain_pars(model_info, pars): … …  318 444 Warning: this updates the *pars* dictionary in place. 
319 445 """  446 # TODO: move the model specific code to the individual models 320 447 name = model_info.id 321 448 # if it is a product model, then just look at the form factor since … …  338 465 elif name == 'guinier': 339 466 # Limit guinier to an Rg such that Iq > 1e-30 (single precision cutoff)  467 # I(q) = A e^-(Rg^2 q^2/3) > e^-(30 ln 10)  468 # => ln A - (Rg^2 q^2/3) > -30 ln 10  469 # => Rg^2 q^2/3 < 30 ln 10 + ln A  470 # => Rg < sqrt(90 ln 10 + 3 ln A)/q 340 471 #q_max = 0.2 # mid q maximum 341 472 q_max = 1.0 # high q maximum … …  367 498 parameters = model_info.parameters 368 499 magnetic = False  500 magnetic_pars = [] 369 501 for p in parameters.user_parameters(pars, is2d): 370 502 if any(p.id.startswith(x) for x in ('M0:', 'mtheta:', 'mphi:')): 371 503 continue 372  if p.id.startswith('up:') and not magnetic:  504 if p.id.startswith('up:'):  505 magnetic_pars.append("%s=%s"%(p.id, pars.get(p.id, p.default))) 373 506 continue 374 507 fields = dict( … …  385 518 lines.append(_format_par(p.name, **fields)) 386 519 magnetic = magnetic or fields['M0'] != 0.  520 if magnetic and magnetic_pars:  521 lines.append(" ".join(magnetic_pars)) 387 522 return "\n".join(lines) 388 523  … …  399 534 % (pd, n, nsigma, nsigma, pdtype) 400 535 if M0 != 0.: 401  line += " M0:%.3f mphi:%.1f mtheta:%.1f" % (M0, mphi, mtheta) 536 line += " M0:%.3f mtheta:%.1f mphi:%.1f" % (M0, mtheta, mphi) 402 537 return line 403 538  404  def suppress_pd(pars): 539 def suppress_pd(pars, suppress=True): 405 540 # type: (ParameterSet) -> ParameterSet 406 541 """ 407  Suppress theta_pd for now until the normalization is resolved. 408   409  May also suppress complete polydispersity of the model to test 410  models more quickly.  542 If suppress is True, completely eliminate polydispersity of the model to test  543 models more quickly.
 If suppress is False, make sure at least one  544 parameter is polydisperse, setting the first polydispersity parameter to  545 15% if no polydispersity is given (with no explicit demo parameters given  546 in the model, there will be no default polydispersity). 411 547 """ 412 548 pars = pars.copy() 413  for p in pars: 414  if p.endswith("_pd_n"): pars[p] = 0  549 #print("pars=", pars)  550 if suppress:  551 for p in pars:  552 if p.endswith("_pd_n"):  553 pars[p] = 0  554 else:  555 any_pd = False  556 first_pd = None  557 for p in pars:  558 if p.endswith("_pd_n"):  559 pd = pars.get(p[:-2], 0.)  560 any_pd |= (pars[p] != 0 and pd != 0.)  561 if first_pd is None:  562 first_pd = p  563 if not any_pd and first_pd is not None:  564 if pars[first_pd] == 0:  565 pars[first_pd] = 35  566 if first_pd[:-2] not in pars or pars[first_pd[:-2]] == 0:  567 pars[first_pd[:-2]] = 0.15 415 568 return pars 416 569  417  def suppress_magnetism(pars): 570 def suppress_magnetism(pars, suppress=True): 418 571 # type: (ParameterSet) -> ParameterSet 419 572 """ 420  Suppress theta_pd for now until the normalization is resolved. 421   422  May also suppress complete polydispersity of the model to test 423  models more quickly.  573 If suppress is True, completely eliminate magnetism of the model to test  574 models more quickly. If suppress is False, make sure at least one sld  575 parameter is magnetic, setting the first parameter to have a strong  576 magnetic sld (8/A^2) at 60 degrees (with no explicit demo parameters given  577 in the model, there will be no default magnetism). 
424 578 """ 425 579 pars = pars.copy() 426  for p in pars: 427  if p.startswith("M0:"): pars[p] = 0  580 if suppress:  581 for p in pars:  582 if p.startswith("M0:"):  583 pars[p] = 0  584 else:  585 any_mag = False  586 first_mag = None  587 for p in pars:  588 if p.startswith("M0:"):  589 any_mag |= (pars[p] != 0)  590 if first_mag is None:  591 first_mag = p  592 if not any_mag and first_mag is not None:  593 pars[first_mag] = 8. 428 594 return pars 429 595  … …  561 727 model = core.build_model(model_info, dtype=dtype, platform="ocl") 562 728 calculator = DirectModel(data, model, cutoff=cutoff) 563  calculator.engine = "OCL%s"%DTYPE_MAP[ dtype] 729 calculator.engine = "OCL%s"%DTYPE_MAP[str(model.dtype)] 564 730 return calculator 565 731  … …  571 737 model = core.build_model(model_info, dtype=dtype, platform="dll") 572 738 calculator = DirectModel(data, model, cutoff=cutoff) 573  calculator.engine = "OMP%s"%DTYPE_MAP[ dtype] 739 calculator.engine = "OMP%s"%DTYPE_MAP[str(model.dtype)] 574 740 return calculator 575 741  … …  603 769 'accuracy', 'is2d' and 'view' parsed from the command line. 
604 770 """ 605  qmax, nq, res = opts['qmax'], opts['nq'], opts['res'] 771 qmin, qmax, nq, res = opts['qmin'], opts['qmax'], opts['nq'], opts['res'] 606 772 if opts['is2d']: 607 773 q = np.linspace(-qmax, qmax, nq) # type: np.ndarray … …  612 778 else: 613 779 if opts['view'] == 'log' and not opts['zero']: 614  qmax = math.log10(qmax) 615  q = np.logspace(qmax-3, qmax, nq)  780 q = np.logspace(math.log10(qmin), math.log10(qmax), nq) 616 781 else: 617  q = np.linspace(0.001*qmax, qmax, nq) 782 q = np.linspace(qmin, qmax, nq) 618 783 if opts['zero']: 619 784 q = np.hstack((0, q)) … …  632 797 if dtype == 'sasview': 633 798 return eval_sasview(model_info, data) 634  elif dtype.endswith('!'):  799 elif dtype is None or not dtype.endswith('!'):  800 return eval_opencl(model_info, data, dtype=dtype, cutoff=cutoff)  801 else: 635 802 return eval_ctypes(model_info, data, dtype=dtype[:-1], cutoff=cutoff) 636  else: 637  return eval_opencl(model_info, data, dtype=dtype, cutoff=cutoff) 638 803
663 828 """ 664  result = run_models(opts, verbose=True) 665  if opts['plot']: # Note: never called from explore 666  plot_models(opts, result, limits=limits)  829 limits = np.Inf, -np.Inf  830 for k in range(opts['sets']):  831 opts['pars'] = parse_pars(opts)  832 if opts['pars'] is None:  833 return  834 result = run_models(opts, verbose=True)  835 if opts['plot']:  836 limits = plot_models(opts, result, limits=limits, setnum=k)  837 if opts['plot']:  838 import matplotlib.pyplot as plt  839 plt.show() 667 840  668 841 def run_models(opts, verbose=False): 669 842 # type: (Dict[str, Any]) -> Dict[str, Any] 670 843  671  n_base, n_comp = opts['count'] 672  pars, pars2 = opts['pars']  844 base, comp = opts['engines']  845 base_n, comp_n = opts['count']  846 base_pars, comp_pars = opts['pars'] 673 847 data = opts['data'] 674 848  675  # silence the linter 676  base = opts['engines'][0] if n_base else None 677  comp = opts['engines'][1] if n_comp else None  849 comparison = comp is not None 678 850  679 851 base_time = comp_time = None … …  681 853  682 854 # Base calculation 683  if n_base > 0:  855 try:  856 base_raw, base_time = time_calculation(base, base_pars, base_n)  857 base_value = np.ma.masked_invalid(base_raw)  858 if verbose:  859 print("%s t=%.2f ms, intensity=%.0f"  860 % (base.engine, base_time, base_value.sum()))  861 _show_invalid(data, base_value)  862 except ImportError:  863 traceback.print_exc()  864   865 # Comparison calculation  866 if comparison: 684 867 try: 685  base_raw, base_time = time_calculation(base, pars, n_base) 686  base_value = np.ma.masked_invalid(base_raw) 687  if verbose: 688  print("%s t=%.2f ms, intensity=%.0f" 689  % (base.engine, base_time, base_value.sum())) 690  _show_invalid(data, base_value) 691  except ImportError: 692  traceback.print_exc() 693  n_base = 0 694   695  # Comparison calculation 696  if n_comp > 0: 697  try: 698  comp_raw, comp_time = time_calculation(comp, pars2, n_comp)  868 comp_raw, comp_time = 
time_calculation(comp, comp_pars, comp_n) 699 869 comp_value = np.ma.masked_invalid(comp_raw) 700 870 if verbose: … …  704 874 except ImportError: 705 875 traceback.print_exc() 706  n_comp = 0 707 876  708 877 # Compare, but only if computing both forms 709  if n_base > 0 and n_comp > 0: 878 if comparison: 710 879 resid = (base_value - comp_value) 711 880 relerr = resid/np.where(comp_value != 0., abs(comp_value), 1.0) … …  743 912  744 913  745  def plot_models(opts, result, limits=None): 914 def plot_models(opts, result, limits=(np.Inf, -np.Inf), setnum=0): 746 915 # type: (Dict[str, Any], Dict[str, Any], Optional[Tuple[float, float]]) -> Tuple[float, float] 747  base_value, comp_value = result['base_value'], result['comp_value'] 916 base_value, comp_value = result['base_value'], result['comp_value'] 748 917 base_time, comp_time = result['base_time'], result['comp_time'] 749 918 resid, relerr = result['resid'], result['relerr'] 750 919  751 920 have_base, have_comp = (base_value is not None), (comp_value is not None) 752  base = opts['engines'][0] if have_base else None 753  comp = opts['engines'][1] if have_comp else None  921 base, comp = opts['engines'] 754 922 data = opts['data'] 755 923 use_data = (opts['datafile'] is not None) and (have_base ^ have_comp) … …  758 926 view = opts['view'] 759 927 import matplotlib.pyplot as plt 760  if limits is None and not use_data: 761  vmin, vmax = np.Inf, -np.Inf 762  if have_base: 763  vmin = min(vmin, base_value.min()) 764  vmax = max(vmax, base_value.max())  928 vmin, vmax = limits  929 if have_base:  930 vmin = min(vmin, base_value.min())  931 vmax = max(vmax, base_value.max())  932 if have_comp:  933 vmin = min(vmin, comp_value.min())  934 vmax = max(vmax, comp_value.max())  935 limits = vmin, vmax  936   937 if have_base: 765 938 if have_comp:  939 
plt.subplot(131) 772 940 plot_theory(data, base_value, view=view, use_data=use_data, limits=limits) 773 941 plt.title("%s t=%.2f ms"%(base.engine, base_time)) 774 942 #cbar_title = "log I" 775 943 if have_comp: 776  if have_base: plt.subplot(132)  944 if have_base:  945 plt.subplot(132) 777 946 if not opts['is2d'] and have_base: 778 947 plot_theory(data, base_value, view=view, use_data=use_data, limits=limits) … …  786 955 else: 787 956 err, errstr, errview = abs(relerr), "rel err", "log"  957 if (err == 0.).all():  958 errview = 'linear' 788 959 if 0: # 95% cutoff 789 960 sorted = np.sort(err.flatten()) 790 961 cutoff = sorted[int(sorted.size*0.95)] 791  err[err >cutoff] = cutoff 962 err[err > cutoff] = cutoff 792 963 #err,errstr = base/comp,"ratio" 793  plot_theory(data, None, resid=err, view=errview, use_data=use_data) 794  if view == 'linear': 795  plt.xscale('linear')  964 plot_theory(data, None, resid=err, view=view, use_data=use_data)  965 plt.yscale(errview) 796 966 plt.title("max %s = %.3g"%(errstr, abs(err).max())) 797 967 #cbar_title = errstr if errview=="linear" else "log "+errstr … …  813 983 plt.title('Distribution of relative error between calculation engines') 814 984  815  if not opts['explore']: 816  plt.show() 817   818 985 return limits 819   820   821 986  822 987  823 988 # =========================================================================== # 824 989  825  NAME_OPTIONS = set([  990   991 # Set of command line options.  992 # Normal options such as -plot/-noplot are specified as 'name'.  993 # For options such as -nq=500 which require a value use 'name='. 
 994 #  995 OPTIONS = [  996 # Plotting 826 997 'plot', 'noplot', 827  'half', 'fast', 'single', 'double', 828  'single!', 'double!', 'quad!', 'sasview',  998 'linear', 'log', 'q4',  999 'rel', 'abs',  1000 'hist', 'nohist',  1001 'title=',  1002   1003 # Data generation  1004 'data=', 'noise=', 'res=', 'nq=', 'q=', 829 1005 'lowq', 'midq', 'highq', 'exq', 'zero', 830 1006 '2d', '1d', 831  'preset', 'random', 832  'poly', 'mono',  1007   1008 # Parameter set  1009 'preset', 'random', 'random=', 'sets=',  1010 'demo', 'default', # TODO: remove demo/default  1011 'nopars', 'pars',  1012   1013 # Calculation options  1014 'poly', 'mono', 'cutoff=', 833 1015 'magnetic', 'nonmagnetic', 834  'nopars', 'pars', 835  'rel', 'abs', 836  'linear', 'log', 'q4', 837  'hist', 'nohist', 838  'edit', 'html', 'help', 839  'demo', 'default', 840  ]) 841  VALUE_OPTIONS = [ 842  # Note: random is both a name option and a value option 843  'cutoff', 'random', 'nq', 'res', 'accuracy', 'title', 'data', 1016 'accuracy=',  1017 'neval=', # for timing...  1018   1019 # Precision options  1020 'calc=',  1021 'half', 'fast', 'single', 'double', 'single!', 'double!', 'quad!',  1022 'sasview', # TODO: remove sasview 3.x support  1023   1024 # Output options  1025 'help', 'html', 'edit', 844 1026 ]  1027   1028 NAME_OPTIONS = set(k for k in OPTIONS if not k.endswith('='))  1029 VALUE_OPTIONS = [k[:-1] for k in OPTIONS if k.endswith('=')]  1030  845 1031  846 1032 def columnize(items, indent="", width=79): … …  884 1070  885 1071 # Plug in values given in demo 886  if use_demo: 1072 if use_demo and model_info.demo: 887 1073 pars.update(model_info.demo) 888 1074 return pars … …  902 1088 # not sure about brackets [] {} 903 1089 # maybe one of the following @ ? ^ ! 
, 904  MODEL_SPLIT = ',' 1090 PAR_SPLIT = ',' 905 1091 def parse_opts(argv): 906 1092 # type: (List[str]) -> Dict[str, Any] … …  914 1100 if not arg.startswith('-') and '=' in arg] 915 1101 positional_args = [arg for arg in argv 916  if not arg.startswith('-') and '=' not in arg] 1102 if not arg.startswith('-') and '=' not in arg] 917 1103 models = "\n ".join("%-15s"%v for v in MODELS) 918 1104 if len(positional_args) == 0: … …  921 1107 print(columnize(MODELS, indent="  ")) 922 1108 return None 923  if len(positional_args) > 3: 924  print("expected parameters: model N1 N2") 925 1109  926 1110 invalid = [o[1:] for o in flags … …  931 1115 return None 932 1116  933  name = positional_args[0] 934  n1 = int(positional_args[1]) if len(positional_args) > 1 else 1 935  n2 = int(positional_args[2]) if len(positional_args) > 2 else 1  1117 name = positional_args[-1] 936 1118  937 1119 # pylint: disable=bad-whitespace … …  941 1123 'view' : 'log', 942 1124 'is2d' : False,  1125 'qmin' : None, 943 1126 'qmax' : 0.05, 944 1127 'nq' : 128, 945 1128 'res' : 0.0,  1129 'noise' : 0.0, 946 1130 'accuracy' : 'Low', 947  'cutoff' : 0.0, 1131 'cutoff' : '0.0', 948 1132 'seed' : -1, # default to preset 949 1133 'mono' : True, … …  959 1143 'title' : None, 960 1144 'datafile' : None,  1145 'sets' : 0,  1146 'engine' : 'default',  1147 'evals' : '1', 961 1148 } 962  engines = [] 963 1149 for arg in flags: 964 1150 if arg == '-noplot': opts['plot'] = False … …  975 1161 elif arg == '-zero': opts['zero'] = True 976 1162 elif arg.startswith('-nq='): opts['nq'] = int(arg[4:])  1163 elif arg.startswith('-q='):  1164 opts['qmin'], opts['qmax'] = [float(v) for v in arg[3:].split(':')] 977 1165 elif arg.startswith('-res='): opts['res'] = float(arg[5:])  1166 elif arg.startswith('-noise='): opts['noise'] = float(arg[7:])  1167 elif arg.startswith('-sets='): opts['sets'] = int(arg[6:]) 978 1168 elif arg.startswith('-accuracy='): opts['accuracy'] = arg[10:] 979  elif arg.startswith('-cutoff='):
opts['cutoff'] = float(arg[8:]) 1169 elif arg.startswith('-cutoff='): opts['cutoff'] = arg[8:] 980 1170 elif arg.startswith('-random='): opts['seed'] = int(arg[8:]) 981 1171 elif arg.startswith('-title='): opts['title'] = arg[7:] 982 1172 elif arg.startswith('-data='): opts['datafile'] = arg[6:]  1173 elif arg.startswith('-calc='): opts['engine'] = arg[6:]  1174 elif arg.startswith('-neval='): opts['evals'] = arg[7:] 983 1175 elif arg == '-random': opts['seed'] = np.random.randint(1000000) 984 1176 elif arg == '-preset': opts['seed'] = -1 … …  993 1185 elif arg == '-rel': opts['rel_err'] = True 994 1186 elif arg == '-abs': opts['rel_err'] = False 995  elif arg == '-half': engines.append(arg[1:]) 996  elif arg == '-fast': engines.append(arg[1:]) 997  elif arg == '-single': engines.append(arg[1:]) 998  elif arg == '-double': engines.append(arg[1:]) 999  elif arg == '-single!': engines.append(arg[1:]) 1000  elif arg == '-double!': engines.append(arg[1:]) 1001  elif arg == '-quad!': engines.append(arg[1:]) 1002  elif arg == '-sasview': engines.append(arg[1:]) 1187 elif arg == '-half': opts['engine'] = 'half'  1188 elif arg == '-fast': opts['engine'] = 'fast'  1189 elif arg == '-single': opts['engine'] = 'single'  1190 elif arg == '-double': opts['engine'] = 'double'  1191 elif arg == '-single!': opts['engine'] = 'single!'  1192 elif arg == '-double!': opts['engine'] = 'double!'  1193 elif arg == '-quad!': opts['engine'] = 'quad!'  
1194 elif arg == '-sasview': opts['engine'] = 'sasview' 1003 1195 elif arg == '-edit': opts['explore'] = True 1004 1196 elif arg == '-demo': opts['use_demo'] = True 1005  elif arg == '-default':  1197 elif arg == '-default': opts['use_demo'] = False 1006 1198 elif arg == '-html': opts['html'] = True 1007 1199 elif arg == '-help': opts['html'] = True 1008 1200 # pylint: enable=bad-whitespace 1009 1201  1010  if MODEL_SPLIT in name: 1011  name, name2 = name.split(MODEL_SPLIT, 2)  1202 # Magnetism forces 2D for now  1203 if opts['magnetic']:  1204 opts['is2d'] = True  1205   1206 # Force random if sets is used  1207 if opts['sets'] >= 1 and opts['seed'] < 0:  1208 opts['seed'] = np.random.randint(1000000)  1209 if opts['sets'] == 0:  1210 opts['sets'] = 1  1211   1212 # Create the computational engines  1213 if opts['qmin'] is None:  1214 opts['qmin'] = 0.001*opts['qmax']  1215 if opts['datafile'] is not None:  1216 data = load_data(os.path.expanduser(opts['datafile'])) 1012 1217 else: 1013  name2 = name  1218 data, _ = make_data(opts)  1219   1220 comparison = any(PAR_SPLIT in v for v in values)  1221 if PAR_SPLIT in name:  1222 names = name.split(PAR_SPLIT, 2)  1223 comparison = True  1224 else:  1225 names = [name]*2 1014 1226 try: 1015  model_info = core.load_model_info(name) 1016  model_info2 = core.load_model_info(name2) if name2 != name else model_info  1227 model_info = [core.load_model_info(k) for k in names] 1017 1228 except ImportError as exc: 1018 1229 print(str(exc)) 1019 1230 print("Could not find model; use one of:\n " + models) 1020 1231 return None  1232   1233 if PAR_SPLIT in opts['engine']:  1234 engine_types = opts['engine'].split(PAR_SPLIT, 2)  1235 comparison = True  1236 else:  1237 engine_types = [opts['engine']]*2  1238   1239 if PAR_SPLIT in opts['evals']:  1240 evals = [int(k) for k in opts['evals'].split(PAR_SPLIT, 2)]  1241 comparison = True  1242 else:  1243 evals = [int(opts['evals'])]*2  1244   1245 if PAR_SPLIT in opts['cutoff']:  1246 
cutoff = [float(k) for k in opts['cutoff'].split(PAR_SPLIT, 2)]  1247 comparison = True  1248 else:  1249 cutoff = [float(opts['cutoff'])]*2  1250   1251 base = make_engine(model_info[0], data, engine_types[0], cutoff[0])  1252 if comparison:  1253 comp = make_engine(model_info[1], data, engine_types[1], cutoff[1])  1254 else:  1255 comp = None  1256   1257 # pylint: disable=bad-whitespace  1258 # Remember it all  1259 opts.update({  1260 'data' : data,  1261 'name' : names,  1262 'def' : model_info,  1263 'count' : evals,  1264 'engines' : [base, comp],  1265 'values' : values,  1266 })  1267 # pylint: enable=bad-whitespace  1268   1269 return opts  1270   1271 def parse_pars(opts):  1272 model_info, model_info2 = opts['def'] 1021 1273  1022 1274 # Get demo parameters from model definition, or use default parameters … …  1028 1280 #pars.update(set_pars) # set value before random to control range 1029 1281 if opts['seed'] > -1: 1030  pars = randomize_pars(model_info, pars , seed=opts['seed']) 1282 pars = randomize_pars(model_info, pars) 1031 1283 if model_info != model_info2: 1032  pars2 = randomize_pars(model_info2, pars2 , seed=opts['seed']) 1284 pars2 = randomize_pars(model_info2, pars2) 1033 1285 # Share values for parameters with the same name 1034 1286 for k, v in pars.items(): … …  1039 1291 constrain_pars(model_info, pars) 1040 1292 constrain_pars(model_info2, pars2) 1041  print("Randomize using -random=%i"%opts['seed']) 1042  if opts['mono']: 1043  pars = suppress_pd(pars) 1044  pars2 = suppress_pd(pars2) 1045  if not opts['magnetic']: 1046  pars = suppress_magnetism(pars) 1047  pars2 = suppress_magnetism(pars2)  1293 pars = suppress_pd(pars, opts['mono'])  1294 pars2 = suppress_pd(pars2, opts['mono'])  1295 pars = suppress_magnetism(pars, not opts['magnetic'])  1296 pars2 = suppress_magnetism(pars2, not opts['magnetic']) 1048 1297  1049 1298 # Fill in parameters given on the command line 1050 1299 presets = {} 1051 1300 presets2 = {} 1052  for arg in 
values: 1301 for arg in opts['values']: 1053 1302 k, v = arg.split('=', 1) 1054 1303 if k not in pars and k not in pars2: … …  1057 1306 print("%r invalid; parameters are: %s"%(k, ", ".join(sorted(s)))) 1058 1307 return None 1059  v1, v2 = v.split(MODEL_SPLIT, 2) if MODEL_SPLIT in v else (v,v) 1308 v1, v2 = v.split(PAR_SPLIT, 2) if PAR_SPLIT in v else (v,v) 1060 1309 if v1 and k in pars: 1061 1310 presets[k] = float(v1) if isnumber(v1) else v1 … …  1075 1324 context['np'] = np 1076 1325 context.update(pars) 1077  context.update((k, v) for k,v in presets.items() if isinstance(v, float)) 1326 context.update((k, v) for k, v in presets.items() if isinstance(v, float)) 1078 1327 for k, v in presets.items(): 1079 1328 if not isinstance(v, float) and not k.endswith('_type'): 1080 1329 presets[k] = eval(v, context) 1081 1330 context.update(presets) 1082  context.update((k, v) for k,v in presets2.items() if isinstance(v, float)) 1331 context.update((k, v) for k, v in presets2.items() if isinstance(v, float)) 1083 1332 for k, v in presets2.items(): 1084 1333 if not isinstance(v, float) and not k.endswith('_type'): … …  1090 1339 #import pprint; pprint.pprint(model_info) 1091 1340  1092  same_model = name == name2 and pars == pars2 1093  if len(engines) == 0: 1094  if same_model: 1095  engines.extend(['single', 'double']) 1096  else: 1097  engines.extend(['single', 'single']) 1098  elif len(engines) == 1: 1099  if not same_model: 1100  engines.append(engines[0]) 1101  elif engines[0] == 'double': 1102  engines.append('single') 1103  else: 1104  engines.append('double') 1105  elif len(engines) > 2: 1106  del engines[2:] 1107   1108  use_sasview = any(engine == 'sasview' and count > 0 1109  for engine, count in zip(engines, [n1, n2])) 1110  if use_sasview: 1111  constrain_new_to_old(model_info, pars) 1112  constrain_new_to_old(model_info2, pars2) 1113   1114 1341 if opts['show_pars']: 1115  if not same_model: 1342 if model_info.name != model_info2.name or pars != pars2: 1116
1343 print("==== %s ====="%model_info.name) 1117 1344 print(str(parlist(model_info, pars, opts['is2d']))) … …  1121 1348 print(str(parlist(model_info, pars, opts['is2d']))) 1122 1349  1123  # Create the computational engines 1124  if opts['datafile'] is not None: 1125  data = load_data(os.path.expanduser(opts['datafile'])) 1126  else: 1127  data, _ = make_data(opts) 1128  if n1: 1129  base = make_engine(model_info, data, engines[0], opts['cutoff']) 1130  else: 1131  base = None 1132  if n2: 1133  comp = make_engine(model_info2, data, engines[1], opts['cutoff']) 1134  else: 1135  comp = None 1136   1137  # pylint: disable=bad-whitespace 1138  # Remember it all 1139  opts.update({ 1140  'data' : data, 1141  'name' : [name, name2], 1142  'def' : [model_info, model_info2], 1143  'count' : [n1, n2], 1144  'presets' : [presets, presets2], 1145  'pars' : [pars, pars2], 1146  'engines' : [base, comp], 1147  }) 1148  # pylint: enable=bad-whitespace 1149   1150  return opts  1350 return pars, pars2 1151 1351  1152 1352 def show_docs(opts): … …  1180 1380 model = Explore(opts) 1181 1381 problem = FitProblem(model) 1182  frame = AppFrame(parent=None, title="explore", size=(1000,700)) 1183  if not is_mac: frame.Show()  1382 frame = AppFrame(parent=None, title="explore", size=(1000, 700))  1383 if not is_mac:  1384 frame.Show() 1184 1385 frame.panel.set_model(model=problem) 1185 1386 frame.panel.Layout() … …  1191 1392 if is_mac: frame.Show() 1192 1393 # If running withing an app, start the main loop 1193  if app: app.MainLoop()  1394 if app:  1395 app.MainLoop() 1194 1396  1195 1397 class Explore(object): … …  1206 1408 config_matplotlib() 1207 1409 self.opts = opts  1410 opts['pars'] = list(opts['pars']) 1208 1411 p1, p2 = opts['pars'] 1209 1412 m1, m2 = opts['def'] … …  1226 1429 self.starting_values = dict((k, v.value) for k, v in pars.items()) 1227 1430 self.pd_types = pd_types 1228  self.limits = None 1431 self.limits = np.Inf, -np.Inf 1229 1432  1230 1433 def 
revert_values(self): … …  1283 1486 opts = parse_opts(argv) 1284 1487 if opts is not None:  1488 if opts['seed'] > -1:  1489 print("Randomize using -random=%i"%opts['seed'])  1490 np.random.seed(opts['seed']) 1285 1491 if opts['html']: 1286 1492 show_docs(opts) 1287 1493 elif opts['explore']:  1494 opts['pars'] = parse_pars(opts)  1495 if opts['pars'] is None:  1496 return 1288 1497 explore(opts) 1289 1498 else: -
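The new `_randomize_one` in this changeset draws orientation dispersion from `180*Beta(2.5, 20)` and polydispersity from `Beta(1.5, 7)`. A minimal sketch (not part of the changeset) of why those shape parameters were chosen: the mode of a `Beta(a, b)` distribution is `(a-1)/(a+b-2)`, which puts the most likely orientation spread near the 13 degrees mentioned in the diff's comment.

```python
# Sketch: the shape of the dispersion draws used by _randomize_one above.
# The mode of Beta(a, b), for a, b > 1, is (a - 1)/(a + b - 2).
import numpy as np

def beta_mode(a, b):
    """Most likely value of a Beta(a, b) distribution (for a, b > 1)."""
    return (a - 1.0) / (a + b - 2.0)

# Orientation dispersion: 180 * Beta(2.5, 20) peaks near 13 degrees.
orientation_peak = 180.0 * beta_mode(2.5, 20.0)   # ~13.2

# Polydispersity: Beta(1.5, 7) peaks below 10% and can never exceed 100%.
pd_peak = beta_mode(1.5, 7.0)                     # ~0.077

# A single random draw, as the changeset does for an orientation parameter.
one_draw = 180.0 * np.random.beta(2.5, 20.0)
```

Note that the distribution mean (about 18% for `Beta(1.5, 7)`) sits above the mode, which is presumably why the diff's comment describes the polydispersity as peaking "around 15%".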
sasmodels/conversion_table.py
r0c2da4b r505d0ad  11 11 account for that. 12 12  13  Usage: 14  <old_Sasview_version> : { 15  <new_model_name> : [ 16  <old_model_name> , 17  { 18  <new_param_name_1> : <old_param_name_1>, 19  ... 20  <new_param_name_n> : <old_param_name_n> 21  } 22  ] 23  }  13 Usage::  14   15 <old_Sasview_version> : {  16 <new_model_name> : [  17 <old_model_name> ,  18 {  19 <new_param_name_1> : <old_param_name_1>,  20 ...  21 <new_param_name_n> : <old_param_name_n>  22 }  23 ]  24 } 24 25  25 26 Any future parameter and model name changes can and should be given in this -
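The `Usage::` schema above maps an old SasView version to per-model renames. A minimal standalone sketch of one entry and a lookup helper (the `core_shell_sphere`/`CoreShellModel` pairing and the `old_name` helper are illustrative, not taken from the real table):

```python
# Hypothetical conversion-table entry following the schema documented above.
# Keys: old SasView version -> {new model name: [old model name, {new par: old par}]}
CONVERSION_TABLE = {
    "3.1.2": {
        "core_shell_sphere": [
            "CoreShellModel",               # old model name
            {
                "sld_core": "core_sld",     # new parameter name -> old name
                "sld_shell": "shell_sld",
                "sld_solvent": "solvent_sld",
            },
        ],
    },
}

def old_name(version, new_model):
    # Return the pre-4.0 model name if it changed, else the name unchanged.
    entry = CONVERSION_TABLE.get(version, {}).get(new_model)
    return entry[0] if entry else new_model
```
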
sasmodels/core.py
r2e66ef5 ra85a569  10 10  11 11 import os  12 import re 12 13 from os.path import basename, dirname, join as joinpath 13 14 from glob import glob … …  21 22 from . import kernelpy 22 23 from . import kerneldll  24 from . import custom 23 25  24 26 if os.environ.get("SAS_OPENCL", "").lower() == "none": … …  30 32 except Exception: 31 33 HAVE_OPENCL = False  34   35 CUSTOM_MODEL_PATH = os.environ.get('SAS_MODELPATH', "")  36 if CUSTOM_MODEL_PATH == "":  37 path = joinpath(os.path.expanduser("~"), ".sasmodels", "custom_models")  38 if not os.path.isdir(path):  39 os.makedirs(path)  40 CUSTOM_MODEL_PATH = path 32 41  33 42 try: … …  125 134 dtype=dtype, platform=platform) 126 135  127   128  def load_model_info(model_name):  136 def load_model_info(model_string): 129 137 # type: (str) -> modelinfo.ModelInfo 130 138 """ 131 139 Load a model definition given the model name. 132 140  133  *model_name* is the name of the model, or perhaps a model expression 134  such as sphere*hardsphere or sphere+cylinder.  141 *model_string* is the name of the model, or perhaps a model expression  142 such as sphere*cylinder or sphere+cylinder. Use '@' for a structure  143 factor product, e.g. sphere@hardsphere. Custom models can be specified by  144 prefixing the model name with 'custom.', e.g. 'custom.MyModel+sphere'. 135 145  136 146 This returns a handle to the module defining the model. This can be 137 147 used with functions in generate to build the docs or extract model info. 
138 148 """ 139  parts = model_name.split('+') 140  if len(parts) > 1: 141  model_info_list = [load_model_info(p) for p in parts] 142  return mixture.make_mixture_info(model_info_list) 143   144  parts = model_name.split('*') 145  if len(parts) > 1: 146  if len(parts) > 2: 147  raise ValueError("use P*S to apply structure factor S to model P") 148  P_info, Q_info = [load_model_info(p) for p in parts]  149 if '@' in model_string:  150 parts = model_string.split('@')  151 if len(parts) != 2:  152 raise ValueError("Use P@S to apply a structure factor S to model P")  153 P_info, Q_info = [load_model_info(part) for part in parts] 149 154 return product.make_product_info(P_info, Q_info) 150 155  151  kernel_module = generate.load_kernel_module(model_name) 152  return modelinfo.make_model_info(kernel_module)  156 product_parts = []  157 addition_parts = []  158   159 addition_parts_names = model_string.split('+')  160 if len(addition_parts_names) >= 2:  161 addition_parts = [load_model_info(part) for part in addition_parts_names]  162 elif len(addition_parts_names) == 1:  163 product_parts_names = model_string.split('*')  164 if len(product_parts_names) >= 2:  165 product_parts = [load_model_info(part) for part in product_parts_names]  166 elif len(product_parts_names) == 1:  167 if "custom." 
in product_parts_names[0]:  168 # Extract ModelName from "custom.ModelName"  169 pattern = "custom.([A-Za-z0-9_-]+)"  170 result = re.match(pattern, product_parts_names[0])  171 if result is None:  172 raise ValueError("Model name in invalid format: " + product_parts_names[0])  173 model_name = result.group(1)  174 # Use ModelName to find the path to the custom model file  175 model_path = joinpath(CUSTOM_MODEL_PATH, model_name + ".py")  176 if not os.path.isfile(model_path):  177 raise ValueError("The model file {} doesn't exist".format(model_path))  178 kernel_module = custom.load_custom_kernel_module(model_path)  179 return modelinfo.make_model_info(kernel_module)  180 # Model is a core model  181 kernel_module = generate.load_kernel_module(product_parts_names[0])  182 return modelinfo.make_model_info(kernel_module)  183   184 model = None  185 if len(product_parts) > 1:  186 model = mixture.make_mixture_info(product_parts, operation='*')  187 if len(addition_parts) > 1:  188 if model is not None:  189 addition_parts.append(model)  190 model = mixture.make_mixture_info(addition_parts, operation='+')  191 return model 153 192  154 193  … …  269 308  270 309 # Convert dtype string to numpy dtype. 271  if dtype is None : 310 if dtype is None or dtype == "default": 272 311 numpy_dtype = (generate.F32 if platform == "ocl" and model_info.single 273 312 else generate.F64) -
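The new `load_model_info` docstring defines a small expression grammar: `@` for structure-factor products, `+` for sums, `*` for products, and a `custom.` prefix for user models. A standalone sketch of just the classification order (the `classify` helper returns tags instead of building `ModelInfo` objects, so this is an assumption-laden sketch, not the sasmodels implementation):

```python
import re

def classify(model_string):
    # Mirror the parsing order in load_model_info: '@' first, then '+',
    # then '*', then the 'custom.' prefix, else a core model name.
    if '@' in model_string:
        parts = model_string.split('@')
        if len(parts) != 2:
            raise ValueError("Use P@S to apply a structure factor S to model P")
        return ('product', parts)
    if '+' in model_string:
        return ('mixture+', model_string.split('+'))
    if '*' in model_string:
        return ('mixture*', model_string.split('*'))
    match = re.match(r"custom\.([A-Za-z0-9_-]+)", model_string)
    if match:
        return ('custom', match.group(1))
    return ('core', model_string)
```

Note that because `+` is checked before `*`, an expression like `A*B+C*D` is first split into the sum `['A*B', 'C*D']`, matching the recursion in the code above.
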
sasmodels/custom/__init__.py
r3c852d4 r0f48f1e  31 31 if fullname in sys.modules: 32 32 del sys.modules[fullname]  33 if path.endswith(".py") and os.path.exists(path) and os.path.exists(path+"c"):  34 # remove automatic pyc file before loading a py file  35 os.unlink(path+"c") 33 36 module = imp.load_source(fullname, os.path.expanduser(path)) 34  #os.unlink(path+"c") # remove the automatic pyc file 35 37 return module 36 38  -

sasmodels/data.py
r630156b rba7302a  44 44 Data = Union["Data1D", "Data2D", "SesansData"] 45 45  46  def load_data(filename ): 46 def load_data(filename, index=0): 47 47 # type: (str) -> Data 48 48 """ … …  55 55 filename, indexstr = filename[:-1].split('[') 56 56 index = int(indexstr) 57  else: 58  index = None 59 57 datasets = loader.load(filename) 60  if datasets is None: 58 if not datasets: # None or [] 61 59 raise IOError("Data %r could not be loaded" % filename) 62 60 if not isinstance(datasets, list): 63 61 datasets = [datasets] 64  if index is None and len(datasets) > 1: 65  raise ValueError("Need to specify filename[index] for multipart data") 66  data = datasets[index if index is not None else 0] 67  if hasattr(data, 'x'): 68  data.qmin, data.qmax = data.x.min(), data.x.max() 69  data.mask = (np.isnan(data.y) if data.y is not None 70  else np.zeros_like(data.x, dtype='bool')) 71  elif hasattr(data, 'qx_data'): 72  data.mask = ~data.mask 73  return data  62 for data in datasets:  63 if hasattr(data, 'x'):  64 data.qmin, data.qmax = data.x.min(), data.x.max()  65 data.mask = (np.isnan(data.y) if data.y is not None  66 else np.zeros_like(data.x, dtype='bool'))  67 elif hasattr(data, 'qx_data'):  68 data.mask = ~data.mask  69 return datasets[index] if index != 'all' else datasets 74 70  75 71  … …  438 434  439 435 scale = data.x**4 if view == 'q4' else 1.0  436 xscale = yscale = 'linear' if view == 'linear' else 'log' 440 437  441 438 if use_data or use_theory: … …  470 467 plt.ylim(*limits) 471 468  472  plt.xscale('linear' if not some_present or non_positive_x 473  else view if view is not None 474  else 'log') 475  plt.yscale('linear' 476  if view == 'q4' or not some_present or not all_positive 477  else view if view is not None 478  else 'log')  469   470 xscale = ('linear' if not some_present or non_positive_x  471 else view if view is not None  472 else 'log')  473 yscale = ('linear'  474 if view == 'q4' or not some_present or not all_positive  475 else view if view is
not None  476  else 'log')  477 plt.xscale(xscale) 479 478 plt.xlabel("$q$/A$^{-1}$")  479 plt.yscale(yscale) 480 480 plt.ylabel('$I(q)$') 481 481 title = ("data and model" if use_theory and use_data … …  505 505 plt.xlabel("$q$/A$^{-1}$") 506 506 plt.ylabel('residuals') 507  plt.xscale('linear') 508 507 plt.title('(model - Iq)/dIq')  508 plt.xscale(xscale)  509 plt.yscale('linear') 509 510  510 511  -
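The revised `load_data` keeps the `filename[index]` convention for multipart files while defaulting to the first dataset. A standalone sketch of just the suffix parsing (the `split_index` helper is hypothetical; `load_data` itself does this inline):

```python
def split_index(filename, default=0):
    # "multi.ses[2]" selects the third dataset in a multipart file;
    # a plain filename falls back to the default index, as load_data
    # now does with index=0 (or index='all' to return every dataset).
    if filename.endswith(']'):
        base, indexstr = filename[:-1].split('[')
        return base, int(indexstr)
    return filename, default
```
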
sasmodels/direct_model.py
ra769b54 rd1ff3a5  293 293 self.Iq = y 294 294 if self.data_type in ('Iq', 'Iq-oriented'):  295 if self._data.y is None:  296 self._data.y = np.empty(len(self._data.x), 'd')  297 if self._data.dy is None:  298 self._data.dy = np.empty(len(self._data.x), 'd') 295 299 self._data.dy[self.index] = dy 296 300 self._data.y[self.index] = y 297 301 elif self.data_type == 'Iqxy':  302 if self._data.data is None:  303 self._data.data = np.empty_like(self._data.qx_data, 'd')  304 if self._data.err_data is None:  305 self._data.err_data = np.empty_like(self._data.qx_data, 'd') 298 306 self._data.data[self.index] = y  307 self._data.err_data[self.index] = dy 299 308 elif self.data_type == 'sesans':  309 if self._data.y is None:  310 self._data.y = np.empty(len(self._data.x), 'd') 300 311 self._data.y[self.index] = y 301 312 else: … …  315 326 # TODO: extend plotting of calculate Iq to other measurement types 316 327 # TODO: refactor so we don't store the result in the model 317  self.Iq_calc = None 328 self.Iq_calc = Iq_calc 318 329 if self.data_type == 'sesans': 319 330 Iq_mono = (call_kernel(self._kernel_mono, pars, mono=True) -
sasmodels/generate.py
rbb4b509 r6db17bd  211 211 # Conversion from units defined in the parameter table for each model 212 212 # to units displayed in the sphinx documentation.  213 # This section associates the unit with the macro to use to produce the LaTex  214 # code. The macro itself needs to be defined in sasmodels/doc/rst_prolog.  215 #  216 # NOTE: there is an RST_PROLOG at the end of this file which is NOT  217 # used for the bundled documentation. Still as long as we are defining the macros  218 # in two places any new addition should define the macro in both places. 213 219 RST_UNITS = { 214 220 "Ang": "|Ang|", … …  216 222 "1/Ang^2": "|Ang^-2|", 217 223 "Ang^3": "|Ang^3|",  224 "Ang^2": "|Ang^2|", 218 225 "1e15/cm^3": "|1e15cm^3|", 219 226 "Ang^3/mol": "|Ang^3|/mol", … …  363 370 """ 364 371 # Note: need 0xffffffff&val to force an unsigned 32-bit number  372 try:  373 source = source.encode('utf8')  374 except AttributeError: # bytes has no encode attribute in python 3  375 pass 365 376 return "%08X"%(0xffffffff&crc32(source)) 366 377  … …  891 902 .. |cm^-3| replace:: cm\ :sup:`-3` 892 903 .. |sr^-1| replace:: sr\ :sup:`-1` 893  .. |P0| replace:: P\ :sub:`0`\ 894   895  .. |equiv| unicode:: U+2261 896  .. |noteql| unicode:: U+2260 897  .. |TM| unicode:: U+2122 898 904  899 905 .. |cdot| unicode:: U+00B7 -
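The crc32 change above handles two portability issues: `zlib.crc32` can return a signed value, and Python 3 requires bytes input. A self-contained sketch of the tagging pattern (`tag_source` is a hypothetical name for the surrounding function):

```python
from zlib import crc32

def tag_source(source):
    # Encode str to bytes first: Python 3's crc32 only accepts bytes,
    # and bytes has no .encode, so the AttributeError path is a no-op.
    try:
        source = source.encode('utf8')
    except AttributeError:  # already bytes
        pass
    # Mask with 0xffffffff to force an unsigned 32-bit number, then
    # format as 8 uppercase hex digits.
    return "%08X" % (0xffffffff & crc32(source))
```
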
sasmodels/kernelpy.py
rdbfd471 r3b32f8d  9 9 from __future__ import division, print_function 10 10   11 import logging  12  11 13 import numpy as np # type: ignore 12 14 from numpy import pi, sin, cos #type: ignore … …  18 20 try: 19 21 from typing import Union, Callable 20  except : 22 except ImportError: 21 23 pass 22 24 else: … …  31 33 _create_default_functions(model_info) 32 34 self.info = model_info  35 self.dtype = np.dtype('d')  36 logging.info("load python model " + self.info.name) 33 37  34 38 def make_kernel(self, q_vectors): … …  236 240 # INVALID expression like the C models, but that is too expensive. 237 241 Iq = np.asarray(form(), 'd') 238  if np.isnan(Iq).any(): continue  242 if np.isnan(Iq).any():  243 continue 239 244  240 245 # update value and norm … …  300 305 default_Iqxy.vectorized = True 301 306 model_info.Iqxy = default_Iqxy 302   -
sasmodels/list_pars.py
r3330bb4 r72be531  12 12  13 13 import sys  14 import argparse 14 15  15 16 from .core import load_model_info, list_models 16 17 from .compare import columnize 17 18  18  def find_pars( ): 19 def find_pars(type=None): 19 20 """ 20 21 Find all parameters in all models. … …  26 27 model_info = load_model_info(name) 27 28 for p in model_info.parameters.kernel_parameters: 28  partable.setdefault(p.name, []) 29  partable[p.name].append(name)  29 if type is None or p.type == type:  30 partable.setdefault(p.name, [])  31 partable[p.name].append(name) 30 32 return partable 31 33  32  def list_pars(names_only=True ): 34 def list_pars(names_only=True, type=None): 33 35 """ 34 36 Print all parameters in all models. … …  37 39 occurs in. 38 40 """ 39  partable = find_pars( ) 41 partable = find_pars(type) 40 42 if names_only: 41 43 print(columnize(list(sorted(partable.keys())))) … …  48 50 Program to list the parameters used across all models. 49 51 """ 50  if len(sys.argv) == 2 and sys.argv[1] == '-v': 51  verbose = True 52  elif len(sys.argv) == 1: 53  verbose = False 54  else: 55  print(__doc__) 56  sys.exit(1) 57  list_pars(names_only=not verbose)  52 parser = argparse.ArgumentParser(  53 description="Find all parameters in all models",  54 )  55 parser.add_argument(  56 '-v', '--verbose',  57 action='store_true',  58 help="list models which use this argument")  59 parser.add_argument(  60 'type', default="any", nargs='?',  61 metavar="volume|orientation|sld|none|any",  62 choices=['volume', 'orientation', 'sld', None, 'any'],  63 type=lambda v: None if v == 'any' else '' if v == 'none' else v,  64 help="only list arguments of the given type")  65 args = parser.parse_args()  66   67 list_pars(names_only=not args.verbose, type=args.type) 58 68  59 69 if __name__ == "__main__": -
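The argparse `type` converter above maps the user-facing words to internal sentinels: `'any'` becomes `None` (match every parameter type) and `'none'` becomes `''` (parameters with no declared type), while real type names pass through. The same lambda, written out as a sketch for clarity:

```python
def convert_type(v):
    # Equivalent to: lambda v: None if v == 'any' else '' if v == 'none' else v
    if v == 'any':
        return None     # sentinel: list parameters of every type
    if v == 'none':
        return ''       # sentinel: parameters with an empty type string
    return v            # 'volume', 'orientation', 'sld', ... pass through
```
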
sasmodels/mixture.py
r6dc78e4 r0edb6c1  25 25 pass 26 26  27  def make_mixture_info(parts ): 27 def make_mixture_info(parts, operation='+'): 28 28 # type: (List[ModelInfo]) -> ModelInfo 29 29 """ 30 30 Create info block for mixture model. 31 31 """ 32  flatten = [] 33  for part in parts: 34  if part.composition and part.composition[0] == 'mixture': 35  flatten.extend(part.composition[1]) 36  else: 37  flatten.append(part) 38  parts = flatten 39   40 32 # Build new parameter list 41 33 combined_pars = [] 42  demo = {} 43  for k, part in enumerate(parts):  34   35 model_num = 0  36 all_parts = copy(parts)  37 is_flat = False  38 while not is_flat:  39 is_flat = True  40 for part in all_parts:  41 if part.composition and part.composition[0] == 'mixture' and \  42 len(part.composition[1]) > 1:  43 all_parts += part.composition[1]  44 all_parts.remove(part)  45 is_flat = False  46   47 # When creating a mixture model that is a sum of product models (ie (1*2)+(3*4))  48 # the parameters for models 1 & 2 will be prefixed with A & B respectively,  49 # but so will the parameters for models 3 & 4. We need to rename models 3 & 4  50 # so that they are prefixed with C & D to avoid overlap of parameter names. 
51 used_prefixes = []  52 for part in parts:  53 i = 0  54 if part.composition and part.composition[0] == 'mixture':  55 npars_list = [info.parameters.npars for info in part.composition[1]]  56 for npars in npars_list:  57 # List of params of one of the constituent models of part  58 submodel_pars = part.parameters.kernel_parameters[i:i+npars]  59 # Prefix of the constituent model  60 prefix = submodel_pars[0].name[0]  61 if prefix not in used_prefixes: # Haven't seen this prefix so far  62 used_prefixes.append(prefix)  63 i += npars  64 continue  65 while prefix in used_prefixes:  66 # This prefix has been already used, so change it to the  67 # next letter that hasn't been used  68 prefix = chr(ord(prefix) + 1)  69 used_prefixes.append(prefix)  70 prefix += "_"  71 # Update the parameters of this constituent model to use the  72 # new prefix  73 for par in submodel_pars:  74 par.id = prefix + par.id[2:]  75 par.name = prefix + par.name[2:]  76 if par.length_control is not None:  77 par.length_control = prefix + par.length_control[2:]  78 i += npars  79   80 for part in parts: 44 81 # Parameter prefix per model, A_, B_, ... 45 82 # Note that prefix must also be applied to id and length_control 46 83 # to support vector parameters 47  prefix = chr(ord('A')+k) + '_' 48  scale = Parameter(prefix+'scale', default=1.0, 49  description="model intensity for " + part.name) 50  combined_pars.append(scale)  84 prefix = ''  85 if not part.composition:  86 # Model isn't a composition model, so it's parameters don't have a  87 # a prefix. Add the next available prefix  88 prefix = chr(ord('A')+len(used_prefixes))  89 used_prefixes.append(prefix)  90 prefix += '_'  91   92 if operation == '+':  93 # If model is a sum model, each constituent model gets its own scale parameter  94 scale_prefix = prefix  95 if prefix == '' and part.operation == '*':  96 # `part` is a composition product model. 
Find the prefixes of  97 # its parameters to form a new prefix for the scale, eg:  98 # a model with A*B*C will have ABC_scale  99 sub_prefixes = []  100 for param in part.parameters.kernel_parameters:  101 # Prefix of constituent model  102 sub_prefix = param.id.split('_')[0]  103 if sub_prefix not in sub_prefixes:  104 sub_prefixes.append(sub_prefix)  105 # Concatenate sub_prefixes to form prefix for the scale  106 scale_prefix = ''.join(sub_prefixes) + '_'  107 scale = Parameter(scale_prefix + 'scale', default=1.0,  108 description="model intensity for " + part.name)  109 combined_pars.append(scale) 51 110 for p in part.parameters.kernel_parameters: 52 111 p = copy(p) … …  56 115 p.length_control = prefix + p.length_control 57 116 combined_pars.append(p) 58  demo.update((prefix+k, v) for k, v in part.demo.items() 59  if k != "background") 60  #print("pars",combined_pars) 61 117 parameters = ParameterTable(combined_pars) 62 118 parameters.max_pd = sum(part.parameters.max_pd for part in parts) 63 119   120 def random():  121 combined_pars = {}  122 for k, part in enumerate(parts):  123 prefix = chr(ord('A')+k) + '_'  124 pars = part.random()  125 combined_pars.update((prefix+k, v) for k, v in pars.items())  126 return combined_pars  127  64 128 model_info = ModelInfo() 65  model_info.id = '+'.join(part.id for part in parts) 66  model_info.name = ' + '.join(part.name for part in parts)  129 model_info.id = operation.join(part.id for part in parts)  130 model_info.operation = operation  131 model_info.name = '(' + operation.join(part.name for part in parts) + ')' 67 132 model_info.filename = None 68 133 model_info.title = 'Mixture model with ' + model_info.name … …  71 136 model_info.category = "custom" 72 137 model_info.parameters = parameters  138 model_info.random = random 73 139 #model_info.single = any(part['single'] for part in parts) 74 140 model_info.structure_factor = False … …  79 145 # Remember the component info blocks so we can build the model 
model_info.composition = ('mixture', parts) 81  model_info.demo = demo 82 147 return model_info 83 148  … …  88 153 self.info = model_info 89 154 self.parts = parts  155 self.dtype = parts[0].dtype 90 156  91 157 def make_kernel(self, q_vectors): … …  116 182 self.kernels = kernels 117 183 self.dtype = self.kernels[0].dtype  184 self.operation = model_info.operation 118 185 self.results = [] # type: List[np.ndarray] 119 186  … …  124 191 # remember the parts for plotting later 125 192 self.results = [] # type: List[np.ndarray] 126  offset = 2 # skip scale & background 127 193 parts = MixtureParts(self.info, self.kernels, call_details, values) 128 194 for kernel, kernel_details, kernel_values in parts: 129 195 #print("calling kernel", kernel.info.name) 130 196 result = kernel(kernel_details, kernel_values, cutoff, magnetic) 131  #print(kernel.info.name, result) 132  total += result  197 result = np.array(result).astype(kernel.dtype)  198 # print(kernel.info.name, result)  199 if self.operation == '+':  200 total += result  201 elif self.operation == '*':  202 if np.all(total) == 0.0:  203 total = result  204 else:  205 total *= result 133 206 self.results.append(result) 134 207  … …  171 244  172 245 self.part_num += 1 173  self.par_index += info.parameters.npars + 1  246 self.par_index += info.parameters.npars  247 if self.model_info.operation == '+':  248 self.par_index += 1 # Account for each constituent model's scale param 174 249 self.mag_index += 3 * len(info.parameters.magnetism_index) 175 250  … …  182 257 # which includes the initial scale and background parameters. 183 258 # We want the index into the weight length/offset for each parameter. 184  # Exclude the initial scale and background, so subtract two, but each 185  # component has its own scale factor which we need to skip when 186  # constructing the details for the kernel, so add one, giving a 187  # net subtract one. 
188  index = slice(par_index - 1, par_index - 1 + info.parameters.npars)  259 # Exclude the initial scale and background, so subtract two. If we're  260 # building an addition model, each component has its own scale factor  261 # which we need to skip when constructing the details for the kernel, so  262 # add one, giving a net subtract one.  263 diff = 1 if self.model_info.operation == '+' else 2  264 index = slice(par_index - diff, par_index - diff + info.parameters.npars) 189 265 length = full.length[index] 190 266 offset = full.offset[index] … …  196 272 def _part_values(self, info, par_index, mag_index): 197 273 # type: (ModelInfo, int, int) -> np.ndarray 198  #print(info.name, par_index, self.values[par_index:par_index + info.parameters.npars + 1]) 199  scale = self.values[par_index] 200  pars = self.values[par_index + 1:par_index + info.parameters.npars + 1]  274 # Set each constituent model's scale to 1 if this is a multiplication model  275 scale = self.values[par_index] if self.model_info.operation == '+' else 1.0  276 diff = 1 if self.model_info.operation == '+' else 0 # Skip scale if addition model  277 pars = self.values[par_index + diff:par_index + info.parameters.npars + diff] 201 278 nmagnetic = len(info.parameters.magnetism_index) 202 279 if nmagnetic: -
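The prefix-renaming comments above describe how constituent models of a sum of products (e.g. `(1*2)+(3*4)`) each get a unique single-letter prefix, bumping an already-used letter to the next free one. A standalone sketch of just that bumping rule (`assign_prefixes` is a hypothetical helper; the real code mutates parameter ids in place):

```python
def assign_prefixes(initial_prefixes):
    # Given the first-letter prefix each submodel arrives with, return
    # unique prefixes: a letter already claimed by an earlier submodel
    # is advanced (A -> B -> C ...) until an unused letter is found.
    used = []
    result = []
    for prefix in initial_prefixes:
        while prefix in used:
            prefix = chr(ord(prefix) + 1)
        used.append(prefix)
        result.append(prefix + '_')
    return result
```

For the `(1*2)+(3*4)` case both products arrive prefixed A and B, and the second pair is pushed to C and D, matching the comment in the diff.
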
sasmodels/model_test.py
rbedb9b0 r65314f7  201 201 ({}, 'VR', None), 202 202 ] 203   204  tests = smoke_tests + self.info.tests  203 tests = smoke_tests  204 if self.info.tests is not None:  205 tests += self.info.tests 205 206 try: 206 207 model = build_model(self.info, dtype=self.dtype, … …  371 372 stream.writeln(traceback.format_exc()) 372 373 return 373   374 374 # Run the test suite 375 375 suite.run(result) -
sasmodels/modelinfo.py
r724257c ra85a569  467 467 if p.polydisperse and p.type not in ('orientation', 'magnetic')) 468 468 self.pd_2d = set(p.name for p in self.call_parameters if p.polydisperse)  469   470 def __getitem__(self, key):  471 # Find the parameter definition  472 for par in self.call_parameters:  473 if par.name == key:  474 break  475 else:  476 raise KeyError("unknown parameter %r"%key)  477 return par 469 478  470 479 def _set_vector_lengths(self): … …  718 727 models when the model is first called, not when the model is loaded. 719 728 """  729 if hasattr(kernel_module, "model_info"):  730 # Custom sum/multi models  731 return kernel_module.model_info 720 732 info = ModelInfo() 721 733 #print("make parameter table", kernel_module.parameters) … …  754 766 info.opencl = getattr(kernel_module, 'opencl', not callable(info.Iq)) 755 767 info.single = getattr(kernel_module, 'single', not callable(info.Iq))  768 info.random = getattr(kernel_module, 'random', None) 756 769  757 770 # multiplicity info -
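The new `__getitem__` above uses Python's `for`/`else` idiom: the `else` branch runs only when the loop finishes without a `break`, which maps naturally onto "parameter not found". A minimal self-contained sketch of the same pattern (`Par` and `lookup` are hypothetical stand-ins for the parameter objects and method):

```python
class Par(object):
    def __init__(self, name):
        self.name = name

def lookup(parameters, key):
    # Find the parameter definition by name; the for/else raises
    # KeyError only if no parameter matched (no break occurred).
    for par in parameters:
        if par.name == key:
            break
    else:
        raise KeyError("unknown parameter %r" % key)
    return par
```
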
sasmodels/models/adsorbed_layer.py
rb0c4271 r8f04da4  94 94 Iq.vectorized = True # Iq accepts an array of q values 95 95   96 def random():  97 # only care about the value of second_moment:  98 # curve = scale * e**(-second_moment^2 q^2)/q^2  99 # scale = 6 pi/100 (contrast/density*absorbed_amount)^2 * Vf/radius  100 # the remaining parameters can be randomly generated from zero to  101 # twice the default value as done by default in compare.py  102 import numpy as np  103 pars = dict(  104 scale=1,  105 second_moment=10**np.random.uniform(1, 3),  106 )  107 return pars  108  96 109 # unit test values taken from SasView 3.1.2 97 110 tests = [ -
sasmodels/models/barbell.py
r9802ab3 r31df0c9  115 115 source = ["lib/polevl.c", "lib/sas_J1.c", "lib/gauss76.c", "barbell.c"] 116 116   117 def random():  118 import numpy as np  119 # TODO: increase volume range once problem with bell radius is fixed  120 # The issue is that bell radii of more than about 200 fail at high q  121 V = 10**np.random.uniform(7, 9)  122 bar_volume = 10**np.random.uniform(-4, -1)*V  123 bell_volume = V - bar_volume  124 bell_radius = (bell_volume/6)**0.3333 # approximate  125 min_bar = bar_volume/np.pi/bell_radius**2  126 bar_length = 10**np.random.uniform(0, 3)*min_bar  127 bar_radius = np.sqrt(bar_volume/bar_length/np.pi)  128 if bar_radius > bell_radius:  129 bell_radius, bar_radius = bar_radius, bell_radius  130 pars = dict(  131 #background=0,  132 radius_bell=bell_radius,  133 radius=bar_radius,  134 length=bar_length,  135 )  136 return pars  137  117 138 # parameters for demo 118 139 demo = dict(scale=1, background=0, -
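The barbell `random()` above partitions a total volume between bar and bell, approximates the bell radius from the bell volume, then back-solves the bar radius so that the cylindrical volume is exact. A sketch of that bookkeeping with the randomness factored out (`bar_dimensions` and its arguments are hypothetical names for illustration):

```python
import numpy as np

def bar_dimensions(total_volume, bar_fraction, length_factor):
    # Split total volume into bar and bell parts.
    bar_volume = bar_fraction * total_volume
    bell_volume = total_volume - bar_volume
    # Approximate bell radius from its volume, as in random().
    bell_radius = (bell_volume / 6) ** 0.3333
    # Shortest bar consistent with the bell cross-section, scaled up.
    min_bar = bar_volume / np.pi / bell_radius ** 2
    bar_length = length_factor * min_bar
    # Back-solve the radius so pi * r^2 * length == bar_volume exactly.
    bar_radius = np.sqrt(bar_volume / bar_length / np.pi)
    return bar_radius, bar_length, bell_radius
```
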
sasmodels/models/bcc_paracrystal.py
r69e1afc r8f04da4  81 81 .. figure:: img/parallelepiped_angle_definition.png 82 82  83  Orientation of the crystal with respect to the scattering plane, when  83 Orientation of the crystal with respect to the scattering plane, when 84 84 $\theta = \phi = 0$ the $c$ axis is along the beam direction (the $z$ axis). 85 85  … …  131 131 source = ["lib/sas_3j1x_x.c", "lib/gauss150.c", "lib/sphere_form.c", "bcc_paracrystal.c"] 132 132  133  # parameters for demo 134  demo = dict( 135  scale=1, background=0, 136  dnn=220, d_factor=0.06, sld=4, sld_solvent=1, 137  radius=40, 138  theta=60, phi=60, psi=60, 139  radius_pd=.2, radius_pd_n=2, 140  theta_pd=15, theta_pd_n=0, 141  phi_pd=15, phi_pd_n=0, 142  psi_pd=15, psi_pd_n=0,  133 def random():  134 import numpy as np  135 # Define lattice spacing as a multiple of the particle radius  136 # using the formula a = 4 r/sqrt(3). Systems which are ordered  137 # are probably mostly filled, so use a distribution which goes from  138 # zero to one, but leaving 90% of them within 80% of the  139 # maximum bcc packing. Lattice distortion values are empirically  140 # useful between 0.01 and 0.7. Use an exponential distribution  141 # in this range 'cuz its easy.  142 radius = 10**np.random.uniform(1.3, 4)  143 d_factor = 10**np.random.uniform(-2, -0.7) # sigma_d in 0.01-0.7  144 dnn_fraction = np.random.beta(a=10, b=1)  145 dnn = radius*4/np.sqrt(3)/dnn_fraction  146 pars = dict(  147 #sld=1, sld_solvent=0, scale=1, background=1e-32,  148 dnn=dnn,  149 d_factor=d_factor,  150 radius=radius, 143 151 )  152 return pars  153  144 154 # april 6 2017, rkh add unit tests, NOT compared with any other calc method, assume correct! 145 155 # add 2d test later 146  q = 4.*pi/220. 156 q = 4.*pi/220. 
147 157 tests = [ 148  [{ }, 149  [0.001, q, 0.215268], [1.46601394721, 2.85851284174, 0.00866710287078]], 150  [{'theta':20.0,'phi':30,'psi':40.0},(-0.017,0.035),2082.20264399 ], 151  [{'theta':20.0,'phi':30,'psi':40.0},(-0.081,0.011),0.436323144781 ]  158 [{}, [0.001, q, 0.215268], [1.46601394721, 2.85851284174, 0.00866710287078]],  159 [{'theta': 20.0, 'phi': 30, 'psi': 40.0}, (-0.017, 0.035), 2082.20264399],  160 [{'theta': 20.0, 'phi': 30, 'psi': 40.0}, (-0.081, 0.011), 0.436323144781], 152 161 ] -
sasmodels/models/be_polyelectrolyte.py
r3330bb4 r8f04da4  17 17 r_{0}^2 = \frac{1}{\alpha \sqrt{C_a} \left( b/\sqrt{48\pi L_b}\right)} 18 18  19  where  19 where 20 20  21 21 $K$ is the contrast factor for the polymer which is defined differently than in … …  28 28  29 29 a = b_p - (v_p/v_s) b_s 30    30  31 31 where $b_p$ and $b_s$ are sum of the scattering lengths of the atoms 32 32 constituting the monomer of the polymer and the sum of the scattering lengths … …  35 35  36 36 $L_b$ is the Bjerrum length(|Ang|) - **Note:** This parameter needs to be 37  kept constant for a given solvent and temperature!  37 kept constant for a given solvent and temperature! 38 38  39 39 $h$ is the virial parameter (|Ang^3|/mol) - **Note:** See [#Borue]_ for the … …  67 67 * **Author:** NIST IGOR/DANSE **Date:** pre 2010 68 68 * **Last Modified by:** Paul Kienzle **Date:** July 24, 2016 69  * **Last Reviewed by:** Paul Butler and Richard Heenan **Date:**  69 * **Last Reviewed by:** Paul Butler and Richard Heenan **Date:** 70 70 October 07, 2016 71 71 """ … …  139 139 Iq.vectorized = True # Iq accepts an array of q values 140 140   141 def random():  142 import numpy as np  143 # TODO: review random be_polyelectrolyte model generation  144 pars = dict(  145 scale=10000, #background=0,  146 #polymer_concentration=0.7,  147 polymer_concentration=np.random.beta(5, 3), # around 70%  148 #salt_concentration=0.0,  149 # keep salt concentration extremely low  150 # and use explicit molar to match polymer concentration  151 salt_concentration=np.random.beta(1, 100)*6.022136e-4,  152 #contrast_factor=10.0,  153 contrast_fact=np.random.uniform(1, 100),  154 #bjerrum_length=7.1,  155 bjerrum_length=np.random.uniform(1, 10),  156 #virial_param=12.0,  157 virial_param=np.random.uniform(-1000, 30),  158 #monomer_length=10.0,  159 monomer_length=10.0**(4*np.random.beta(1.5, 3)),  160 #ionization_degree=0.05,  161 ionization_degree=np.random.beta(1.5, 4),  162 )  163 return pars 141 164  142 165 demo = dict(scale=1, background=0.1, -
sasmodels/models/binary_hard_sphere.py
r925ad6e r30b60d2  23 23 :nowrap: 24 24  25  \begin{align } 25 \begin{align*} 26 26 x &= \frac{(\phi_2 / \phi)\alpha^3}{(1-(\phi_2/\phi) + (\phi_2/\phi) 27 27 \alpha^3)} \\ 28 28 \phi &= \phi_1 + \phi_2 = \text{total volume fraction} \\ 29 29 \alpha &= R_1/R_2 = \text{size ratio} 30  \end{align } 30 \end{align*} 31 31  32 32 The 2D scattering intensity is the same as 1D, regardless of the orientation of … …  110 110 source = ["lib/sas_3j1x_x.c", "binary_hard_sphere.c"] 111 111   112 def random():  113 import numpy as np  114 # TODO: binary_hard_sphere fails at low qr  115 radius_lg = 10**np.random.uniform(2, 4.7)  116 radius_sm = 10**np.random.uniform(2, 4.7)  117 volfraction_lg = 10**np.random.uniform(-3, -0.3)  118 volfraction_sm = 10**np.random.uniform(-3, -0.3)  119 # TODO: Get slightly different results if large and small are swapped  120 # modify the model so it doesn't care which is which  121 if radius_lg < radius_sm:  122 radius_lg, radius_sm = radius_sm, radius_lg  123 volfraction_lg, volfraction_sm = volfraction_sm, volfraction_lg  124 pars = dict(  125 radius_lg=radius_lg,  126 radius_sm=radius_sm,  127 volfraction_lg=volfraction_lg,  128 volfraction_sm=volfraction_sm,  129 )  130 return pars  131  112 132 # parameters for demo and documentation 113 133 demo = dict(sld_lg=3.5, sld_sm=0.5, sld_solvent=6.36, -
sasmodels/models/broad_peak.py
r43fe34b r0bdddc2  94 94 Iq.vectorized = True # Iq accepts an array of q values 95 95   96 def random():  97 import numpy as np  98 pars = dict(  99 scale=1,  100 porod_scale=10**np.random.uniform(-8, -5),  101 porod_exp=np.random.uniform(1, 6),  102 lorentz_scale=10**np.random.uniform(0.3, 6),  103 lorentz_length=10**np.random.uniform(0, 2),  104 peak_pos=10**np.random.uniform(-3, -1),  105 lorentz_exp=np.random.uniform(1, 4),  106 )  107 pars['lorentz_length'] /= pars['peak_pos']  108 pars['lorentz_scale'] *= pars['porod_scale'] / pars['peak_pos']**pars['porod_exp']  109 #pars['porod_scale'] = 0.  110 return pars  111  96 112 demo = dict(scale=1, background=0, 97 113 porod_scale=1.0e-05, porod_exp=3, -
sasmodels/models/capped_cylinder.py
r9802ab3 r31df0c9  80 80  81 81 .. [#] H Kaya, *J. Appl. Cryst.*, 37 (2004) 223-230 82  .. [#] H Kaya and N-R deSouza, *J. Appl. Cryst.*, 37 (2004) 508-509 (addenda  82 .. [#] H Kaya and N-R deSouza, *J. Appl. Cryst.*, 37 (2004) 508-509 (addenda 83 83 and errata) 84 84  … …  136 136 source = ["lib/polevl.c", "lib/sas_J1.c", "lib/gauss76.c", "capped_cylinder.c"] 137 137   138 def random():  139 import numpy as np  140 # TODO: increase volume range once problem with bell radius is fixed  141 # The issue is that bell radii of more than about 200 fail at high q  142 V = 10**np.random.uniform(7, 9)  143 bar_volume = 10**np.random.uniform(-4, -1)*V  144 bell_volume = V - bar_volume  145 bell_radius = (bell_volume/6)**0.3333 # approximate  146 min_bar = bar_volume/np.pi/bell_radius**2  147 bar_length = 10**np.random.uniform(0, 3)*min_bar  148 bar_radius = np.sqrt(bar_volume/bar_length/np.pi)  149 if bar_radius > bell_radius:  150 bell_radius, bar_radius = bar_radius, bell_radius  151 pars = dict(  152 #background=0,  153 radius_cap=bell_radius,  154 radius=bar_radius,  155 length=bar_length,  156 )  157 return pars  158   159  138 160 demo = dict(scale=1, background=0, 139 161 sld=6, sld_solvent=1, -
sasmodels/models/core_multi_shell.py
r5a0b3d7 r1511c37c  70 70 sld_shell: the SLD of the shell# 71 71 A_shell#: the coefficient in the exponential function 72   73    72   73  74 74 scale: 1.0 if data is on absolute scale 75 75 volfraction: volume fraction of spheres … …  102 102  103 103 source = ["lib/sas_3j1x_x.c", "core_multi_shell.c"]  104   105 def random():  106 import numpy as np  107 num_shells = np.minimum(np.random.poisson(3)+1, 10)  108 total_radius = 10**np.random.uniform(1.7, 4)  109 thickness = np.random.exponential(size=num_shells+1)  110 thickness *= total_radius/np.sum(thickness)  111 pars = dict(  112 #background=0,  113 n=num_shells,  114 radius=thickness[0],  115 )  116 for k, v in enumerate(thickness[1:]):  117 pars['thickness%d'%(k+1)] = v  118 return pars 104 119  105 120 def profile(sld_core, radius, sld_solvent, n, sld, thickness): -
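The multi-shell generator above rescales exponential draws so that the core radius plus all shell thicknesses sum exactly to the chosen outer radius. A standalone check of that normalization, mirroring the code in the diff rather than importing sasmodels:

```python
import numpy as np

rng = np.random.RandomState(2)
num_shells = min(rng.poisson(3) + 1, 10)
total_radius = 10**rng.uniform(1.7, 4)
thickness = rng.exponential(size=num_shells + 1)
# Rescale so core radius + shell thicknesses hit the target outer radius.
thickness *= total_radius/np.sum(thickness)
# thickness[0] is the core radius; the rest are the shell thicknesses,
# so the outer edge of the last shell lands exactly at total_radius.
assert abs(np.sum(thickness) - total_radius) < 1e-8*total_radius
```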
sasmodels/models/core_shell_bicelle.py
r9802ab3 r30b60d2  17 17 .. figure:: img/core_shell_bicelle_parameters.png 18 18  19  Cross section of cylindrical symmetry model used here. Users will have 20  to decide how to distribute "heads" and "tails" between the rim, face  19 Cross section of cylindrical symmetry model used here. Users will have  20 to decide how to distribute "heads" and "tails" between the rim, face 21 21 and core regions in order to estimate appropriate starting parameters. 22 22  … …  27 27 .. math:: 28 28  29  \rho(r) = 30  \begin{cases}  29 \rho(r) =  30 \begin{cases} 31 31 &\rho_c \text{ for } 0 \lt r \lt R; -L \lt z\lt L \\[1.5ex] 32 32 &\rho_f \text{ for } 0 \lt r \lt R; -(L+2t) \lt z\lt -L; … …  41 41  42 42 I(Q,\alpha) = \frac{\text{scale}}{V_t} \cdot 43  F(Q,\alpha)^2 .sin(\alpha) + \text{background} 43 F(Q,\alpha)^2 \cdot sin(\alpha) + \text{background} 44 44  45 45 where 46 46  47 47 .. math::  48 :nowrap: 48 49  49  \begin{align} 50  F(Q,\alpha) = &\bigg[  50 \begin{align*}  51 F(Q,\alpha) = &\bigg[ 51 52 (\rho_c - \rho_f) V_c \frac{2J_1(QRsin \alpha)}{QRsin\alpha}\frac{sin(QLcos\alpha/2)}{Q(L/2)cos\alpha} \\ 52 53 &+(\rho_f - \rho_r) V_{c+f} \frac{2J_1(QRsin\alpha)}{QRsin\alpha}\frac{sin(Q(L/2+t_f)cos\alpha)}{Q(L/2+t_f)cos\alpha} \\ 53 54 &+(\rho_r - \rho_s) V_t \frac{2J_1(Q(R+t_r)sin\alpha)}{Q(R+t_r)sin\alpha}\frac{sin(Q(L/2+t_f)cos\alpha)}{Q(L/2+t_f)cos\alpha} 54 55 \bigg] 55  \end{align} 56 \end{align*} 56 57  57 58 where $V_t$ is the total volume of the bicelle, $V_c$ the volume of the core, … …  63 64 cylinders is then given by integrating over all possible $\theta$ and $\phi$. 64 65  65  For oriented bicelles the *theta*, and *phi* orientation parameters will appear when fitting 2D data,  66 For oriented bicelles the *theta*, and *phi* orientation parameters will appear when fitting 2D data, 66 67 see the :ref:`cylinder` model for further information.
67 68 Our implementation of the scattering kernel and the 1D scattering intensity … …  96 97 title = "Circular cylinder with a core-shell scattering length density profile.." 97 98 description = """ 98  P(q,alpha)= (scale/Vs)*f(q)^(2) + bkg, where:  99 P(q,alpha)= (scale/Vs)*f(q)^(2) + bkg, where: 99 100 f(q)= Vt(sld_rim - sld_solvent)* sin[qLt.cos(alpha)/2] 100 101 /[qLt.cos(alpha)/2]*J1(qRout.sin(alpha)) … …  147 148 "core_shell_bicelle.c"] 148 149   150 def random():  151 import numpy as np  152 pars = dict(  153 radius=10**np.random.uniform(1.3, 3),  154 length=10**np.random.uniform(1.3, 4),  155 thick_rim=10**np.random.uniform(0, 1.7),  156 thick_face=10**np.random.uniform(0, 1.7),  157 )  158 return pars  159  149 160 demo = dict(scale=1, background=0, 150 161 radius=20.0, -
sasmodels/models/core_shell_bicelle_elliptical.py
r9802ab3 r30b60d2  6 6 core-shell scattering length density profile. Thus this is a variation 7 7 of the core-shell bicelle model, but with an elliptical cylinder for the core. 8  Outer shells on the rims and flat ends may be of different thicknesses and  8 Outer shells on the rims and flat ends may be of different thicknesses and 9 9 scattering length densities. The form factor is normalized by the total particle volume. 10 10  … …  17 17 .. figure:: img/core_shell_bicelle_parameters.png 18 18  19  Cross section of model used here. Users will have 20  to decide how to distribute "heads" and "tails" between the rim, face  19 Cross section of model used here. Users will have  20 to decide how to distribute "heads" and "tails" between the rim, face 21 21 and core regions in order to estimate appropriate starting parameters. 22 22  … …  27 27 .. math:: 28 28  29  \rho(r) = 30  \begin{cases}  29 \rho(r) =  30 \begin{cases} 31 31 &\rho_c \text{ for } 0 \lt r \lt R; -L \lt z\lt L \\[1.5ex] 32 32 &\rho_f \text{ for } 0 \lt r \lt R; -(L+2t) \lt z\lt -L; … …  42 42  43 43 I(Q,\alpha,\psi) = \frac{\text{scale}}{V_t} \cdot 44  F(Q,\alpha, \psi)^2 .sin(\alpha) + \text{background} 44 F(Q,\alpha, \psi)^2 \cdot sin(\alpha) + \text{background} 45 45  46  where a numerical integration of $F(Q,\alpha, \psi)^2 .sin(\alpha)$ is carried out over \alpha and \psi for: 46 where a numerical integration of $F(Q,\alpha, \psi)^2 \cdot sin(\alpha)$ is carried out over \alpha and \psi for: 47 47  48 48 .. 
math::  49 :nowrap: 49 50  50  \begin{align} 51  F(Q,\alpha,\psi) = &\bigg[  51 \begin{align*}  52 F(Q,\alpha,\psi) = &\bigg[ 52 53 (\rho_c - \rho_f) V_c \frac{2J_1(QR'sin \alpha)}{QR'sin\alpha}\frac{sin(QLcos\alpha/2)}{Q(L/2)cos\alpha} \\ 53 54 &+(\rho_f - \rho_r) V_{c+f} \frac{2J_1(QR'sin\alpha)}{QR'sin\alpha}\frac{sin(Q(L/2+t_f)cos\alpha)}{Q(L/2+t_f)cos\alpha} \\ 54 55 &+(\rho_r - \rho_s) V_t \frac{2J_1(Q(R'+t_r)sin\alpha)}{Q(R'+t_r)sin\alpha}\frac{sin(Q(L/2+t_f)cos\alpha)}{Q(L/2+t_f)cos\alpha} 55 56 \bigg] 56  \end{align} 57 \end{align*} 57 58  58 59 where … …  61 62  62 63 R'=\frac{R}{\sqrt{2}}\sqrt{(1+X_{core}^{2}) + (1-X_{core}^{2})cos(\psi)} 63   64   65  and $V_t = \pi.(R+t_r)(Xcore.R+t_r)^2.(L+2.t_f)$ is the total volume of the bicelle, 66  $V_c = \pi.Xcore.R^2.L$ the volume of the core, $V_{c+f} = \pi.Xcore.R^2.(L+2.t_f)$  64   65   66 and $V_t = \pi.(R+t_r)(Xcore.R+t_r)^2.(L+2.t_f)$ is the total volume of the bicelle,  67 $V_c = \pi.Xcore.R^2.L$ the volume of the core, $V_{c+f} = \pi.Xcore.R^2.(L+2.t_f)$ 67 68 the volume of the core plus the volume of the faces, $R$ is the radius 68  of the core, $Xcore$ is the axial ratio of the core, $L$ the length of the core, 69  $t_f$ the thickness of the face, $t_r$ the thickness of the rim and $J_1$ the usual 70  first order bessel function. The core has radii $R$ and $Xcore.R$ so is circular, 71  as for the core_shell_bicelle model, for $Xcore$ =1. Note that you may need to 72  limit the range of $Xcore$, especially if using the Monte-Carlo algorithm, as  69 of the core, $Xcore$ is the axial ratio of the core, $L$ the length of the core,  70 $t_f$ the thickness of the face, $t_r$ the thickness of the rim and $J_1$ the usual  71 first order bessel function. The core has radii $R$ and $Xcore.R$ so is circular,  72 as for the core_shell_bicelle model, for $Xcore$ =1. 
Note that you may need to  73 limit the range of $Xcore$, especially if using the Monte-Carlo algorithm, as 73 74 setting radius to $R/Xcore$ and axial ratio to $1/Xcore$ gives an equivalent solution! 74 75  … …  76 77 bicelles is then given by integrating over all possible $\alpha$ and $\psi$. 77 78  78  For oriented bicelles the *theta*, *phi* and *psi* orientation parameters will appear when fitting 2D data,  79 For oriented bicelles the *theta*, *phi* and *psi* orientation parameters will appear when fitting 2D data, 79 80 see the :ref:`elliptical-cylinder` model for further information. 80 81  … …  82 83 .. figure:: img/elliptical_cylinder_angle_definition.png 83 84  84  Definition of the angles for the oriented core_shell_bicelle_elliptical particles.  85 Definition of the angles for the oriented core_shell_bicelle_elliptical particles. 85 86  86 87  … …  105 106 description = """ 106 107 core_shell_bicelle_elliptical 107  Elliptical cylinder core, optional shell on the two flat faces, and shell of 108  uniform thickness on its rim (extending around the end faces).  108 Elliptical cylinder core, optional shell on the two flat faces, and shell of  109 uniform thickness on its rim (extending around the end faces). 109 110 Please see full documentation for equations and further details. 110 111 Involves a double numerical integral around the ellipsoid diameter … …  136 137 "core_shell_bicelle_elliptical.c"] 137 138  138  demo = dict(scale=1, background=0, 139  radius=30.0, 140  x_core=3.0, 141  thick_rim=8.0, 142  thick_face=14.0, 143  length=50.0, 144  sld_core=4.0, 145  sld_face=7.0, 146  sld_rim=1.0, 147  sld_solvent=6.0, 148  theta=90, 149  phi=0, 150  psi=0)  139 def random():  140 import numpy as np  141 outer_major = 10**np.random.uniform(1, 4.7)  142 outer_minor = 10**np.random.uniform(1, 4.7)  143 # Use a distribution with a preference for thin shell or thin core,  144 # limited by the minimum radius. 
Avoid core,shell radii < 1  145 min_radius = min(outer_major, outer_minor)  146 thick_rim = np.random.beta(0.5, 0.5)*(min_radius-2) + 1  147 radius_major = outer_major - thick_rim  148 radius_minor = outer_minor - thick_rim  149 radius = radius_major  150 x_core = radius_minor/radius_major  151 outer_length = 10**np.random.uniform(1, 4.7)  152 # Caps should be a small percentage of the total length, but at least one  153 # angstrom long. Since outer length >= 10, the following will suffice  154 thick_face = 10**np.random.uniform(-np.log10(outer_length), -1)*outer_length  155 length = outer_length - thick_face  156 pars = dict(  157 radius=radius,  158 x_core=x_core,  159 thick_rim=thick_rim,  160 thick_face=thick_face,  161 length=length  162 )  163 return pars  164  151 165  152 166 q = 0.1 -
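The face-thickness draw above, `10**np.random.uniform(-np.log10(outer_length), -1)*outer_length`, is built so the face is at least one angstrom and at most 10% of the total length, as the code comment notes, provided `outer_length >= 10`. A standalone check of those bounds (not importing sasmodels):

```python
import numpy as np

rng = np.random.RandomState(6)
for _ in range(100):
    outer_length = 10**rng.uniform(1, 4.7)  # always >= 10
    # The exponent runs from -log10(outer_length) to -1, so the multiplier
    # spans [1/outer_length, 0.1] and the face thickness spans
    # [1 angstrom, 10% of the total length].
    thick_face = 10**rng.uniform(-np.log10(outer_length), -1)*outer_length
    assert 0.999 <= thick_face <= 0.1001*outer_length
```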
sasmodels/models/core_shell_cylinder.py
r9b79f29 r9f6823b  142 142 return whole, whole - core 143 143   144 def random():  145 import numpy as np  146 outer_radius = 10**np.random.uniform(1, 4.7)  147 # Use a distribution with a preference for thin shell or thin core  148 # Avoid core,shell radii < 1  149 radius = np.random.beta(0.5, 0.5)*(outer_radius-2) + 1  150 thickness = outer_radius - radius  151 length = np.random.uniform(1, 4.7)  152 pars = dict(  153 radius=radius,  154 thickness=thickness,  155 length=length,  156 )  157 return pars  158  144 159 demo = dict(scale=1, background=0, 145 160 sld_core=6, sld_shell=8, sld_solvent=1, -
sasmodels/models/core_shell_ellipsoid.py
r9802ab3 r30b60d2  25 25 ellipsoid. This may have some undesirable effects if the aspect ratio of the 26 26 ellipsoid is large (ie, if $X << 1$ or $X >> 1$ ), when the $S(q)$ 27  - which assumes spheres - will not in any case be valid. Generating a 28  custom product model will enable separate effective volume fraction and effective  27 - which assumes spheres - will not in any case be valid. Generating a  28 custom product model will enable separate effective volume fraction and effective 29 29 radius in the $S(q)$. 30 30  … …  44 44  45 45 .. math:: 46  \begin{align}   46 :nowrap:  47   48 \begin{align*} 47 49 F(q,\alpha) = &f(q,radius\_equat\_core,radius\_equat\_core.x\_core,\alpha) \\ 48 50 &+ f(q,radius\_equat\_core + thick\_shell,radius\_equat\_core.x\_core + thick\_shell.x\_polar\_shell,\alpha) 49  \end{align } 51 \end{align*} 50 52  51 53 where 52    54  53 55 .. math:: 54 56  … …  77 79 F^2(q)=\int_{0}^{\pi/2}{F^2(q,\alpha)\sin(\alpha)d\alpha} 78 80  79  For oriented ellipsoids the *theta*, *phi* and *psi* orientation parameters will appear when fitting 2D data,  81 For oriented ellipsoids the *theta*, *phi* and *psi* orientation parameters will appear when fitting 2D data, 80 82 see the :ref:`elliptical-cylinder` model for further information. 
81 83  … …  151 153 return ellipsoid_ER(polar_outer, equat_outer) 152 154  153   154  demo = dict(scale=0.05, background=0.001, 155  radius_equat_core=20.0, 156  x_core=3.0, 157  thick_shell=30.0, 158  x_polar_shell=1.0, 159  sld_core=2.0, 160  sld_shell=1.0, 161  sld_solvent=6.3, 162  theta=0, 163  phi=0)  155 def random():  156 import numpy as np  157 V = 10**np.random.uniform(5, 12)  158 outer_polar = 10**np.random.uniform(1.3, 4)  159 outer_equatorial = np.sqrt(V/outer_polar) # ignore 4/3 pi  160 # Use a distribution with a preference for thin shell or thin core  161 # Avoid core,shell radii < 1  162 thickness_polar = np.random.beta(0.5, 0.5)*(outer_polar-2) + 1  163 thickness_equatorial = np.random.beta(0.5, 0.5)*(outer_equatorial-2) + 1  164 radius_polar = outer_polar - thickness_polar  165 radius_equatorial = outer_equatorial - thickness_equatorial  166 x_core = radius_polar/radius_equatorial  167 x_polar_shell = thickness_polar/thickness_equatorial  168 pars = dict(  169 #background=0, sld=0, sld_solvent=1,  170 radius_equat_core=radius_equatorial,  171 x_core=x_core,  172 thick_shell=thickness_equatorial,  173 x_polar_shell=x_polar_shell,  174 )  175 return pars 164 176  165 177 q = 0.1 -
sasmodels/models/core_shell_parallelepiped.c
r92dfe0c rc69d6d6  1  double form_volume(double length_a, double length_b, double length_c,  1 double form_volume(double length_a, double length_b, double length_c, 2 2 double thick_rim_a, double thick_rim_b, double thick_rim_c); 3 3 double Iq(double q, double core_sld, double arim_sld, double brim_sld, double crim_sld, … …  9 9 double thick_rim_c, double theta, double phi, double psi); 10 10  11  double form_volume(double length_a, double length_b, double length_c,  11 double form_volume(double length_a, double length_b, double length_c, 12 12 double thick_rim_a, double thick_rim_b, double thick_rim_c) 13 13 { 14 14 //return length_a * length_b * length_c; 15  return length_a * length_b * length_c + 16  2.0 * thick_rim_a * length_b * length_c +  15 return length_a * length_b * length_c +  16 2.0 * thick_rim_a * length_b * length_c + 17 17 2.0 * thick_rim_b * length_a * length_c + 18 18 2.0 * thick_rim_c * length_a * length_b; … …  34 34 // Code converted from functions CSPPKernel and CSParallelepiped in libCylinder.c_scaled 35 35 // Did not understand the code completely, it should be rechecked (Miguel Gonzalez) 36    36 //Code is rewritten,the code is compliant with Diva Singhs thesis now (Dirk Honecker)  37  37 38 const double mu = 0.5 * q * length_b; 38    39  39 40 //calculate volume before rescaling (in original code, but not used) 40  //double vol = form_volume(length_a, length_b, length_c, thick_rim_a, thick_rim_b, thick_rim_c); 41  //double vol = length_a * length_b * length_c + 42  // 2.0 * thick_rim_a * length_b * length_c +  41 //double vol = form_volume(length_a, length_b, length_c, thick_rim_a, thick_rim_b, thick_rim_c);  42 //double vol = length_a * length_b * length_c +  43 // 2.0 * thick_rim_a * length_b * length_c + 43 44 // 2.0 * thick_rim_b * length_a * length_c + 44 45 // 2.0 * thick_rim_c * length_a * length_b; 45    46  46 47 // Scale sides by B 47 48 const double a_scaled = length_a / length_b; 48 49 const double c_scaled = length_c / 
length_b; 49 50  50  // ta and tb correspond to the definitions in CSPPKernel, but they don't make sense to me (MG) 51  // the a_scaled in the definition of tb was present in CSPPKernel in libCylinder.c, 52  // while in cspkernel in csparallelepiped.cpp (used for the 2D), all the definitions 53  // for ta, tb, tc use also A + 2*rim_thickness (but not scaled by B!!!) 54  double ta = (a_scaled + 2.0*thick_rim_a)/length_b; 55  double tb = (a_scaled + 2.0*thick_rim_b)/length_b;  51 double ta = a_scaled + 2.0*thick_rim_a/length_b; // incorrect ta = (a_scaled + 2.0*thick_rim_a)/length_b;  52 double tb = 1+ 2.0*thick_rim_b/length_b; // incorrect tb = (a_scaled + 2.0*thick_rim_b)/length_b;  53 double tc = c_scaled + 2.0*thick_rim_c/length_b; //not present 56 54  57 55 double Vin = length_a * length_b * length_c; … …  62 60 double V1 = (2.0 * thick_rim_a * length_b * length_c); // incorrect V1 (aa*bb*cc+2*ta*bb*cc) 63 61 double V2 = (2.0 * length_a * thick_rim_b * length_c); // incorrect V2(aa*bb*cc+2*aa*tb*cc)  62 double V3 = (2.0 * length_a * length_b * thick_rim_c); //not present  63 double Vot = Vin + V1 + V2 + V3; 64 64  65 65 // Scale factors (note that drC is not used later) … …  67 67 const double drhoA = (arim_sld-solvent_sld); 68 68 const double drhoB = (brim_sld-solvent_sld); 69  //const double drC_Vot = (crim_sld-solvent_sld)*Vot;  69 const double drhoC = (crim_sld-solvent_sld); // incorrect const double drC_Vot = (crim_sld-solvent_sld)*Vot;  70  70 71  71 72 // Precompute scale factors for combining cross terms from the shape 72 73 const double scale23 = drhoA*V1; 73 74 const double scale14 = drhoB*V2; 74  const double scale12 = drho0*Vin - scale23 - scale14;  75 const double scale24 = drhoC*V3;  76 const double scale11 = drho0*Vin;  77 const double scale12 = drho0*Vin - scale23 - scale14 - scale24; 75 78  76 79 // outer integral (with gauss points), integration limits = 0, 1 … …  83 86 // inner integral (with gauss points), integration limits = 0, 1 84 87 
double inner_total = 0.0;  88 double inner_total_crim = 0.0; 85 89 for(int j=0; j<76; j++) { 86 90 const double uu = 0.5 * ( Gauss76Z[j] + 1.0 ); … …  88 92 SINCOS(M_PI_2*uu, sin_uu, cos_uu); 89 93 const double si1 = sas_sinx_x(mu_proj * sin_uu * a_scaled); 90  const double si2 = sas_sinx_x(mu_proj * cos_uu ); 94 const double si2 = sas_sinx_x(mu_proj * cos_uu ); 91 95 const double si3 = sas_sinx_x(mu_proj * sin_uu * ta); 92 96 const double si4 = sas_sinx_x(mu_proj * cos_uu * tb); … …  94 98 // Expression in libCylinder.c (neither drC nor Vot are used) 95 99 const double form = scale12*si1*si2 + scale23*si2*si3 + scale14*si1*si4;  100 const double form_crim = scale11*si1*si2; 96 101  97  // To note also that in csparallelepiped.cpp, there is a function called 98  // cspkernel, which seems to make more sense and has the following comment: 99  // This expression is different from NIST/IGOR package (I strongly believe the IGOR is wrong!!!). 10/15/2010. 100  // tmp =( dr0*tmp1*tmp2*tmp3*Vin + drA*(tmpt1-tmp1)*tmp2*tmp3*V1+ drB*tmp1*(tmpt2-tmp2)*tmp3*V2 + drC*tmp1*tmp2*(tmpt3-tmp3)*V3)* 101  // ( dr0*tmp1*tmp2*tmp3*Vin + drA*(tmpt1-tmp1)*tmp2*tmp3*V1+ drB*tmp1*(tmpt2-tmp2)*tmp3*V2 + drC*tmp1*tmp2*(tmpt3-tmp3)*V3); // correct FF : square of sum of phase factors 102  // This is the function called by csparallelepiped_analytical_2D_scaled, 103  // while CSParallelepipedModel calls CSParallelepiped in libCylinder.c  104    102  105 103 // correct FF : sum of square of phase factors 106 104 inner_total += Gauss76Wt[j] * form * form;  105 inner_total_crim += Gauss76Wt[j] * form_crim * form_crim; 107 106 } 108 107 inner_total *= 0.5; 109    108 inner_total_crim *= 0.5; 110 109 // now sum up the outer integral 111 110 const double si = sas_sinx_x(mu * c_scaled * sigma); 112  outer_total += Gauss76Wt[i] * inner_total * si * si;  111 const double si_crim = sas_sinx_x(mu * tc * sigma);  112 outer_total += Gauss76Wt[i] * (inner_total * si * si + inner_total_crim * si_crim * 
si_crim); 113 113 } 114 114 outer_total *= 0.5; … …  154 154  155 155 // The definitions of ta, tb, tc are not the same as in the 1D case because there is no 156  // the scaling by B. The use of length_a for the 3 of them seems clearly a mistake to me, 157  // but for the moment I let it like this until understanding better the code.  156 // the scaling by B. 158 157 double ta = length_a + 2.0*thick_rim_a; 159  double tb = length_a + 2.0*thick_rim_b; 160  double tc = length_a + 2.0*thick_rim_c; 158 double tb = length_b + 2.0*thick_rim_b;  159 double tc = length_c + 2.0*thick_rim_c; 161 160 //handle arg=0 separately, as sin(t)/t -> 1 as t->0 162 161 double siA = sas_sinx_x(0.5*q*length_a*xhat); … …  166 165 double siBt = sas_sinx_x(0.5*q*tb*yhat); 167 166 double siCt = sas_sinx_x(0.5*q*tc*zhat); 168    167  169 168  170 169 // f uses Vin, V1, V2, and V3 and it seems to have more sense than the value computed … …  173 172 + drA*(siAt-siA)*siB*siC*V1 174 173 + drB*siA*(siBt-siB)*siC*V2 175  + drC*siA*siB*(siCt*siCt-siC)*V3); 176    174 + drC*siA*siB*(siCt-siC)*V3);  175  177 176 return 1.0e-4 * f * f; 178 177 } -
sasmodels/models/core_shell_parallelepiped.py
r9b79f29 rfa70e04  4 4  5 5 Calculates the form factor for a rectangular solid with a core-shell structure. 6  The thickness and the scattering length density of the shell or  6 The thickness and the scattering length density of the shell or 7 7 "rim" can be different on each (pair) of faces. However at this time 8 8 the 1D calculation does **NOT** actually calculate a c face rim despite the presence of … …  42 42  43 43 **meaning that there are "gaps" at the corners of the solid.** Again note that 44  $t_C = 0$ currently.  44 $t_C = 0$ currently. 45 45  46 46 The intensity calculated follows the :ref:`parallelepiped` model, with the … …  82 82 based on the the averaged effective radius $(=\sqrt{(A+2t_A)(B+2t_B)/\pi})$ 83 83 and length $(C+2t_C)$ values, after appropriately 84  sorting the three dimensions to give an oblate or prolate particle, to give an  84 sorting the three dimensions to give an oblate or prolate particle, to give an 85 85 effective radius, for $S(Q)$ when $P(Q) * S(Q)$ is applied. 86 86  … …  126 126 title = "Rectangular solid with a core-shell structure." 127 127 description = """ 128  P(q)=  128 P(q)= 129 129 """ 130 130 category = "shape:parallelepiped" … …  179 179 # VR defaults to 1.0 180 180   181 def random():  182 import numpy as np  183 outer = 10**np.random.uniform(1, 4.7, size=3)  184 thick = np.random.beta(0.5, 0.5, size=3)*(outer-2) + 1  185 length = outer - thick  186 pars = dict(  187 length_a=length[0],  188 length_b=length[1],  189 length_c=length[2],  190 thick_rim_a=thick[0],  191 thick_rim_b=thick[1],  192 thick_rim_c=thick[2],  193 )  194 return pars  195  181 196 # parameters for demo 182 197 demo = dict(scale=1, background=0.0, … …  196 211  197 212 # rkh 7/4/17 add random unit test for 2d, note make all params different, 2d values not tested against other codes or models 198  qx, qy = 0.2 * cos(pi/6.), 0.2 * sin(pi/6.) 
199  tests = [[{}, 0.2, 0.533149288477], 200  [{}, [0.2], [0.533149288477]], 201  [{'theta':10.0, 'phi':20.0}, (qx, qy), 0.0853299803222], 202  [{'theta':10.0, 'phi':20.0}, [(qx, qy)], [0.0853299803222]], 203  ] 204  del qx, qy # not necessary to delete, but cleaner  213 if 0: # pak: model rewrite; need to update tests  214 qx, qy = 0.2 * cos(pi/6.), 0.2 * sin(pi/6.)  215 tests = [[{}, 0.2, 0.533149288477],  216 [{}, [0.2], [0.533149288477]],  217 [{'theta':10.0, 'phi':20.0}, (qx, qy), 0.0853299803222],  218 [{'theta':10.0, 'phi':20.0}, [(qx, qy)], [0.0853299803222]],  219 ]  220 del qx, qy # not necessary to delete, but cleaner -
sasmodels/models/core_shell_sphere.py
r925ad6e r9f6823b  57 57 title = "Form factor for a monodisperse spherical particle with particle with a core-shell structure." 58 58 description = """ 59  F^2(q) = 3/V_s [V_c (sld_core-sld_shell) (sin(q*radius)-q*radius*cos(q*radius))/(q*radius)^3  59 F^2(q) = 3/V_s [V_c (sld_core-sld_shell) (sin(q*radius)-q*radius*cos(q*radius))/(q*radius)^3 60 60 + V_s (sld_shell-sld_solvent) (sin(q*r_s)-q*r_s*cos(q*r_s))/(q*r_s)^3] 61 61  … …  99 99 return whole, whole - core 100 100   101 def random():  102 import numpy as np  103 outer_radius = 10**np.random.uniform(1.3, 4.3)  104 # Use a distribution with a preference for thin shell or thin core  105 # Avoid core,shell radii < 1  106 radius = np.random.beta(0.5, 0.5)*(outer_radius-2) + 1  107 thickness = outer_radius - radius  108 pars = dict(  109 radius=radius,  110 thickness=thickness,  111 )  112 return pars  113  101 114 tests = [ 102 115 [{'radius': 20.0, 'thickness': 10.0}, 'ER', 30.0], 103  104  105   116 # TODO: VR test suppressed until we sort out new product model  117 # and determine what to do with volume ratio.  118 #[{'radius': 20.0, 'thickness': 10.0}, 'VR', 0.703703704], 106 119  107  # The SasView test result was 0.00169, with a background of 0.001 108  [{'radius': 60.0, 'thickness': 10.0, 'sld_core': 1.0, 'sld_shell':2.0, 109  'sld_solvent':3.0, 'background':0.0}, 110  0.4, 0.000698838],  120 # The SasView test result was 0.00169, with a background of 0.001  121 [{'radius': 60.0, 'thickness': 10.0, 'sld_core': 1.0, 'sld_shell': 2.0,  122 'sld_solvent': 3.0, 'background': 0.0}, 0.4, 0.000698838], 111 123 ] -
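Several of the new generators, including the core-shell sphere above, draw the core/shell split from `np.random.beta(0.5, 0.5)`. That Beta distribution is U-shaped, so draws pile up near 0 and 1, giving the stated "preference for thin shell or thin core", while the `*(outer-2) + 1` map keeps both core radius and shell thickness at or above 1. A standalone sketch of that trick (outside sasmodels):

```python
import numpy as np

rng = np.random.RandomState(3)
outer_radius = 10**rng.uniform(1.3, 4.3, size=10000)
# Beta(0.5, 0.5) is U-shaped: most draws sit near 0 or 1, so the sampled
# particle tends to have either a thin core or a thin shell.
radius = rng.beta(0.5, 0.5, size=outer_radius.size)*(outer_radius - 2) + 1
thickness = outer_radius - radius
# The affine map keeps both dimensions >= 1 angstrom.
assert radius.min() >= 1 and thickness.min() >= 1
```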
sasmodels/models/cylinder.py
r9802ab3 r31df0c9  63 63 .. figure:: img/cylinder_angle_definition.png 64 64  65  Definition of the $\theta$ and $\phi$ orientation angles for a cylinder relative 66  to the beam line coordinates, plus an indication of their orientation distributions 67  which are described as rotations about each of the perpendicular axes $\delta_1$ and $\delta_2$  65 Definition of the $\theta$ and $\phi$ orientation angles for a cylinder relative  66 to the beam line coordinates, plus an indication of their orientation distributions  67 which are described as rotations about each of the perpendicular axes $\delta_1$ and $\delta_2$ 68 68 in the frame of the cylinder itself, which when $\theta = \phi = 0$ are parallel to the $Y$ and $X$ axes. 69 69  … …  72 72 Examples for oriented cylinders. 73 73  74  The $\theta$ and $\phi$ parameters to orient the cylinder only appear in the model when fitting 2d data.  74 The $\theta$ and $\phi$ parameters to orient the cylinder only appear in the model when fitting 2d data. 75 75 On introducing "Orientational Distribution" in the angles, "distribution of theta" and "distribution of phi" parameters will 76  appear. These are actually rotations about the axes $\delta_1$ and $\delta_2$ of the cylinder, which when $\theta = \phi = 0$ are parallel  76 appear. These are actually rotations about the axes $\delta_1$ and $\delta_2$ of the cylinder, which when $\theta = \phi = 0$ are parallel 77 77 to the $Y$ and $X$ axes of the instrument respectively. Some experimentation may be required to understand the 2d patterns fully. 78  (Earlier implementations had numerical integration issues in some circumstances when orientation distributions passed through 90 degrees, such 79  situations, with very broad distributions, should still be approached with care.)  
78 (Earlier implementations had numerical integration issues in some circumstances when orientation distributions passed through 90 degrees, such  79 situations, with very broad distributions, should still be approached with care.) 80 80  81 81 Validation … …  150 150 return 0.5 * (ddd) ** (1. / 3.) 151 151   152 def random():  153 import numpy as np  154 V = 10**np.random.uniform(5, 12)  155 length = 10**np.random.uniform(-2, 2)*V**0.333  156 radius = np.sqrt(V/length/np.pi)  157 pars = dict(  158 #scale=1,  159 #background=0,  160 length=length,  161 radius=radius,  162 )  163 return pars  164   165  152 166 # parameters for demo 153 167 demo = dict(scale=1, background=0, -
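The cylinder `random()` above fixes the particle volume first and back-solves the radius from $V = \pi r^2 L$, so the aspect ratio varies over four decades while the volume stays on target. A standalone check of that back-solve (mirroring the diff, not importing sasmodels):

```python
import numpy as np

rng = np.random.RandomState(4)
V = 10**rng.uniform(5, 12)
length = 10**rng.uniform(-2, 2)*V**0.333
radius = np.sqrt(V/length/np.pi)
# radius is back-solved from V = pi * radius**2 * length, so the sampled
# cylinder reproduces the target volume to rounding error.
assert abs(np.pi*radius**2*length - V) < 1e-6*V
```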
sasmodels/models/dab.py
r4962519 r404ebbd  61 61 double numerator = cube(cor_length); 62 62 double denominator = square(1 + square(q*cor_length)); 63    63  64 64 return numerator / denominator ; 65 65 """  66   67 def random():  68 import numpy as np  69 pars = dict(  70 scale=10**np.random.uniform(1, 4),  71 cor_length=10**np.random.uniform(0.3, 3),  72 #background = 0,  73 )  74 pars['scale'] /= pars['cor_length']**3  75 return pars 66 76  67 77 # ER defaults to 1.0 -
sasmodels/models/ellipsoid.py
r9b79f29 r92708d8  2 2 # Note: model title and parameter table are inserted automatically 3 3 r""" 4  The form factor is normalized by the particle volume  4 The form factor is normalized by the particle volume 5 5  6 6 Definition … …  112 112 Plenum Press, New York, 1987. 113 113   114 A. Isihara. J. Chem. Phys. 18(1950) 1446-1449  115  114 116 Authorship and Verification 115 117 ---------------------------- … …  119 121 * **Last Modified by:** Paul Kienzle **Date:** March 22, 2017 120 122 """  123 from __future__ import division 121 124  122 125 from numpy import inf, sin, cos, pi … …  161 164 def ER(radius_polar, radius_equatorial): 162 165 import numpy as np 163  # see equation (26) in A.Isihara, J.Chem.Phys. 18(1950)1446-1449 166 # see equation (26) in A.Isihara, J.Chem.Phys. 18(1950)1446-1449 164 167 ee = np.empty_like(radius_polar) 165 168 idx = radius_polar > radius_equatorial … …  181 184 return 0.5 * ddd ** (1.0 / 3.0) 182 185   186 def random():  187 import numpy as np  188 V = 10**np.random.uniform(5, 12)  189 radius_polar = 10**np.random.uniform(1.3, 4)  190 radius_equatorial = np.sqrt(V/radius_polar) # ignore 4/3 pi  191 pars = dict(  192 #background=0, sld=0, sld_solvent=1,  193 radius_polar=radius_polar,  194 radius_equatorial=radius_equatorial,  195 )  196 return pars 183 197  184 198 demo = dict(scale=1, background=0, -
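The ellipsoid generator above similarly holds the product `radius_polar * radius_equatorial**2` fixed at the sampled `V`, with the $4/3\,\pi$ prefactor deliberately dropped per the inline comment. A standalone check (outside sasmodels):

```python
import numpy as np

rng = np.random.RandomState(5)
V = 10**rng.uniform(5, 12)
radius_polar = 10**rng.uniform(1.3, 4)
radius_equatorial = np.sqrt(V/radius_polar)  # ignore 4/3 pi
# Up to the dropped 4/3*pi factor, polar * equatorial**2 recovers V.
assert abs(radius_polar*radius_equatorial**2 - V) < 1e-6*V
```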
sasmodels/models/elliptical_cylinder.py
r9802ab3 rd9ec8f9  33 33  34 34 a = qr'\sin(\alpha) 35    35  36 36 b = q\frac{L}{2}\cos(\alpha) 37    37  38 38 r'=\frac{r_{minor}}{\sqrt{2}}\sqrt{(1+\nu^{2}) + (1-\nu^{2})cos(\psi)} 39 39  … …  57 57 define the axis of the cylinder using two angles $\theta$, $\phi$ and $\Psi$ 58 58 (see :ref:`cylinder orientation <cylinder-angle-definition>`). The angle 59  $\Psi$ is the rotational angle around its own long_c axis.  59 $\Psi$ is the rotational angle around its own long_c axis. 60 60  61 61 All angle parameters are valid and given only for 2D calculation; ie, an … …  72 72 detector plane, with $\Psi$ = 0. 73 73  74  The $\theta$ and $\phi$ parameters to orient the cylinder only appear in the model when fitting 2d data.  74 The $\theta$ and $\phi$ parameters to orient the cylinder only appear in the model when fitting 2d data. 75 75 On introducing "Orientational Distribution" in the angles, "distribution of theta" and "distribution of phi" parameters will 76  appear. These are actually rotations about the axes $\delta_1$ and $\delta_2$ of the cylinder, the $b$ and $a$ axes of the 77  cylinder cross section. (When $\theta = \phi = 0$ these are parallel to the $Y$ and $X$ axes of the instrument.) 78  The third orientation distribution, in $\psi$, is about the $c$ axis of the particle. Some experimentation may be required to 79  understand the 2d patterns fully. (Earlier implementations had numerical integration issues in some circumstances when orientation 80  distributions passed through 90 degrees, such situations, with very broad distributions, should still be approached with care.)  76 appear. These are actually rotations about the axes $\delta_1$ and $\delta_2$ of the cylinder, the $b$ and $a$ axes of the  77 cylinder cross section. (When $\theta = \phi = 0$ these are parallel to the $Y$ and $X$ axes of the instrument.)  78 The third orientation distribution, in $\psi$, is about the $c$ axis of the particle. 
Some experimentation may be required to  79 understand the 2d patterns fully. (Earlier implementations had numerical integration issues in some circumstances when orientation  80 distributions passed through 90 degrees, such situations, with very broad distributions, should still be approached with care.) 81 81  82 82 NB: The 2nd virial coefficient of the cylinder is calculated based on the … …  110 110  111 111 * **Author:** 112  * **Last Modified by:**  112 * **Last Modified by:** 113 113 * **Last Reviewed by:** Richard Heenan - corrected equation in docs **Date:** December 21, 2016 114 114  … …  156 156 + (length + radius) * (length + pi * radius)) 157 157 return 0.5 * (ddd) ** (1. / 3.)  158   159 def random():  160 import numpy as np  161 # V = pi * radius_major * radius_minor * length;  162 V = 10**np.random.uniform(3, 9)  163 length = 10**np.random.uniform(1, 3)  164 axis_ratio = 10**np.random.uniform(0, 2)  165 radius_minor = np.sqrt(V/length/axis_ratio)  166 Vf = 10**np.random.uniform(-4, -2)  167 pars = dict(  168 #background=0, sld=0, sld_solvent=1,  169 scale=1e9*Vf/V,  170 length=length,  171 radius_minor=radius_minor,  172 axis_ratio=axis_ratio,  173 )  174 return pars  175  158 176 q = 0.1 159 177 # april 6 2017, rkh added a 2d unit test, NOT READY YET pull #890 branch assume correct! -
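In the elliptical-cylinder generator above, `radius_minor` is back-solved without the factor of $\pi$ that appears in the commented formula `V = pi * radius_major * radius_minor * length`; what the code actually holds fixed is the bare product of the three dimensions. A standalone check of the code as written (not a claim about which convention is intended):

```python
import numpy as np

rng = np.random.RandomState(7)
V = 10**rng.uniform(3, 9)
length = 10**rng.uniform(1, 3)
axis_ratio = 10**rng.uniform(0, 2)
radius_minor = np.sqrt(V/length/axis_ratio)
radius_major = axis_ratio*radius_minor
# The back-solve gives radius_major * radius_minor * length == V, i.e. it
# omits the pi from the commented formula V = pi * r_major * r_minor * L.
assert abs(radius_major*radius_minor*length - V) < 1e-6*V
```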
sasmodels/models/fcc_paracrystal.py
r69e1afc r8f04da4  78 78 .. figure:: img/parallelepiped_angle_definition.png 79 79  80  Orientation of the crystal with respect to the scattering plane, when  80 Orientation of the crystal with respect to the scattering plane, when 81 81 $\theta = \phi = 0$ the $c$ axis is along the beam direction (the $z$ axis). 82 82  … …  119 119 source = ["lib/sas_3j1x_x.c", "lib/gauss150.c", "lib/sphere_form.c", "fcc_paracrystal.c"] 120 120  121  # parameters for demo 122  demo = dict(scale=1, background=0, 123  dnn=220, d_factor=0.06, sld=4, sld_solvent=1, 124  radius=40, 125  theta=60, phi=60, psi=60, 126  radius_pd=.2, radius_pd_n=0.2, 127  theta_pd=15, theta_pd_n=0, 128  phi_pd=15, phi_pd_n=0, 129  psi_pd=15, psi_pd_n=0, 130  )  121 def random():  122 import numpy as np  123 # copied from bcc_paracrystal  124 radius = 10**np.random.uniform(1.3, 4)  125 d_factor = 10**np.random.uniform(-2, -0.7) # sigma_d in 0.01-0.7  126 dnn_fraction = np.random.beta(a=10, b=1)  127 dnn = radius*4/np.sqrt(2)/dnn_fraction  128 pars = dict(  129 #sld=1, sld_solvent=0, scale=1, background=1e-32,  130 dnn=dnn,  131 d_factor=d_factor,  132 radius=radius,  133 )  134 return pars  135  131 136 # april 10 2017, rkh add unit tests, NOT compared with any other calc method, assume correct! 132  q = 4.*pi/220. 137 q = 4.*pi/220. 133 138 tests = [ 134  [{ }, 135  [0.001, q, 0.215268], [0.275164706668, 5.7776842567, 0.00958167119232]], 136  [{}, (-0.047,-0.007), 238.103096286], 137  [{}, (0.053,0.063), 0.863609587796 ],  139 [{}, [0.001, q, 0.215268], [0.275164706668, 5.7776842567, 0.00958167119232]],  140 [{}, (-0.047, -0.007), 238.103096286],  141 [{}, (0.053, 0.063), 0.863609587796], 138 142 ] -
sasmodels/models/flexible_cylinder.py
r42356c8 r2573fa1  86 86 source = ["lib/polevl.c", "lib/sas_J1.c", "lib/wrc_cyl.c", "flexible_cylinder.c"] 87 87  88  demo = dict(scale=1.0, background=0.0001, 89  length=1000.0, 90  kuhn_length=100.0, 91  radius=20.0, 92  sld=1.0, 93  sld_solvent=6.3)  88 def random():  89 import numpy as np  90 length = 10**np.random.uniform(2, 6)  91 radius = 10**np.random.uniform(1, 3)  92 kuhn_length = 10**np.random.uniform(-2, 0)*length  93 pars = dict(  94 length=length,  95 radius=radius,  96 kuhn_length=kuhn_length,  97 )  98 return pars 94 99  95 100 tests = [ 96 101 # Accuracy tests based on content in test/utest_other_models.py 97  # Currently fails in OCL 98  # [{'length': 1000.0, 99  # 'kuhn_length': 100.0, 100  # 'radius': 20.0, 101  # 'sld': 1.0, 102  # 'sld_solvent': 6.3, 103  # 'background': 0.0001, 104  # }, 0.001, 3509.2187],  102 [{'length': 1000.0, # test T1  103 'kuhn_length': 100.0,  104 'radius': 20.0,  105 'sld': 1.0,  106 'sld_solvent': 6.3,  107 'background': 0.0001,  108 }, 0.001, 3509.2187], 105 109  106 110 # Additional tests with larger range of parameters 107  [{'length': 1000.0,  111 [{'length': 1000.0, # test T2 108 112 'kuhn_length': 100.0, 109 113 'radius': 20.0, … …  112 116 'background': 0.0001, 113 117 }, 1.0, 0.000595345], 114  [{'length': 10.0,  118 [{'length': 10.0, # test T3 115 119 'kuhn_length': 800.0, 116 120 'radius': 2.0, … …  119 123 'background': 0.001, 120 124 }, 0.1, 1.55228], 121  [{'length': 100.0,  125 [{'length': 100.0, # test T4 122 126 'kuhn_length': 800.0, 123 127 'radius': 50.0, … …  128 132 ] 129 133   134 # There are a few branches in the code that ought to have test values:  135 #  136 # For length > 4 * kuhn_length  137 # if length > 10 * kuhn_length then C is scaled by 3.06 (L/b)^(-0.44)  138 # q*kuhn_length <= 3.1 => Sexv_new  139 # dS/dQ < 0 has different behaviour from dS/dQ >= 0  140 # T2 q*kuhn_length > 3.1 => a_long  141 #  142 # For length <= 4 * kuhn_length  143 # q*kuhn_length <= max(1.9/Rg_short, 3.0) => 
Sdebye((q*Rg)^2)  144 # q*Rg < 0.5 uses Pade approx, q*Rg > 1.0 uses math lib  145 # T3,T4 q*kuhn_length > max(1.9/Rg_short, 3.0) => a_short  146 #  147 # Note that the transitions between branches may be abrupt. You can see a  148 # several percent change around length=10*kuhn_length and length=4*kuhn_length  149 # using the following:  150 #  151 # sascomp flexible_cylinder -calc=double -sets=10 length=10*kuhn_length,10.000001*kuhn_length  152 # sascomp flexible_cylinder -calc=double -sets=10 length=4*kuhn_length,4.000001*kuhn_length  153 #  154 # The transition between low q and high q around q*kuhn_length = 3 seems  155 # to be good to 4 digits or better. This was tested by computing the value  156 # on each branches near the transition point and reporting the relative error  157 # for kuhn lengths of 10, 100 and 1000 and a variety of length:kuhn_length  158 # ratios. -
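The comment block above notes that the Debye-function branch uses a Padé approximation for small q*Rg and the math library form above it. The coefficients below are the [4/4] Padé approximant quoted in lib/wrc_cyl.c in this changeset; this Python sketch (an illustration, not the shipped C kernel) checks that it agrees with the exact `expm1` form to well within the quoted tolerance below the double-precision cutoff `qsq < 0.25`:

```python
import numpy as np

def sdebye(qsq, cutoff=0.25):
    """Debye function 2*(exp(-x) + x - 1)/x^2 of x = (q*Rg)^2.

    Below the cutoff the [4/4] Pade approximant from lib/wrc_cyl.c
    avoids cancellation; above it the expm1 form is accurate.
    """
    x = np.asarray(qsq, dtype=float)
    # numpy.polyval takes coefficients highest degree first
    num = np.polyval([1./75600., 0., 1./60., 1./15., 1.], x)
    den = np.polyval([1./5040., 1./180., 1./15., 2./5., 1.], x)
    exact = 2.*(np.expm1(-x) + x)/(x*x)
    return np.where(x < cutoff, num/den, exact)
```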
sasmodels/models/flexible_cylinder_elliptical.py
r40a87fa r31df0c9  112 112 "flexible_cylinder_elliptical.c"] 113 113  114  demo = dict(scale=1.0, background=0.0001, 115  length=1000.0, 116  kuhn_length=100.0, 117  radius=20.0, 118  axis_ratio=1.5, 119  sld=1.0, 120  sld_solvent=6.3)  114 def random():  115 import numpy as np  116 length = 10**np.random.uniform(2, 6)  117 radius = 10**np.random.uniform(1, 3)  118 axis_ratio = 10**np.random.uniform(-1, 1)  119 kuhn_length = 10**np.random.uniform(-2, -0.7)*length # at least 10 segments  120 pars = dict(  121 length=length,  122 radius=radius,  123 axis_ratio=axis_ratio,  124 kuhn_length=kuhn_length,  125 )  126 return pars 121 127  122 128 tests = [ -
sasmodels/models/fractal.py
rdf89d77 r1511c37c  27 27  28 28 where $\xi$ is the correlation length representing the cluster size and $D_f$ 29  is the fractal dimension, representing the self similarity of the structure. 30  Note that S(q) here goes negative if $D_f$ is too large, and the Gamma function  29 is the fractal dimension, representing the self similarity of the structure.  30 Note that S(q) here goes negative if $D_f$ is too large, and the Gamma function 31 31 diverges at $D_f=0$ and $D_f=1$. 32 32  … …  55 55  56 56 """  57 from __future__ import division 57 58  58 59 from numpy import inf … …  98 99 source = ["lib/sas_3j1x_x.c", "lib/sas_gamma.c", "lib/fractal_sq.c", "fractal.c"] 99 100   101 def random():  102 import numpy as np  103 radius = 10**np.random.uniform(0.7, 4)  104 #radius = 5  105 cor_length = 10**np.random.uniform(0.7, 2)*radius  106 #cor_length = 20*radius  107 volfraction = 10**np.random.uniform(-3, -1)  108 #volfraction = 0.05  109 fractal_dim = 2*np.random.beta(3, 4) + 1  110 #fractal_dim = 2  111 pars = dict(  112 #background=0, sld_block=1, sld_solvent=0,  113 volfraction=volfraction,  114 radius=radius,  115 cor_length=cor_length,  116 fractal_dim=fractal_dim,  117 )  118 return pars  119  100 120 demo = dict(volfraction=0.05, 101 121 radius=5.0, -
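The `fractal_dim` draw in the `random()` above, `2*Beta(3, 4) + 1`, keeps $D_f$ strictly inside (1, 3); the docstring notes that S(q) goes negative if $D_f$ is too large and that the Gamma function diverges at $D_f=0$ and $D_f=1$, so the sampler deliberately avoids those points. A quick check, illustrative only (arbitrary seed):

```python
import numpy as np

# Sample the fractal dimension the same way fractal.random() does and
# confirm it stays inside the physically sensible open interval (1, 3).
rng = np.random.RandomState(0)
fractal_dim = 2*rng.beta(3, 4, size=10000) + 1
# Beta(3, 4) has mean 3/7, so fractal_dim is centred near 1 + 6/7 ~ 1.86.
```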
sasmodels/models/fractal_core_shell.py
r925ad6e rca04add  22 22 \frac{\sin(qr_c)-qr_c\cos(qr_c)}{(qr_c)^3}+ 23 23 3V_s(\rho_s-\rho_{solv}) 24  \frac{\sin(qr_s)-qr_s\cos(qr_s)}{(qr_s)^3}\right]^2 25   26  S(q) &= 1 + \frac{D_f\ \Gamma\!(D_f-1)}{[1+1/(q\xi)^2]^{(D_f-1)/2}}   24 \frac{\sin(qr_s)-qr_s\cos(qr_s)}{(qr_s)^3}\right]^2 \\  25 S(q) &= 1 + \frac{D_f\ \Gamma\!(D_f-1)}{[1+1/(q\xi)^2]^{(D_f-1)/2}} 27 26 \frac{\sin[(D_f-1)\tan^{-1}(q\xi)]}{(qr_s)^{D_f}} 28 27  … …  31 30 $\rho_{solv}$ are the scattering length densities of the core, shell, and 32 31 solvent respectively, $r_c$ and $r_s$ are the radius of the core and the radius 33  of the whole particle respectively, $D_f$ is the fractal dimension, and |xi|the 32 of the whole particle respectively, $D_f$ is the fractal dimension, and $\xi$ the 34 33 correlation length. 35    34  36 35 Polydispersity of radius and thickness are also provided for. 37 36  … …  98 97 "lib/fractal_sq.c", "fractal_core_shell.c"]  99 98   99 def random():  100 import numpy as np  101 outer_radius = 10**np.random.uniform(0.7, 4)  102 # Use a distribution with a preference for thin shell or thin core  103 # Avoid core,shell radii < 1  104 thickness = np.random.beta(0.5, 0.5)*(outer_radius-2) + 1  105 radius = outer_radius - thickness  106 cor_length = 10**np.random.uniform(0.7, 2)*outer_radius  107 volfraction = 10**np.random.uniform(-3, -1)  108 fractal_dim = 2*np.random.beta(3, 4) + 1  109 pars = dict(  110 #background=0, sld_block=1, sld_solvent=0,  111 volfraction=volfraction,  112 radius=radius,  113 thickness=thickness,  114 cor_length=cor_length,  115 fractal_dim=fractal_dim,  116 )  117 return pars  118  100 119 demo = dict(scale=0.05, 101 120 background=0, -
sasmodels/models/fuzzy_sphere.py
r925ad6e r31df0c9  105 105 # VR defaults to 1.0 106 106   107 def random():  108 import numpy as np  109 radius = 10**np.random.uniform(1, 4.7)  110 fuzziness = 10**np.random.uniform(-2, -0.5)*radius # 1% to 31% fuzziness  111 pars = dict(  112 radius=radius,  113 fuzziness=fuzziness,  114 )  115 return pars  116  107 117 demo = dict(scale=1, background=0.001, 108 118 sld=1, sld_solvent=3, -
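The `random()` above ties the fuzziness to the radius through a factor `10**U(-2, -0.5)`; as the inline comment says, that corresponds to roughly 1% to 31% interfacial fuzziness. A short check of that range, illustrative only (arbitrary seed):

```python
import numpy as np

# Draw (radius, fuzziness) pairs as in fuzzy_sphere.random() and verify
# the fuzziness/radius ratio spans about 10**-2 to 10**-0.5 (1% to 31%).
rng = np.random.RandomState(0)
radius = 10**rng.uniform(1, 4.7, size=5000)
fuzziness = 10**rng.uniform(-2, -0.5, size=5000)*radius
ratio = fuzziness/radius
```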
sasmodels/models/gauss_lorentz_gel.py
ra807206 r48462b0  3 3 but typically a physical rather than chemical network. 4 4 It is modeled as a sum of a low-q exponential decay (which happens to 5  give a functional form similar to Guinier scattering, so interpret with  5 give a functional form similar to Guinier scattering, so interpret with 6 6 care) plus a Lorentzian at higher-q values. See also the gel_fit model. 7 7  … …  88 88  89 89   90 def random():  91 import numpy as np  92 gauss_scale = 10**np.random.uniform(1, 3)  93 lorentz_scale = 10**np.random.uniform(1, 3)  94 cor_length_static = 10**np.random.uniform(0, 3)  95 cor_length_dynamic = 10**np.random.uniform(0, 3)  96 pars = dict(  97 #background=0,  98 scale=1,  99 gauss_scale=gauss_scale,  100 lorentz_scale=lorentz_scale,  101 cor_length_static=cor_length_static,  102 cor_length_dynamic=cor_length_dynamic,  103 )  104 return pars  105   106  90 107 demo = dict(scale=1, background=0.1, 91 108 gauss_scale=100.0, -
sasmodels/models/gaussian_peak.py
ra807206 r48462b0  51 51 # VR defaults to 1.0 52 52  53  demo = dict(scale=1, background=0, peak_pos=0.05, sigma=0.005)  53 def random():  54 import numpy as np  55 peak_pos = 10**np.random.uniform(-3, -1)  56 sigma = 10**np.random.uniform(-1.3, -0.3)*peak_pos  57 scale = 10**np.random.uniform(0, 4)  58 pars = dict(  59 #background=1e-8,  60 scale=scale,  61 peak_pos=peak_pos,  62 sigma=sigma,  63 )  64 return pars -
sasmodels/models/gel_fit.c
ra807206 r48462b0  10 10 double lorentzian_term = square(q*cor_length); 11 11 lorentzian_term = 1.0 + ((fractal_dim + 1.0)/3.0)*lorentzian_term; 12  lorentzian_term = pow(lorentzian_term, (fractal_dim/2.0)); 12 lorentzian_term = pow(lorentzian_term, fractal_dim/2.0); 13 13  14 14 // Exponential Term 15 15 ////////////////////////double d(x[i]*x[i]*rg*rg); 16 16 double exp_term = square(q*rg); 17  exp_term = exp(- 1.0*(exp_term/3.0)); 17 exp_term = exp(-exp_term/3.0); 18 18  19 19 // Scattering Law -
sasmodels/models/gel_fit.py
ra807206 r48462b0  71 71 source = ["gel_fit.c"] 72 72   73 def random():  74 import numpy as np  75 guinier_scale = 10**np.random.uniform(1, 3)  76 lorentz_scale = 10**np.random.uniform(1, 3)  77 rg = 10**np.random.uniform(1, 5)  78 fractal_dim = np.random.uniform(0, 6)  79 cor_length = 10**np.random.uniform(0, 3)  80 pars = dict(  81 #background=0,  82 scale=1,  83 guinier_scale=guinier_scale,  84 lorentz_scale=lorentz_scale,  85 rg=rg,  86 fractal_dim=fractal_dim,  87 cor_length=cor_length  88 )  89 return pars  90  73 91 demo = dict(background=0.01, 74 92 guinier_scale=1.7, -
sasmodels/models/guinier.py
r3330bb4 r48462b0  33 33 description = """ 34 34 I(q) = scale.exp ( - rg^2 q^2 / 3.0 ) 35    35  36 36 List of default parameters: 37 37 scale = scale … …  49 49 """ 50 50   51 def random():  52 import numpy as np  53 scale = 10**np.random.uniform(1, 4)  54 # Note: compare.py has Rg cutoff of 1e-30 at q=1 for guinier, so use that  55 # log_10 Ae^(-(q Rg)^2/3) = log_10(A) - (q Rg)^2/ (3 ln 10) > -30  56 # => log_10(A) > Rg^2/(3 ln 10) - 30  57 q_max = 1.0  58 rg_max = np.sqrt(90*np.log(10) + 3*np.log(scale))/q_max  59 rg = 10**np.random.uniform(0, np.log10(rg_max))  60 pars = dict(  61 #background=0,  62 scale=scale,  63 rg=rg,  64 )  65 return pars  66  51 67 # parameters for demo 52 68 demo = dict(scale=1.0, rg=60.0) -
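The `rg_max` expression in the `random()` above follows from the comment's inequality: requiring `log10(A exp(-(q Rg)^2/3)) > -30` at `q_max = 1` gives `Rg^2 < 90 ln10 + 3 ln A`, and at the bound the Guinier intensity lands exactly on the 1e-30 floor used by compare.py. A numerical check of that algebra, illustrative only:

```python
import numpy as np

def guinier_at_cutoff(scale, q_max=1.0):
    """Guinier intensity at q_max with Rg at the rg_max bound from
    guinier.random(); should equal the 1e-30 floor for any scale."""
    rg_max = np.sqrt(90*np.log(10) + 3*np.log(scale))/q_max
    return scale*np.exp(-(q_max*rg_max)**2/3)
```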
sasmodels/models/guinier_porod.py
rcdcebf1 r48462b0  4 4 and dimensionality of scattering objects, including asymmetric objects 5 5 such as rods or platelets, and shapes intermediate between spheres 6  and rods or between rods and platelets, and overcomes some of the  6 and rods or between rods and platelets, and overcomes some of the 7 7 deficiencies of the (Beaucage) Unified_Power_Rg model (see Hammouda, 2010). 8 8  … …  77 77 scale = Guinier Scale 78 78 s = Dimension Variable 79  Rg = Radius of Gyration [A]  79 Rg = Radius of Gyration [A] 80 80 porod_exp = Porod Exponent 81 81 background = Background [1/cm]""" … …  114 114 Iq.vectorized = True # Iq accepts an array of q values 115 115   116 def random():  117 import numpy as np  118 rg = 10**np.random.uniform(1, 5)  119 s = np.random.uniform(0, 3)  120 porod_exp = s + np.random.uniform(0, 3)  121 pars = dict(  122 #scale=1, background=0,  123 rg=rg,  124 s=s,  125 porod_exp=porod_exp,  126 )  127 return pars  128  116 129 demo = dict(scale=1.5, background=0.5, rg=60, s=1.0, porod_exp=3.0) 117 130  -
sasmodels/models/hardsphere.py
r40a87fa r8f04da4  75 75 Iq = r""" 76 76 double D,A,B,G,X,X2,X4,S,C,FF,HARDSPH; 77  // these are c compiler instructions, can also put normal code inside the "if else" structure  77 // these are c compiler instructions, can also put normal code inside the "if else" structure 78 78 #if FLOAT_SIZE > 4 79 79 // double precision orig had 0.2, don't call the variable cutoff as PAK already has one called that! Must use UPPERCASE name please. … …  82 82 #else 83 83 // 0.1 bad, 0.2 OK, 0.3 good, 0.4 better, 0.8 no good 84  #define CUTOFFHS 0.4  84 #define CUTOFFHS 0.4 85 85 #endif 86 86  … …  110 110 if(X < CUTOFFHS) { 111 111 // RKH Feb 2016, use Taylor series expansion for small X 112  // else no obvious way to rearrange the equations to avoid needing a very high number of significant figures. 113  // Series expansion found using Mathematica software. Numerical test in .xls showed terms to X^2 are sufficient 114  // for 5 or 6 significant figures, but I put the X^4 one in anyway  112 // else no obvious way to rearrange the equations to avoid needing a very high number of significant figures.  113 // Series expansion found using Mathematica software. Numerical test in .xls showed terms to X^2 are sufficient  114 // for 5 or 6 significant figures, but I put the X^4 one in anyway 115 115 //FF = 8*A +6*B + 4*G - (0.8*A +2.0*B/3.0 +0.5*G)*X2 +(A/35. +B/40. +G/50.)*X4; 116 116 // refactoring the polynomial makes it very slightly faster (0.5 not 0.6 msec) … …  153 153 """ 154 154   155 def random():  156 import numpy as np  157 pars = dict(  158 scale=1, background=0,  159 radius_effective=10**np.random.uniform(1, 4),  160 volfraction=10**np.random.uniform(-2, 0), # high volume fraction  161 )  162 return pars  163  155 164 # ER defaults to 0.0 156 165 # VR defaults to 1.0 -
sasmodels/models/hayter_msa.py
r5702f52 r8f04da4  64 64 Routine takes absolute value of charge, use HardSphere if charge 65 65 goes to zero. 66  In sasview the effective radius and volume fraction may be calculated  66 In sasview the effective radius and volume fraction may be calculated 67 67 from the parameters used in P(Q). 68 68 """ … …  89 89 # ER defaults to 0.0 90 90 # VR defaults to 1.0  91   92 def random():  93 import numpy as np  94 # TODO: too many failures for random hayter_msa parameters  95 pars = dict(  96 scale=1, background=0,  97 radius_effective=10**np.random.uniform(1, 4.7),  98 volfraction=10**np.random.uniform(-2, 0), # high volume fraction  99 charge=min(int(10**np.random.uniform(0, 1.3)+0.5), 200),  100 temperature=10**np.random.uniform(0, np.log10(450)), # max T = 450  101 #concentration_salt=10**np.random.uniform(-3, 1),  102 dialectconst=10**np.random.uniform(0, 6),  103 #charge=10,  104 #temperature=318.16,  105 concentration_salt=0.0,  106 #dielectconst=71.08,  107 )  108 return pars 91 109  92 110 # default parameter set, use compare.sh -midQ -linear -
sasmodels/models/hollow_cylinder.py
rf102a96 r8f04da4  54 54  55 55 * **Author:** NIST IGOR/DANSE **Date:** pre 2010 56  * **Last Modified by:** Richard Heenan **Date:** October 06, 2016  56 * **Last Modified by:** Richard Heenan **Date:** October 06, 2016 57 57 (reparametrised to use thickness, not outer radius) 58 58 * **Last Reviewed by:** Richard Heenan **Date:** October 06, 2016 … …  121 121 return vol_shell, vol_total 122 122   123 def random():  124 import numpy as np  125 length = 10**np.random.uniform(1, 4.7)  126 outer_radius = 10**np.random.uniform(1, 4.7)  127 # Use a distribution with a preference for thin shell or thin core  128 # Avoid core,shell radii < 1  129 thickness = np.random.beta(0.5, 0.5)*(outer_radius-2) + 1  130 radius = outer_radius - thickness  131 pars = dict(  132 length=length,  133 radius=radius,  134 thickness=thickness,  135 )  136 return pars  137  123 138 # parameters for demo 124 139 demo = dict(scale=1.0, background=0.0, length=400.0, radius=20.0, -
sasmodels/models/hollow_rectangular_prism.py
rab2aea8 r30b60d2  31 31 :nowrap: 32 32  33  \begin{align } 33 \begin{align*} 34 34 A_{P\Delta}(q) & = A B C 35 35 \left[\frac{\sin \bigl( q \frac{C}{2} \cos\theta \bigr)} … …  47 47 \left[ \frac{\sin \bigl[ q \bigl(\frac{B}{2}-\Delta\bigr) \sin\theta \cos\phi \bigr]} 48 48 {q \bigl(\frac{B}{2}-\Delta\bigr) \sin\theta \cos\phi} \right] 49  \end{align } 49 \end{align*} 50 50  51 51 where $A$, $B$ and $C$ are the external sides of the parallelepiped fulfilling … …  146 146  147 147   148 def random():  149 import numpy as np  150 a, b, c = 10**np.random.uniform(1, 4.7, size=3)  151 # Thickness is limited to 1/2 the smallest dimension  152 # Use a distribution with a preference for thin shell or thin core  153 # Avoid core,shell radii < 1  154 min_dim = 0.5*min(a, b, c)  155 thickness = np.random.beta(0.5, 0.5)*(min_dim-2) + 1  156 #print(a, b, c, thickness, thickness/min_dim)  157 pars = dict(  158 length_a=a,  159 b2a_ratio=b/a,  160 c2a_ratio=c/a,  161 thickness=thickness,  162 )  163 return pars  164   165  148 166 # parameters for demo 149 167 demo = dict(scale=1, background=0, -
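The thickness draw above, `Beta(0.5, 0.5)*(min_dim - 2) + 1`, implements both commented constraints: the U-shaped Beta(0.5, 0.5) prefers thin-shell or thin-core cases, while the affine map keeps `1 <= thickness <= min_dim - 1`, i.e. never more than half the smallest edge. A quick check, illustrative only (arbitrary seed):

```python
import numpy as np

# Reproduce the shell-thickness draw from hollow_rectangular_prism.random()
# and verify the documented bounds.
rng = np.random.RandomState(0)
a, b, c = 10**rng.uniform(1, 4.7, size=3)
min_dim = 0.5*min(a, b, c)                  # half the smallest edge
thickness = rng.beta(0.5, 0.5, size=2000)*(min_dim - 2) + 1
```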
sasmodels/models/hollow_rectangular_prism_thin_walls.py
rab2aea8 r31df0c9  127 127  128 128   129 def random():  130 import numpy as np  131 a, b, c = 10**np.random.uniform(1, 4.7, size=3)  132 pars = dict(  133 length_a=a,  134 b2a_ratio=b/a,  135 c2a_ratio=c/a,  136 )  137 return pars  138   139  129 140 # parameters for demo 130 141 demo = dict(scale=1, background=0, -
sasmodels/models/lamellar.py
r40a87fa r1511c37c  89 89 """ 90 90   91 def random():  92 import numpy as np  93 thickness = 10**np.random.uniform(1, 4)  94 pars = dict(  95 thickness=thickness,  96 )  97 return pars  98  91 99 # ER defaults to 0.0 92 100 # VR defaults to 1.0 -
sasmodels/models/lamellar_hg.py
ra807206 r1511c37c  98 98 """ 99 99   100 def random():  101 import numpy as np  102 thickness = 10**np.random.uniform(1, 4)  103 length_head = thickness * np.random.uniform(0, 1)  104 length_tail = thickness - length_head  105 pars = dict(  106 length_head=length_head,  107 length_tail=length_tail,  108 )  109 return pars  110  100 111 # ER defaults to 0.0 101 112 # VR defaults to 1.0 -
sasmodels/models/lamellar_hg_stack_caille.py
ra57b31d r1511c37c  123 123 # VR defaults to 1.0 124 124   125 def random():  126 import numpy as np  127 total_thickness = 10**np.random.uniform(2, 4.7)  128 Nlayers = np.random.randint(2, 200)  129 d_spacing = total_thickness / Nlayers  130 thickness = d_spacing * np.random.uniform(0, 1)  131 length_head = thickness * np.random.uniform(0, 1)  132 length_tail = thickness - length_head  133 Caille_parameter = np.random.uniform(0, 0.8)  134 pars = dict(  135 length_head=length_head,  136 length_tail=length_tail,  137 Nlayers=Nlayers,  138 d_spacing=d_spacing,  139 Caille_parameter=Caille_parameter,  140 )  141 return pars  142  125 143 demo = dict( 126 144 scale=1, background=0, -
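The `random()` above builds a self-consistent stack geometry: `Nlayers` repeats of spacing `d_spacing` fill `total_thickness`, the bilayer thickness fits within one repeat, and `length_head` plus `length_tail` partition it exactly. A short consistency check, illustrative only (arbitrary seed):

```python
import numpy as np

# Reproduce the stack geometry from lamellar_hg_stack_caille.random()
# and check its internal constraints.
rng = np.random.RandomState(0)
total_thickness = 10**rng.uniform(2, 4.7)
Nlayers = rng.randint(2, 200)               # 2..199 inclusive
d_spacing = total_thickness / Nlayers
thickness = d_spacing * rng.uniform(0, 1)   # bilayer fits in one repeat
length_head = thickness * rng.uniform(0, 1)
length_tail = thickness - length_head
```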
sasmodels/models/lamellar_stack_caille.py
ra57b31d r1511c37c  98 98 source = ["lamellar_stack_caille.c"] 99 99   100 def random():  101 import numpy as np  102 total_thickness = 10**np.random.uniform(2, 4.7)  103 Nlayers = np.random.randint(2, 200)  104 d_spacing = total_thickness / Nlayers  105 thickness = d_spacing * np.random.uniform(0, 1)  106 Caille_parameter = np.random.uniform(0, 0.8)  107 pars = dict(  108 thickness=thickness,  109 Nlayers=Nlayers,  110 d_spacing=d_spacing,  111 Caille_parameter=Caille_parameter,  112 )  113 return pars  114  100 115 # No volume normalization despite having a volume parameter 101 116 # This should perhaps be volume normalized? -
sasmodels/models/lamellar_stack_paracrystal.py
ra0168e8 r8f04da4  130 130 form_volume = """ 131 131 return 1.0; 132  """  132 """  133   134 def random():  135 import numpy as np  136 total_thickness = 10**np.random.uniform(2, 4.7)  137 Nlayers = np.random.randint(2, 200)  138 d_spacing = total_thickness / Nlayers  139 thickness = d_spacing * np.random.uniform(0, 1)  140 # Let polydispersity peak around 15%; 95% < 0.4; max=100%  141 sigma_d = np.random.beta(1.5, 7)  142 pars = dict(  143 thickness=thickness,  144 Nlayers=Nlayers,  145 d_spacing=d_spacing,  146 sigma_d=sigma_d,  147 )  148 return pars 133 149  134 150 # ER defaults to 0.0 -
sasmodels/models/lib/wrc_cyl.c
r92ce163 r18a2bfc  2 2 Functions for WRC implementation of flexible cylinders 3 3 */ 4  double Sk_WR(double q, double L, double b); 5   6   7  static double 8  AlphaSquare(double x) 9  { 10  // Potentially faster. Needs proper benchmarking. 11  // add native_powr to kernel_template 12  //double t = native_powr( (1.0 + (x/3.12)*(x/3.12) + 13  // (x/8.67)*(x/8.67)*(x/8.67)),(0.176/3.0) ); 14  //return t; 15   16  return pow(1.0+square(x/3.12)+cube(x/8.67), 0.176/3.0); 17  } 18   19  // 20  static double 21  Rgsquarezero(double q, double L, double b) 22  { 23  const double r = b/L; 24  return (L*b/6.0) * (1.0 + r*(-1.5 + r*(1.5 + r*0.75*expm1(-2.0/r)))); 25   26  } 27   28  // 29  static double 30  Rgsquareshort(double q, double L, double b) 31  { 32  return AlphaSquare(L/b) * Rgsquarezero(q,L,b); 33  } 34   35  // 36  static double 37  Rgsquare(double q, double L, double b) 38  { 39  return AlphaSquare(L/b)*L*b/6.0; 40  } 41   42  static double 43  sech_WR(double x) 44  { 45  return(1/cosh(x)); 46  } 47   48  static double 49  a1long(double q, double L, double b, double p1, double p2, double q0) 50  { 51  double C; 52   53  if( L/b > 10.0) { 54  C = 3.06/pow((L/b),0.44);  4   5 static double  6 Rgsquare(double L, double b)  7 {  8 const double x = L/b;  9 // Use Horner's method to evaluate Pedersen eq 15:  10 // alpha^2 = [1.0 + (x/3.12)^2 + (x/8.67)^3] ^ (0.176/3)  11 const double alphasq =  12 pow(1.0 + x*x*(1.027284681130835e-01 + 1.534414548417740e-03*x),  13 5.866666666666667e-02);  14 return alphasq*L*b/6.0;  15 }  16   17 static double  18 Rgsquareshort(double L, double b)  19 {  20 const double r = b/L; // = 1/n_b in Pedersen ref.  21 return Rgsquare(L, b) * (1.0 + r*(-1.5 + r*(1.5 + r*0.75*expm1(-2.0/r))));  22 }  23   24 static double  25 w_WR(double x)  26 {  27 // Pedersen eq. 
16:  28 // w = [1 + tanh((x-C4)/C5)]/2  29 const double C4 = 1.523;  30 const double C5 = 0.1477;  31 return 0.5 + 0.5*tanh((x - C4)/C5);  32 }  33   34 static double  35 Sdebye(double qsq)  36 {  37 #if FLOAT_SIZE>4  38 # define DEBYE_CUTOFF 0.25 // 1e-15 error  39 #else  40 # define DEBYE_CUTOFF 1.0 // 4e-7 error  41 #endif  42   43 if (qsq < DEBYE_CUTOFF) {  44 const double x = qsq;  45 // mathematica: PadeApproximant[2*Exp[-x^2] + x^2-1)/x^4, {x, 0, 8}]  46 const double A1=1./15., A2=1./60, A3=0., A4=1./75600.;  47 const double B1=2./5., B2=1./15., B3=1./180., B4=1./5040.;  48 return ((((A4*x + A3)*x + A2)*x + A1)*x + 1.)  49 / ((((B4*x + B3)*x + B2)*x + B1)*x + 1.); 55 50 } else { 56  C = 1.0; 51 return 2.*(expm1(-qsq) + qsq)/(qsq*qsq); 57 52 } 58    53 }  54   55 static double  56 a_long(double q, double L, double b/*, double p1, double p2, double q0*/)  57 {  58 const double p1 = 4.12;  59 const double p2 = 4.42;  60 const double q0 = 3.1;  61   62 // Constants C1, ..., C5 derived from least squares fit (Pedersen, eq 13,16) 59 63 const double C1 = 1.22; 60 64 const double C2 = 0.4288; … …  64 68 const double miu = 0.585; 65 69  66  const double Rg2 = Rgsquare(q,L,b); 67  const double Rg22 = Rg2*Rg2; 68  const double Rg = sqrt(Rg2); 69  const double Rgb = Rg*q0/b; 70   71  const double b2 = b*b;  70 const double C = (L/b>10.0 ? 
3.06*pow(L/b, -0.44) : 1.0);  71 //printf("branch B-%d q=%g L=%g b=%g\n", C==1.0, q, L, b);  72 const double r2 = Rgsquare(L,b);  73 const double r = sqrt(r2);  74 const double qr_b = q0*r/b;  75 const double qr_b_sq = qr_b*qr_b;  76 const double qr_b_4 = qr_b_sq*qr_b_sq;  77 const double qr_b_miu = pow(qr_b, -1.0/miu);  78 const double em1_qr_b_sq = expm1(-qr_b_sq);  79 const double sech2 = 1.0/square(cosh((qr_b-C4)/C5));  80 const double w = w_WR(qr_b);  81   82 const double t1 = pow(q0, 1.0 + p1 + p2)/(b*(p1-p2));  83 const double t2 = C/(15.0*L) * (  84 + 14.0*b*b*em1_qr_b_sq/(q0*qr_b_sq)  85 + 2.0*q0*r2*exp(-qr_b_sq)*(11.0 + 7.0/qr_b_sq));  86 const double t11 = ((C3*qr_b_miu + C2)*qr_b_miu + C1)*qr_b_miu;  87 const double t3 = r*sech2/(2.*C5)*t11;  88 const double t4 = r*(em1_qr_b_sq + qr_b_sq)*sech2 / (C5*qr_b_4);  89 const double t5 = -4.0*r*qr_b*em1_qr_b_sq/qr_b_4 * (1.0 - w);  90 const double t10 = 2.0*(em1_qr_b_sq + qr_b_sq)/qr_b_4 * (1.0 - w); //=Sdebye*(1-w)  91 const double t6 = 4.0*b/q0 * t10;  92 const double t7 = r*((-3.0*C3*qr_b_miu -2.0*C2)*qr_b_miu -1.0*C1)*qr_b_miu/(miu*qr_b);  93 const double t9 = C*b/L * (4.0 - exp(-qr_b_sq) * (11.0 + 7.0/qr_b_sq) + 7.0/qr_b_sq)/15.0;  94 const double t12 = b*b*M_PI/(L*q0*q0) + t2 + t3 - t4 + t5 - t6 + t7*w;  95 const double t13 = -b*M_PI/(L*q0) + t9 + t10 + t11*w;  96   97 const double a1 = pow(q0,p1)*t13 - t1*pow(q0,-p2)*(t12 + b*p1/q0*t13);  98 const double a2 = t1*pow(q0,-p1)*(t12 + b*p1/q0*t13);  99   100 const double ans = a1*pow(q*b, -p1) + a2*pow(q*b, -p2) + M_PI/(q*L);  101 return ans;  102 }  103   104 static double  105 _short(double r2, double exp_qr_b, double L, double b, double p1short,  106 double p2short, double q0)  107 {  108 const double qr2 = q0*q0 * r2; 72 109 const double b3 = b*b*b; 73  const double b4 = b3*b; 74  const double q02 = q0*q0; 75  const double q03 = q0*q0*q0; 76  const double q04 = q03*q0; 77  const double q05 = q04*q0; 78   79  const double Rg02 = Rg2*q02; 80   81  const 
double t1 = (b*C*((4.0/15.0 - pow((double)M_E,(-(Rg02/b2))) * 82  ((11.0/15.0 + (7.0*b2)/(15.0*Rg02))) + 83  (7.0*b2)/(15.0*Rg02)))); 84   85  const double t2 = (2.0*b4*(((-1.0) + pow((double)M_E,(-(Rg02/b2))) + 86  Rg02/b2))*((1.0 + 0.5*(((-1.0) - 87  tanh((-C4 + Rgb/C5))))))); 88   89  const double t3 = ((C3*pow(Rgb,((-3.0)/miu)) + 90  C2*pow(Rgb,((-2.0)/miu)) + 91  C1*pow(Rgb,((-1.0)/miu)))); 92   93  const double t4 = ((1.0 + tanh(((-C4) + Rgb)/C5))); 94   95  const double t5 = (1.0/(b*p1*pow(q0,((-1.0) - p1 - p2)) - 96  b*p2*pow(q0,((-1.0) - p1 - p2)))); 97   98  const double t6 = (b*C*(((-((14.0*b3)/(15.0*q03*Rg2))) + 99  (14.0*b3*pow((double)M_E,(-(Rg02/b2))))/(15.0*q03*Rg2) + 100  (2.0*pow((double)M_E,(-(Rg02/b2)))*q0*((11.0/15.0 + 101  (7.0*b2)/(15.0*Rg02)))*Rg2)/b))); 102   103  const double t7 = (Rg*((C3*pow(((Rgb)),((-3.0)/miu)) + 104  C2*pow(((Rgb)),((-2.0)/miu)) + 105  C1*pow(((Rgb)),((-1.0)/miu))))*pow(sech_WR(((-C4) + 106  Rgb)/C5),2)); 107   108  const double t8 = (b4*Rg*(((-1.0) + pow((double)M_E,(-(Rg02/b2))) + 109  Rg02/b2))*pow(sech_WR(((-C4) + Rgb)/C5),2)); 110   111  const double t9 = (2.0*b4*(((2.0*q0*Rg2)/b - 112  (2.0*pow((double)M_E,(-(Rg02/b2)))*q0*Rg2)/b))*((1.0 + 0.5*(((-1.0) - 113  tanh(((-C4) + Rgb)/C5)))))); 114   115  const double t10 = (8.0*b4*b*(((-1.0) + pow((double)M_E,(-(Rg02/b2))) + 116  Rg02/b2))*((1.0 + 0.5*(((-1.0) - tanh(((-C4) + 117  Rgb)/C5)))))); 118   119  const double t11 = (((-((3.0*C3*Rg*pow(((Rgb)),((-1.0) - 120  3.0/miu)))/miu)) - (2.0*C2*Rg*pow(((Rgb)),((-1.0) - 121  2.0/miu)))/miu - (C1*Rg*pow(((Rgb)),((-1.0) - 122  1.0/miu)))/miu)); 123   124  const double t12 = ((1.0 + tanh(((-C4) + Rgb)/C5))); 125   126  const double t13 = (b*C*((4.0/15.0 - pow((double)M_E,(-(Rg02/b2)))*((11.0/15.0 + 127  (7.0*b2)/(15.0*q02* Rg2))) + 128  (7.0*b2)/(15.0*Rg02)))); 129   130  const double t14 = (2.0*b4*(((-1.0) + pow((double)M_E,(-(Rg02/b2))) + 131  Rg02/b2))*((1.0 + 0.5*(((-1.0) - tanh(((-C4) + 132  Rgb)/C5)))))); 133   134  
const double t15 = ((C3*pow(((Rgb)),((-3.0)/miu)) + 135  C2*pow(((Rgb)),((-2.0)/miu)) + 136  C1*pow(((Rgb)),((-1.0)/miu)))); 137   138   139  double yy = (pow(q0,p1)*(((-((b*M_PI)/(L*q0))) +t1/L +t2/(q04*Rg22) + 140  0.5*t3*t4)) + (t5*((pow(q0,(p1 - p2))* 141  (((-pow(q0,(-p1)))*(((b2*M_PI)/(L*q02) +t6/L +t7/(2.0*C5) - 142  t8/(C5*q04*Rg22) + t9/(q04*Rg22) -t10/(q05*Rg22) + 0.5*t11*t12)) - 143  b*p1*pow(q0,((-1.0) - p1))*(((-((b*M_PI)/(L*q0))) + t13/L + 144  t14/(q04*Rg22) + 0.5*t15*((1.0 + tanh(((-C4) + 145  Rgb)/C5))))))))))); 146   147  return (yy); 148  } 149   150  static double 151  a2long(double q, double L, double b, double p1, double p2, double q0) 152  { 153  double C; 154   155  if( L/b > 10.0) { 156  C = 3.06/pow((L/b),0.44); 157  } else { 158  C = 1.0; 159  } 160   161  const double C1 = 1.22; 162  const double C2 = 0.4288; 163  const double C3 = -1.651; 164  const double C4 = 1.523; 165  const double C5 = 0.1477; 166  const double miu = 0.585; 167   168  const double Rg2 = Rgsquare(q,L,b); 169  const double Rg22 = Rg2*Rg2; 170  const double b2 = b*b; 171  const double b3 = b*b*b; 172  const double b4 = b3*b; 173  const double q02 = q0*q0; 174  const double q03 = q0*q0*q0; 175  const double q04 = q03*q0; 176  const double q05 = q04*q0; 177  const double Rg = sqrt(Rg2); 178  const double Rgb = Rg*q0/b; 179  const double Rg02 = Rg2*q02; 180   181  const double t1 = (1.0/(b* p1*pow(q0,((-1.0) - p1 - p2)) - 182  b*p2*pow(q0,((-1.0) - p1 - p2)) )); 183   184  const double t2 = (b*C*(((-1.0*((14.0*b3)/(15.0*q03*Rg2))) + 185  (14.0*b3*pow((double)M_E,(-(Rg02/b2))))/(15.0*q03*Rg2) + 186  (2.0*pow((double)M_E,(-(Rg02/b2)))*q0*((11.0/15.0 + 187  (7*b2)/(15.0*Rg02)))*Rg2)/b)))/L; 188   189  const double t3 = (Rg*((C3*pow(((Rgb)),((-3.0)/miu)) + 190  C2*pow(((Rgb)),((-2.0)/miu)) + 191  C1*pow(((Rgb)),((-1.0)/miu))))* 192  pow(sech_WR(((-C4) +Rgb)/C5),2.0))/(2.0*C5); 193   194  const double t4 = (b4*Rg*(((-1.0) + pow((double)M_E,(-(Rg02/b2))) + 195  
Rg02/b2))*pow(sech_WR(((-C4) + 196  Rgb)/C5),2))/(C5*q04*Rg22); 197   198  const double t5 = (2.0*b4*(((2.0*q0*Rg2)/b - 199  (2.0*pow((double)M_E,(-(Rg02/b2)))*q0*Rg2)/b))* 200  ((1.0 + 0.5*(((-1.0) - tanh(((-C4) + 201  Rgb)/C5))))))/(q04*Rg22); 202   203  const double t6 = (8.0*b4*b*(((-1.0) + pow((double)M_E,(-(Rg02/b2))) + 204  Rg02/b2))*((1.0 + 0.5*(((-1) - tanh(((-C4) + 205  Rgb)/C5))))))/(q05*Rg22); 206   207  const double t7 = (((-((3.0*C3*Rg*pow(((Rgb)),((-1.0) - 208  3.0/miu)))/miu)) - (2.0*C2*Rg*pow(((Rgb)), 209  ((-1.0) - 2.0/miu)))/miu - (C1*Rg*pow(((Rgb)), 210  ((-1.0) - 1.0/miu)))/miu)); 211   212  const double t8 = ((1.0 + tanh(((-C4) + Rgb)/C5))); 213   214  const double t9 = (b*C*((4.0/15.0 - pow((double)M_E,(-(Rg02/b2)))*((11.0/15.0 + 215  (7.0*b2)/(15*Rg02))) + (7.0*b2)/(15.0*Rg02))))/L; 216   217  const double t10 = (2.0*b4*(((-1) + pow((double)M_E,(-(Rg02/b2))) + 218  Rg02/b2))*((1.0 + 0.5*(((-1) - tanh(((-C4) + 219  Rgb)/C5))))))/(q04*Rg22); 220   221  const double yy = ((-1.0*(t1* ((-pow(q0,-p1)*(((b2*M_PI)/(L*q02) + 222  t2 + t3 - t4 + t5 - t6 + 0.5*t7*t8)) - b*p1*pow(q0,((-1.0) - p1))* 223  (((-((b*M_PI)/(L*q0))) + t9 + t10 + 224  0.5*((C3*pow(((Rgb)),((-3.0)/miu)) + 225  C2*pow(((Rgb)),((-2.0)/miu)) + 226  C1*pow(((Rgb)),((-1.0)/miu))))* 227  ((1.0 + tanh(((-C4) + Rgb)/C5)))))))))); 228   229  return (yy); 230  } 231   232  // 233  static double 234  a_short(double q, double L, double b, double p1short, 235  double p2short, double factor, double pdiff, double q0) 236  { 237  const double Rg2_sh = Rgsquareshort(q,L,b); 238  const double Rg2_sh2 = Rg2_sh*Rg2_sh; 239  const double b3 = b*b*b; 240  const double t1 = ((q0*q0*Rg2_sh)/(b*b)); 241  const double Et1 = exp(t1); 242  const double Emt1 = 1.0/Et1; 243  const double q02 = q0*q0; 244  const double q0p = pow(q0,(-4.0 + p1short) ); 245   246  double yy = ((factor/(L*(pdiff)*Rg2_sh2)* 247  ((b*Emt1*q0p*((8.0*b3*L - 8.0*b3*Et1*L - 2.0*b3*L*p2short + 248  2.0*b3*Et1*L*p2short + 
4.0*b*L*q02*Rg2_sh + 4.0*b*Et1*L*q02*Rg2_sh - 249  2.0*b*Et1*L*p2short*q02*Rg2_sh - Et1*M_PI*q02*q0*Rg2_sh2 + 250  Et1*p2short*M_PI*q02*q0*Rg2_sh2)))))); 251   252  return(yy); 253  } 254  static double 255  a1short(double q, double L, double b, double p1short, double p2short, double q0) 256  { 257   258  double factor = 1.0; 259  return a_short(q, L, b, p1short, p2short, factor, p1short - p2short, q0); 260  } 261   262  static double 263  a2short(double q, double L, double b, double p1short, double p2short, double q0) 264  { 265  double factor = -1.0; 266  return a_short(q, L, b, p2short, p1short, factor, p1short-p2short, q0); 267  } 268   269  //WR named this w (too generic) 270  static double 271  w_WR(double x) 272  { 273  return 0.5*(1 + tanh((x - 1.523)/0.1477)); 274  } 275   276  // 277  static double 278  u1(double q, double L, double b) 279  { 280  return Rgsquareshort(q,L,b)*q*q; 281  } 282   283  static double 284  u_WR(double q, double L, double b) 285  { 286  return Rgsquare(q,L,b)*q*q; 287  } 288   289  static double 290  Sdebye_kernel(double arg) 291  { 292  // ORIGINAL 293  double result = 2.0*(exp(-arg) + arg -1.0)/(arg*arg); 294   295  // CONVERSION 1 from http://herbie.uwplse.org/ 296  // 297  // exhibits discontinuity - needs more investigation 298  //double a1 = 1.0/6.0; 299  //double a2 = 1.0/72.0; 300  //double a3 = 1.0/24.0; 301  //double result = pow((1.0 - a1*arg - (a2+a3)*arg*arg), 2); 302   303  return result; 304  } 305  static double 306  Sdebye(double q, double L, double b) 307  { 308  double arg = u_WR(q,L,b); 309  return Sdebye_kernel(arg); 310  } 311   312  // 313  static double 314  Sdebye1(double q, double L, double b) 315  { 316  double arg = u1(q,L,b); 317  return Sdebye_kernel(arg); 318   319  } 320   321  //  110 const double q0p = pow(q0, -4.0 + p1short);  111   112 double yy = 1.0/(L*r2*r2) * b/exp_qr_b*q0p  113 * (8.0*b3*L  114 - 8.0*b3*exp_qr_b*L  115 + 2.0*b3*exp_qr_b*L*p2short  116 - 2.0*b*exp_qr_b*L*p2short*qr2  117 + 
4.0*b*exp_qr_b*L*qr2  118 - 2.0*b3*L*p2short  119 + 4.0*b*L*qr2  120 - M_PI*exp_qr_b*qr2*q0*r2  121 + M_PI*exp_qr_b*p2short*qr2*q0*r2);  122   123 return yy;  124 }  125 static double  126 a_short(double qp, double L, double b  127 /*double p1short, double p2short*/, double q0)  128 {  129 const double p1short = 5.36;  130 const double p2short = 5.62;  131   132 const double r2 = Rgsquareshort(L,b);  133 const double exp_qr_b = exp(r2*square(q0/b));  134 const double pdiff = p1short - p2short;  135 const double a1 = _short(r2,exp_qr_b,L,b,p1short,p2short,q0)/pdiff;  136 const double a2= -_short(r2,exp_qr_b,L,b,p2short,p1short,q0)/pdiff;  137 const double ans = a1*pow(qp*b, -p1short) + a2*pow(qp*b, -p2short) + M_PI/(qp*L);  138 return ans;  139 }  140   141  322 142 static double 323 143 Sexv(double q, double L, double b) 324 144 { 325    145 // Pedersen eq 13, corrected by Chen eq A.5, swapping w and 1-w 326 146 const double C1=1.22; 327 147 const double C2=0.4288; 328 148 const double C3=-1.651; 329 149 const double miu = 0.585; 330  const double qRg = q*sqrt(Rgsquare(q,L,b)); 331  const double x = pow(qRg, -1.0/miu); 332   333  double yy = (1.0 - w_WR(qRg))*Sdebye(q,L,b) + 334  w_WR(qRg)*x*(C1 + x*(C2 + x*C3)); 335  return (yy); 336  } 337   338   339  static double 340  Sexvnew(double q, double L, double b) 341  { 342  double yy; 343   344  const double C1 =1.22; 345  const double C2 =0.4288; 346  const double C3 =-1.651; 347  const double miu = 0.585;  150 const double qr = q*sqrt(Rgsquare(L,b));  151 const double qr_miu = pow(qr, -1.0/miu);  152 const double w = w_WR(qr);  153 const double t10 = Sdebye(qr*qr)*(1.0 - w);  154 const double t11 = ((C3*qr_miu + C2)*qr_miu + C1)*qr_miu;  155   156 return t10 + w*t11;  157 }  158   159 // Modified by Yun on Oct. 15,  160 static double  161 Sexv_new(double q, double L, double b)  162 {  163 const double qr = q*sqrt(Rgsquare(L,b));  164 const double qr2 = qr*qr;  165 const double C = (L/b > 10.0) ? 
3.06*pow(L/b, -0.44) : 1.0;  166 const double t9 = C*b/L * (4.0 - exp(-qr2) * (11.0 + 7.0/qr2) + 7.0/qr2)/15.0;  167   168 const double Sexv_orig = Sexv(q, L, b);  169   170 // calculating the derivative to decide on the correction (cutoff) term?  171 // Note: this is modified from WRs original code 348 172 const double del=1.05; 349  const double qRg = q*sqrt(Rgsquare(q,L,b)); 350  const double x = pow(qRg, -1.0/miu); 351   352   353  //calculating the derivative to decide on the corection (cutoff) term? 354  // I have modified this from WRs original code 355  const double qdel = (Sexv(q*del,L,b)-Sexv(q,L,b))/(q*del - q); 356  const double C_star2 =(qdel >= 0.0) ? 0.0 : 1.0; 357   358  yy = (1.0 - w_WR(qRg))*Sdebye(q,L,b) + 359  C_star2*w_WR(qRg)* 360  x*(C1 + x*(C2 + x*C3)); 361   362  return (yy); 363  } 364   365  double Sk_WR(double q, double L, double b) 366  { 367  // 368  const double p1 = 4.12; 369  const double p2 = 4.42; 370  const double p1short = 5.36; 371  const double p2short = 5.62; 372  const double q0 = 3.1; 373   374  double q0short = fmax(1.9/sqrt(Rgsquareshort(q,L,b)),3.0); 375  double Sexvmodify, ans; 376   377  const double C =(L/b > 10.0) ? 3.06/pow((L/b),0.44) : 1.0; 378   379  if( L > 4*b ) { // Longer Chains 380  if (q*b <= 3.1) { //Modified by Yun on Oct. 
15, 381  Sexvmodify = Sexvnew(q, L, b); 382  ans = Sexvmodify + C * (4.0/15.0 + 7.0/(15.0*u_WR(q,L,b)) - 383  (11.0/15.0 + 7.0/(15.0*u_WR(q,L,b)))*exp(-u_WR(q,L,b)))*(b/L); 384   385  } else { //q(i)*b > 3.1 386  ans = a1long(q, L, b, p1, p2, q0)/(pow((q*b),p1)) + 387  a2long(q, L, b, p1, p2, q0)/(pow((q*b),p2)) + M_PI/(q*L); 388  } 389  } else { //L <= 4*b Shorter Chains 390  if (q*b <= fmax(1.9/sqrt(Rgsquareshort(q,L,b)),3.0) ) { 391  if (q*b<=0.01) { 392  ans = 1.0 - Rgsquareshort(q,L,b)*(q*q)/3.0; 393  } else { 394  ans = Sdebye1(q,L,b); 395  } 396  } else { //q*b > max(1.9/sqrt(Rgsquareshort(q(i),L,b)),3) 397  ans = a1short(q,L,b,p1short,p2short,q0short)/(pow((q*b),p1short)) + 398  a2short(q,L,b,p1short,p2short,q0short)/(pow((q*b),p2short)) + 399  M_PI/(q*L); 400  }  173 const double qdel = (Sexv(q*del,L,b) - Sexv_orig)/(q*(del - 1.0));  174   175 if (qdel < 0) {  176 //printf("branch A1-%d q=%g L=%g b=%g\n", C==1.0, q, L, b);  177 return t9 + Sexv_orig;  178 } else {  179 //printf("branch A2-%d q=%g L=%g b=%g\n", C==1.0, q, L, b);  180 const double w = w_WR(qr);  181 const double t10 = Sdebye(qr*qr)*(1.0 - w);  182 return t9 + t10; 401 183 } 402   403  return(ans); 404  }  184 }  185   186   187 static double  188 Sk_WR(double q, double L, double b)  189 {  190 const double Rg_short = sqrt(Rgsquareshort(L, b));  191 double q0short = fmax(1.9/Rg_short, 3.0);  192 double ans;  193   194   195 if( L > 4*b ) { // L > 4*b : Longer Chains  196 if (q*b <= 3.1) {  197 ans = Sexv_new(q, L, b);  198 } else { //q(i)*b > 3.1  199 ans = a_long(q, L, b /*, p1, p2, q0*/);  200 }  201 } else { // L <= 4*b : Shorter Chains  202 if (q*b <= q0short) { // q*b <= fmax(1.9/Rg_short, 3)  203 //printf("branch C-%d q=%g L=%g b=%g\n", square(q*Rg_short)<DEBYE_CUTOFF, q, L, b);  204 // Note that q0short is usually 3, but it will be greater than 3  205 // small enough b, depending on the L/b ratio:  206 // L/b == 1 => b < 2.37  207 // L/b == 2 => b < 1.36  208 // L/b == 3 => b < 1.00  
209 // L/b == 4 => b < 0.816  210 // 2017-10-01 pkienzle: moved low q approximation into Sdebye()  211 ans = Sdebye(square(q*Rg_short));  212 } else { // q*b > max(1.9/Rg_short, 3)  213 //printf("branch D q=%g L=%g b=%g\n", q, L, b);  214 ans = a_short(q, L, b /*, p1short, p2short*/, q0short);  215 }  216 }  217   218 return ans;  219 } -
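For reference, the Debye/excluded-volume crossover used by the reworked kernel above can be sketched in Python. This is a paraphrase for illustration, not the shipped C code: the `DEBYE_CUTOFF` value is assumed (the kernel defines its own), and `Sexv` takes `qr = q*sqrt(Rgsquare(L, b))` directly so the omitted `Rgsquare()` helper is not needed here.

```python
import math

# Assumed cutoff for illustration; the C kernel defines its own DEBYE_CUTOFF.
DEBYE_CUTOFF = 0.1

def w_WR(x):
    # crossover weight between the Debye and power-law regimes
    return 0.5*(1.0 + math.tanh((x - 1.523)/0.1477))

def Sdebye(arg):
    # Debye function 2(exp(-x) + x - 1)/x^2 with arg = (q Rg)^2.
    # For small arg the direct form loses precision to cancellation in
    # exp(-arg) + arg - 1, so switch to the Taylor series -- the
    # "low q approximation moved into Sdebye()" noted in the comments above.
    if arg < DEBYE_CUTOFF:
        return 1.0 + arg*(-1.0/3.0 + arg*(1.0/12.0 + arg*(-1.0/60.0)))
    return 2.0*(math.exp(-arg) + arg - 1.0)/(arg*arg)

def Sexv(qr):
    # Pedersen eq 13 crossover (corrected w/1-w swap), taking qr = q*Rg
    C1, C2, C3, miu = 1.22, 0.4288, -1.651, 0.585
    x = qr**(-1.0/miu)
    w = w_WR(qr)
    return (1.0 - w)*Sdebye(qr*qr) + w*x*(C1 + x*(C2 + x*C3))
```

The series branch matters: near `arg = 0` the direct expression divides two quantities of order `arg**2`, and the rounding error of `exp(-arg)` (order 1e-16) swamps the numerator.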
sasmodels/models/line.py
r3330bb4 rc63a7c8  15 15  16 16 .. math::  17  17 18 I(q) = \text{scale} (I(qx) \cdot I(qy)) + \text{background} 18 19  … …  68 69 Iqxy.vectorized = True # Iqxy accepts an array of qx qy values 69 70   71 def random():  72 import numpy as np  73 scale = 10**np.random.uniform(0, 3)  74 slope = np.random.uniform(-1, 1)*1e2  75 offset = 1e-5 + (0 if slope > 0 else -slope)  76 intercept = 10**np.random.uniform(0, 1) + offset  77 pars = dict(  78 #background=0,  79 scale=scale,  80 slope=slope,  81 intercept=intercept,  82 )  83 return pars 70 84  71 85 tests = [ -
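The `random()` generator added here is worth a note: the `offset` term exists to keep the line positive when the slope is negative. A minimal sketch, assuming the 1D form `Iq(q) = intercept + slope*q` from line.py and a fitted q range of at most 1 (the offset cancels `|slope|*q` only up to q = 1):

```python
import numpy as np

def random(rng=np.random):
    # mirror of the random() generator added to line.py: log-uniform
    # scale, slope in [-100, 100], and an offset chosen so that the
    # sampled line stays positive for q <= 1
    scale = 10**rng.uniform(0, 3)
    slope = rng.uniform(-1, 1)*1e2
    offset = 1e-5 + (0 if slope > 0 else -slope)
    intercept = 10**rng.uniform(0, 1) + offset
    return dict(scale=scale, slope=slope, intercept=intercept)

def Iq(q, slope, intercept):
    # assumed 1D form of line.py
    return intercept + slope*q
```

For `slope < 0` the offset raises the intercept by `|slope|`, so `intercept + slope*q >= 1e-5` everywhere on `0 <= q <= 1`.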
sasmodels/models/linear_pearls.py
rc3ccaec r8f04da4  65 65 source = ["lib/sas_3j1x_x.c", "linear_pearls.c"] 66 66  67  demo = dict(scale=1.0, background=0.0, 68  radius=80.0, 69  edge_sep=350.0, 70  num_pearls=3, 71  sld=1.0, 72  sld_solvent=6.3)  67 def random():  68 import numpy as np  69 radius = 10**np.random.uniform(1, 3) # 1 - 1000  70 edge_sep = 10**np.random.uniform(0, 3) # 1 - 1000  71 num_pearls = np.round(10**np.random.uniform(0.3, 3)) # 2 - 1000  72 pars = dict(  73 radius=radius,  74 edge_sep=edge_sep,  75 num_pearls=num_pearls,  76 )  77 return pars 73 78  74 79 """ -
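The new `random()` here shows the pattern used for integer parameters: a log-uniform draw rounded to the nearest integer, so pearl counts cover each decade roughly evenly (10**0.3 rounds to 2, 10**3 is 1000). A standalone sketch of the same sampler:

```python
import numpy as np

def random(rng=np.random):
    # mirror of the random() added to linear_pearls.py; note the
    # rounded log-uniform draw for the integer pearl count
    radius = 10**rng.uniform(1, 3)        # 10 - 1000
    edge_sep = 10**rng.uniform(0, 3)      # 1 - 1000
    num_pearls = int(np.round(10**rng.uniform(0.3, 3)))  # 2 - 1000
    return dict(radius=radius, edge_sep=edge_sep, num_pearls=num_pearls)
```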
sasmodels/models/lorentz.py
r2c74c11 r404ebbd  30 30 description = """ 31 31 Model that evaluates a Lorentz (Ornstein-Zernicke) model. 32    32  33 33 I(q) = scale/( 1 + (q*L)^2 ) + bkd 34   35  The model has three parameters: 36  length =screening LengthÂ37  scale =scale factorÂ38  background =incoherent background 34   35 The model has three parameters:  36 length = screening Length  37 scale = scale factor  38 background = incoherent background 39 39 """ 40 40 category = "shape-independent" … …  48 48 """ 49 49   50 def random():  51 import numpy as np  52 pars = dict(  53 #background=0,  54 scale=10**np.random.uniform(1, 4),  55 cor_length=10**np.random.uniform(0, 3),  56 )  57 return pars  58  50 59 # parameters for demo 51 60 demo = dict(scale=1.0, background=0.0, cor_length=50.0) -
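The cleaned-up docstring states the model directly: I(q) = scale/(1 + (q*L)^2) + background, with L the screening (correlation) length. A minimal sketch of that form:

```python
def lorentz_iq(q, cor_length, scale=1.0, background=0.0):
    # Lorentz (Ornstein-Zernike) form from the lorentz.py docstring:
    #   I(q) = scale/(1 + (q*L)^2) + background
    return scale/(1.0 + (q*cor_length)**2) + background
```

A quick sanity check on the screening length: at q = 1/cor_length the signal falls to half its forward value.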
sasmodels/models/mass_fractal.py
r925ad6e r4553dae  28 28 .. math:: 29 29  30  scale = scale\_factor \times NV^2(\rho_ {particle} - \rho_{solvent})^2 30 scale = scale\_factor \times NV^2(\rho_\text{particle} - \rho_\text{solvent})^2 31 31  32 32 .. math:: … …  35 35  36 36 where $R$ is the radius of the building block, $D_m$ is the **mass** fractal 37  dimension, | \zeta\| is the cut-off length, $\rho_{solvent}$ is the scatteringÂ38  length density of the solvent, Â39  and $\rho_{particle}$ is the scatteringlength density of particles. 37 dimension, $\zeta$ is the cut-off length, $\rho_\text{solvent}$ is the scattering  38 length density of the solvent, and $\rho_\text{particle}$ is the scattering  39 length density of particles. 40 40  41 41 .. note:: 42 42  43 43 The mass fractal dimension ( $D_m$ ) is only 44  valid if $ 0< mass\_dim < 6$. It is also only valid over a limited 44 valid if $1 < mass\_dim < 6$. It is also only valid over a limited 45 45 $q$ range (see the reference for details). 46 46  … …  88 88 source = ["lib/sas_3j1x_x.c", "lib/sas_gamma.c", "mass_fractal.c"] 89 89   90 def random():  91 import numpy as np  92 radius = 10**np.random.uniform(0.7, 4)  93 cutoff_length = 10**np.random.uniform(0.7, 2)*radius  94 # TODO: fractal dimension should range from 1 to 6  95 fractal_dim_mass = 2*np.random.beta(3, 4) + 1  96 Vf = 10**np.random.uniform(-4, -1)  97 pars = dict(  98 #background=0,  99 scale=1, #1e5*Vf/radius**(fractal_dim_mass),  100 radius=radius,  101 cutoff_length=cutoff_length,  102 fractal_dim_mass=fractal_dim_mass,  103 )  104 return pars  105  90 106 demo = dict(scale=1, background=0, 91 107 radius=10.0, -
sasmodels/models/mass_surface_fractal.py
r6d96b66 rca04add  22 22 .. math:: 23 23  24  I(q) = scale \times P(q) + background 25    24 I(q) = scale \times P(q) + background \\ 26 25 P(q) = \left\{ \left[ 1+(q^2a)\right]^{D_m/2} \times 27 26 \left[ 1+(q^2b)\right]^{(6-D_s-D_m)/2} 28  \right\}^{-1} 29   30  a = R_{g}^2/(3D_m/2) 31   32  b = r_{g}^2/[-3(D_s+D_m-6)/2] 33    27 \right\}^{-1} \\  28 a = R_{g}^2/(3D_m/2) \\  29 b = r_{g}^2/[-3(D_s+D_m-6)/2] \\ 34 30 scale = scale\_factor \times NV^2 (\rho_{particle} - \rho_{solvent})^2 35 31  … …  89 85 source = ["mass_surface_fractal.c"] 90 86   87 def random():  88 import numpy as np  89 fractal_dim = np.random.uniform(0, 6)  90 surface_portion = np.random.uniform(0, 1)  91 fractal_dim_surf = fractal_dim*surface_portion  92 fractal_dim_mass = fractal_dim - fractal_dim_surf  93 rg_cluster = 10**np.random.uniform(1, 5)  94 rg_primary = rg_cluster*10**np.random.uniform(-4, -1)  95 scale = 10**np.random.uniform(2, 5)  96 pars = dict(  97 #background=0,  98 scale=scale,  99 fractal_dim_mass=fractal_dim_mass,  100 fractal_dim_surf=fractal_dim_surf,  101 rg_cluster=rg_cluster,  102 rg_primary=rg_primary,  103 )  104 return pars  105   106  91 107 demo = dict(scale=1, background=0, 92 108 fractal_dim_mass=1.8, -
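The P(q) from the repaired docstring can be sketched directly; the demo values in the test below are illustrative (only fractal_dim_mass=1.8 is visible in this hunk, the rest are assumed). Note that the new `random()` splits a single draw from (0, 6) between the two dimensions, which keeps Ds + Dm below 6 and so keeps both exponents and the `b` term positive:

```python
def mass_surface_fractal_pq(q, fractal_dim_mass, fractal_dim_surf,
                            rg_cluster, rg_primary):
    # P(q) as written in the docstring above:
    #   P(q) = { [1 + q^2 a]^(Dm/2) * [1 + q^2 b]^((6-Ds-Dm)/2) }^(-1)
    #   a = Rg^2/(3*Dm/2),  b = rg^2/(-3*(Ds + Dm - 6)/2)
    Dm, Ds = fractal_dim_mass, fractal_dim_surf
    a = rg_cluster**2/(3.0*Dm/2.0)
    b = rg_primary**2/(-3.0*(Ds + Dm - 6.0)/2.0)
    return 1.0/((1.0 + q*q*a)**(Dm/2.0)
                * (1.0 + q*q*b)**((6.0 - Ds - Dm)/2.0))
```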
sasmodels/models/mono_gauss_coil.py
r3330bb4 rca04add  24 24  25 25 I_0 &= \phi_\text{poly} \cdot V 26  \cdot (\rho_\text{poly} - \rho_\text{solv})^2 27   28  P(q) &= 2 [\exp(-Z) + Z - 1] / Z^2 29   30  Z &= (q R_g)^2 31    26 \cdot (\rho_\text{poly} - \rho_\text{solv})^2 \\  27 P(q) &= 2 [\exp(-Z) + Z - 1] / Z^2 \\  28 Z &= (q R_g)^2 \\ 32 29 V &= M / (N_A \delta) 33 30  … …  62 59  63 60 description = """ 64  Evaluates the scattering from  61 Evaluates the scattering from 65 62 monodisperse polymer chains. 66 63 """ … …  86 83 Iq.vectorized = True # Iq accepts an array of q values 87 84   85 def random():  86 import numpy as np  87 rg = 10**np.random.uniform(0, 4)  88 #rg = 1e3  89 pars = dict(  90 #scale=1, background=0,  91 i_zero=1e7, # i_zero is a simple scale  92 rg=rg,  93 )  94 return pars  95  88 96 demo = dict(scale=1.0, i_zero=70.0, rg=75.0, background=0.0) 89 97  -
sasmodels/models/multilayer_vesicle.py
r870a2f4 r64eecf7  33 33 .. math:: 34 34  35  r_i &= r_c + (i-1)(t_s + t_w) &&\text{ solvent radius before shell } i \\Â36  R_i &= r_i + t_s &&\text{ shell radius for shell } i 35 r_i &= r_c + (i-1)(t_s + t_w) \text{ solvent radius before shell } i \\  36 R_i &= r_i + t_s \text{ shell radius for shell } i 37 37  38 38 $\phi$ is the volume fraction of particles, $V(r)$ is the volume of a sphere … …  150 150 return radius + n_shells * (thick_shell + thick_solvent) - thick_solvent 151 151  152  demo = dict(scale=1, background=0, 153  volfraction=0.05, 154  radius=60.0, 155  thick_shell=10.0, 156  thick_solvent=10.0, 157  sld_solvent=6.4, 158  sld=0.4, 159  n_shells=2.0)  152 def random():  153 import numpy as np  154 volfraction = 10**np.random.uniform(-3, -0.5) # scale from 0.1% to 30%  155 radius = 10**np.random.uniform(0, 2.5) # core less than 300 A  156 total_thick = 10**np.random.uniform(2, 4) # up to 10000 A of shells  157 # random number of shells, with shell+solvent thickness > 10 A  158 n_shells = int(10**np.random.uniform(0, np.log10(total_thick)-1)+0.5)  159 # split total shell thickness with preference for shell over solvent;  160 # make sure that shell thickness is at least 1 A  161 one_thick = total_thick/n_shells  162 thick_solvent = 10**np.random.uniform(-2, 0)*(one_thick - 1)  163 thick_shell = one_thick - thick_solvent  164 pars = dict(  165 scale=1,  166 volfraction=volfraction,  167 radius=radius,Â