# Changeset afe206d in sasmodels

Ignore:
Timestamp:
Sep 25, 2018 9:41:04 AM (8 months ago)
Branches:
master, core_shell_microgels, magnetic_model, ticket-1257-vesicle-product, ticket_1156, ticket_1265_superball, ticket_822_more_unit_tests
Children:
ce1eed5
Parents:
12eec1e (diff), 2015f02 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' into ticket-1142-plugin-reload

Files:
26 edited

• ## doc/guide/pd/polydispersity.rst

 rd712a0f .. _polydispersityhelp:

Polydispersity & Orientational Distributions
--------------------------------------------

For some models we can calculate the average intensity for a population of particles that possess size and/or orientational (ie, angular) distributions. In SasView we call the former *polydispersity* but use the parameter *PD* to parameterise both. In other words, the meaning of *PD* in a model depends on the actual parameter it is being applied to. The resultant intensity is then normalized by the average particle volume such that

.. math::

where $F$ is the scattering amplitude and $\langle\cdot\rangle$ denotes an average over the distribution $f(x; \bar x, \sigma)$, giving

.. math::

Each distribution is characterized by a center value $\bar x$ or $x_\text{med}$, a width parameter $\sigma$ (note this is *not necessarily* the standard deviation, so read the description of the distribution carefully), the number of sigmas $N_\sigma$ to include from the tails of the distribution, and the number of points used to compute the average. The center of the distribution is set by the value of the model parameter.
The distribution width applied to *volume* (ie, shape-describing) parameters is relative to the center value such that $\sigma = \mathrm{PD} \cdot \bar x$. However, the distribution width applied to *orientation* (ie, angle-describing) parameters is just $\sigma = \mathrm{PD}$. $N_\sigma$ determines how far into the tails to evaluate the distribution.

Users should note that the averaging computation is very intensive. Applying polydispersion and/or orientational distributions to multiple parameters at the same time, or increasing the number of points in the distribution, will require patience! However, the calculations are generally more robust with more data points or more angles.

The following distribution functions are provided:

Additional distributions are under consideration.

**Beware: when the Polydispersity & Orientational Distribution panel in SasView is first opened, the default distribution for all parameters is the Gaussian Distribution. This may not be suitable. See Suggested Applications below.**

.. note:: In 2009 IUPAC decided to introduce the new term 'dispersity' to replace the term 'polydispersity' (see `Pure Appl. Chem., (2009), 81(2), 351-353`_) in order to make the terminology describing distributions of chemical properties unambiguous. Throughout the SasView documentation we continue to use the term polydispersity because one of the consequences of the IUPAC change is that orientational polydispersity would not meet their new criteria (which requires dispersity to be dimensionless).
However, these terms are unrelated to the proportional size distributions and orientational distributions used in SasView models.

Suggested Applications
^^^^^^^^^^^^^^^^^^^^^^

If applying polydispersion to parameters describing particle sizes, consider using the Lognormal or Schulz distributions. If applying polydispersion to parameters describing interfacial thicknesses or angular orientations, consider using the Gaussian or Boltzmann distributions. If applying polydispersion to parameters describing angles, use the Uniform distribution.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Several measures of polydispersity abound in Dynamic Light Scattering (DLS) and it should not be assumed that any of the following can be simply equated with the polydispersity *PD* parameter used in SasView.

The dimensionless **Polydispersity Index (PI)** is a measure of the width of the distribution of autocorrelation function decay rates (*not* the distribution of particle sizes itself, though the two are inversely related) and is defined by ISO 22412:2017 as

.. math:: PI = \mu_2 / \bar \Gamma^2

where $\mu_2$ is the second cumulant, and $\bar \Gamma^2$ is the intensity-weighted average value, of the distribution of decay rates.

*If the distribution of decay rates is Gaussian* then

.. math:: PI = \sigma^2 / 2\bar \Gamma^2

where $\sigma$ is the standard deviation, allowing a **Relative Polydispersity (RP)** to be defined as

.. math:: RP = \sigma / \bar \Gamma = \sqrt{2 \cdot PI}

PI values smaller than 0.05 indicate a highly monodisperse system. Values greater than 0.7 indicate significant polydispersity.

The **size polydispersity P-parameter** is defined as the relative standard deviation (coefficient of variation)

.. math:: P = \sqrt{\nu} / \bar R

where $\nu$ is the variance of the distribution and $\bar R$ is the mean value of $R$. Here, the product $P \bar R$ is *equal* to the standard deviation of the Lognormal distribution. P values smaller than 0.13 indicate a monodisperse system.

For more information see:

ISO 22412:2017, International Standards Organisation (2017).

Polydispersity: What does it mean for DLS and Chromatography.

Dynamic Light Scattering: Common Terms Defined, Whitepaper WP111214, Malvern Instruments (2011).

S King, C Washington & R Heenan, *Phys Chem Chem Phys*, (2005), 7, 143.

T Allen, in *Particle Size Measurement*, 4th Edition, Chapman & Hall, London (1990).

| 2018-03-20 Steve King
| 2018-04-04 Steve King
| 2018-08-09 Steve King
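The relations between PI, RP, and the P-parameter above are simple enough to check numerically. A minimal sketch (helper names are hypothetical, not part of SasView):

```python
import math

def relative_polydispersity(pi_value):
    """RP = sqrt(2*PI), valid when the decay-rate distribution is Gaussian."""
    return math.sqrt(2.0 * pi_value)

def p_parameter(variance, mean_radius):
    """P = sqrt(nu)/Rbar, the coefficient of variation of the size distribution."""
    return math.sqrt(variance) / mean_radius

# A PI of 0.045 (just under the 0.05 'highly monodisperse' threshold)
rp = relative_polydispersity(0.045)   # -> 0.3
p = p_parameter(25.0, 100.0)          # -> 0.05, comfortably below 0.13
```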
• ## doc/guide/plugin.rst

 rf796469 calculations, but instead rely on numerical integration to compute the appropriately smeared pattern. Each .py file also contains a function:: def random(): ... This function provides a model-specific random parameter set which shows model features in the USANS to SANS range.  For example, core-shell sphere sets the outer radius of the sphere logarithmically in [20, 20,000], which sets the Q value for the transition from flat to falling.  It then uses a beta distribution to set the percentage of the shape which is shell, giving a preference for very thin or very thick shells (but never 0% or 100%).  Using -sets=10 in sascomp should show a reasonable variety of curves over the default sascomp q range. The parameter set is returned as a dictionary of {parameter: value, ...}. Any model parameters not included in the dictionary will default according to the code in the _randomize_one() function from sasmodels/compare.py. Python Models
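A sketch of what such a `random()` function might look like for a core-shell sphere, following the description above (the exact distributions used in sasmodels may differ; the parameter names here are illustrative):

```python
import numpy as np

def random():
    # Outer radius log-uniform over [20, 20000] A, which sets the Q value
    # for the transition from flat to falling.
    outer_radius = 10 ** np.random.uniform(np.log10(20), np.log10(20000))
    # A beta-distributed shell fraction favours very thin or very thick
    # shells (but essentially never exactly 0% or 100%).
    shell_fraction = np.random.beta(0.5, 0.5)
    thickness = outer_radius * shell_fraction
    return {"radius": outer_radius - thickness, "thickness": thickness}
```

Any parameters omitted from the returned dictionary fall back to the defaults applied by the `_randomize_one()` function in sasmodels/compare.py.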
• ## doc/guide/scripting.rst

 r4aa5dce The key functions are :func:`sasmodels.core.load_model` for loading the model definition and compiling the kernel and :func:`sasmodels.data.load_data` for calling sasview to load the data. Preparing data ============== Usually you will load data via the sasview loader, with the :func:`sasmodels.data.load_data` function.  For example:: from sasmodels.data import load_data data = load_data("sasmodels/example/093191_201.dat") You may want to apply a data mask, such as a beam stop, and trim high $q$:: from sasmodels.data import set_beam_stop set_beam_stop(data, qmin, qmax) The :func:`sasmodels.data.set_beam_stop` method simply sets the *mask* attribute for the data. The data defines the resolution function and the q values to evaluate, so even if you are simulating experiments prior to making measurements, you still need a data object for reference. Use :func:`sasmodels.data.empty_data1D` or :func:`sasmodels.data.empty_data2D` to create a container with a given $q$ and $\Delta q/q$.  For example:: import numpy as np from sasmodels.data import empty_data1D # 120 points logarithmically spaced from 0.005 to 0.2, with dq/q = 5% q = np.logspace(np.log10(5e-3), np.log10(2e-1), 120) data = empty_data1D(q, resolution=0.05) To use a more realistic model of resolution, or to load data from a file format not understood by SasView, you can use :class:`sasmodels.data.Data1D` or :class:`sasmodels.data.Data2D` directly.  The 1D data uses *x*, *y*, *dx* and *dy* for $x = q$ and $y = I(q)$, and 2D data uses *x*, *y*, *z*, *dx*, *dy*, *dz* for $x, y = qx, qy$ and $z = I(qx, qy)$.
[Note: internally, the Data2D object uses SasView conventions, *qx_data*, *qy_data*, *data*, *dqx_data*, *dqy_data*, and *err_data*.] For USANS data, use 1D data, but set *dxl* and *dxw* attributes to indicate slit resolution:: data.dxl = 0.117 See :func:`sasmodels.resolution.slit_resolution` for details. SESANS data is more complicated; if your SESANS format is not supported by SasView you need to define a number of attributes beyond *x*, *y*.  For example:: SElength = np.linspace(0, 2400, 61) # [A] data = np.ones_like(SElength) err_data = np.ones_like(SElength)*0.03 class Source: wavelength = 6 # [A] wavelength_unit = "A" class Sample: zacceptance = 0.1 # [A^-1] thickness = 0.2 # [cm] class SESANSData1D: #q_zmax = 0.23 # [A^-1] lam = 0.2 # [nm] x = SElength y = data dy = err_data sample = Sample() data = SESANSData1D() The *data* module defines various data plotters as well. Using sasmodels directly ======================== Once you have a computational kernel and a data object, you can evaluate the model for various parameters using :class:`sasmodels.direct_model.DirectModel`.  The resulting object *f* will be callable as *f(par=value, ...)*, returning the $I(q)$ for the $q$ values in the data.  For example:: import numpy as np from sasmodels.data import empty_data1D from sasmodels.core import load_model from sasmodels.direct_model import DirectModel # 120 points logarithmically spaced from 0.005 to 0.2, with dq/q = 5% q = np.logspace(np.log10(5e-3), np.log10(2e-1), 120) data = empty_data1D(q, resolution=0.05) kernel = load_model("ellipsoid") f = DirectModel(data, kernel) Iq = f(radius_polar=100) Polydispersity information is set with special parameter names: * *par_pd* for polydispersity width, $\Delta p/p$, * *par_pd_n* for the number of points in the distribution, * *par_pd_type* for the distribution type (as a string), and * *par_pd_nsigmas* for the limits of the distribution.
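The *par_pd* naming convention can be wrapped in a small helper; a sketch (the helper itself is hypothetical, only the suffixes come from the documentation above):

```python
def pd_parameters(name, width, npts=35, nsigmas=3.0, disttype="gaussian"):
    """Build the polydispersity keyword set for a base parameter *name*."""
    return {
        name + "_pd": width,            # distribution width, Delta p / p
        name + "_pd_n": npts,           # number of points in the distribution
        name + "_pd_type": disttype,    # distribution type, as a string
        name + "_pd_nsigmas": nsigmas,  # limits of the distribution
    }

# f(radius_polar=100, **pd_parameters("radius_polar", 0.1)) would then add
# a 10% gaussian distribution to a DirectModel call.
```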
Using sasmodels through the bumps optimizer =========================================== Like DirectModel, you can wrap data and a kernel in a *bumps* model with :class:`sasmodels.bumps_model.Model` and create an :class:`sasmodels.bumps_model.Experiment` that you can fit with the *bumps* interface. Here is an example from the *example* directory such as *example/model.py*:: SasViewCom bumps.cli example/model.py --preview Calling the computation kernel ============================== Getting a simple function that you can call on a set of q values and return
• ## doc/rst_prolog

 r30b60d2 .. |Ang^-3| replace:: |Ang|\ :sup:`-3` .. |Ang^-4| replace:: |Ang|\ :sup:`-4` .. |nm^-1| replace:: nm\ :sup:`-1` .. |cm^-1| replace:: cm\ :sup:`-1` .. |cm^2| replace:: cm\ :sup:`2`
• ## sasmodels/compare.py

 raf7a97c
 if opts['datafile'] is not None:
     data0 = load_data(os.path.expanduser(opts['datafile']))
     data = data0, data0
 else:
     # Hack around the fact that make_data doesn't take a pair of resolutions
• ## sasmodels/core.py

 r4341dd4
 if not callable(model_info.Iq):
     source = generate.make_source(model_info)['dll']
     old_path = kerneldll.SAS_DLL_PATH
     try:
         kerneldll.SAS_DLL_PATH = path
         dll = kerneldll.make_dll(source, model_info, dtype=numpy_dtype)
     finally:
         kerneldll.SAS_DLL_PATH = old_path
     compiled_dlls.append(dll)
 return compiled_dlls
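The save/override/restore pattern in this hunk is a natural fit for a context manager; a sketch (not part of sasmodels, demonstrated here on a stand-in object):

```python
from contextlib import contextmanager
from types import SimpleNamespace

@contextmanager
def override_attr(obj, name, value):
    """Temporarily set obj.<name>, restoring the old value even on error."""
    old = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        setattr(obj, name, old)

# e.g. with override_attr(kerneldll, "SAS_DLL_PATH", path): ... make_dll ...
fake = SimpleNamespace(SAS_DLL_PATH="/default")
with override_attr(fake, "SAS_DLL_PATH", "/tmp/dlls"):
    assert fake.SAS_DLL_PATH == "/tmp/dlls"
```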
• ## sasmodels/data.py

 r581661f
 *x* is spin echo length and *y* is polarization (P/P0).
 """
 isSesans = True

 def __init__(self, **kw):
     Data1D.__init__(self, **kw)
     self.wavelength_unit = "A"

 class Sample(object):
     """
     Sample attributes.
     """
     def __init__(self):
         # type: () -> None
         pass

 def empty_data1D(q, resolution=0.0, L=0., dL=0.):
• ## sasmodels/direct_model.py

 r1a8c11c
 from . import resolution2d
 from .details import make_kernel_args, dispersion_mesh
 from .modelinfo import DEFAULT_BACKGROUND
 # pylint: disable=unused-import

 # Need to pull background out of resolution for multiple scattering
 background = pars.get('background', DEFAULT_BACKGROUND)
 pars = pars.copy()
 pars['background'] = 0.
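The background handling above can be sketched as a pure function (the helper is hypothetical; `DEFAULT_BACKGROUND` mirrors the value introduced in sasmodels.modelinfo in this changeset):

```python
DEFAULT_BACKGROUND = 1e-3  # default from sasmodels.modelinfo in this changeset

def pull_background(pars):
    """Split the background from a parameter set so that multiple-scattering
    corrections can be applied to the foreground alone, then added back."""
    background = pars.get('background', DEFAULT_BACKGROUND)
    pars = pars.copy()        # don't mutate the caller's dict
    pars['background'] = 0.0
    return pars, background
```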
• ## sasmodels/generate.py

 rd86f0fc
 docs = model_info.docs if model_info.docs is not None else ""
 docs = convert_section_titles_to_boldface(docs)
 if model_info.structure_factor:
     pars = model_info.parameters.kernel_parameters
 else:
     pars = model_info.parameters.COMMON + model_info.parameters.kernel_parameters
 partable = make_partable(pars)
 subst = dict(id=model_info.id.replace('_', '-'),
              name=model_info.name,
              title=model_info.title,
              parameters=partable,
              returns=Sq_units if model_info.structure_factor else Iq_units,
              docs=docs)
• ## sasmodels/kernel_iq.c

 r7c35fda
 out_spin = clip(out_spin, 0.0, 1.0);
 // Previous version of this function took the square root of the weights,
 // under the assumption that
 //
 //     w*I(q, rho1, rho2, ...) = I(q, sqrt(w)*rho1, sqrt(w)*rho2, ...)

 QACRotation *rotation,
 double qx, double qy,
 double *qab_out, double *qc_out)
 {
     // Indirect calculation of qab, from qab^2 = |q|^2 - qc^2
     const double dqc = rotation->R31*qx + rotation->R32*qy;
     const double dqab_sq = -dqc*dqc + qx*qx + qy*qy;
     //*qab_out = sqrt(fabs(dqab_sq));
     *qab_out = dqab_sq > 0.0 ? sqrt(dqab_sq) : 0.0;
     *qc_out = dqc;
 }
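The clamping logic above translates directly into a few lines; a Python sketch of the same guard (the function name is hypothetical):

```python
import math

def qac_rotate(R31, R32, qx, qy):
    """Compute (qab, qc), deriving qab indirectly from qab^2 = |q|^2 - qc^2."""
    qc = R31 * qx + R32 * qy
    qab_sq = qx * qx + qy * qy - qc * qc
    # Rounding error can push qab_sq slightly negative when q is nearly
    # parallel to the c axis; clamp to zero rather than taking sqrt(fabs(.)).
    qab = math.sqrt(qab_sq) if qab_sq > 0.0 else 0.0
    return qab, qc
```

For instance, `qac_rotate(0.0, 1.0, 3.0, 4.0)` puts all of qy into qc, leaving qab = 3; with qx = 0 the subtraction is exact and the clamp returns qab = 0.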
• ## sasmodels/modelinfo.py

 r95498a3
 # Note that scale and background cannot be coordinated parameters whose value
 # depends on some polydisperse parameter with the current implementation
 DEFAULT_BACKGROUND = 1e-3
 COMMON_PARAMETERS = [
     ("scale", "", 1, (0.0, np.inf), "", "Source intensity"),
     ("background", "1/cm", DEFAULT_BACKGROUND, (-np.inf, np.inf), "", "Source background"),
 ]
 assert (len(COMMON_PARAMETERS) == 2

 Parameter('up:frac_f', '', 0., [0., 1.],
           'magnetic', 'fraction of spin up final'),
 Parameter('up:angle', 'degrees', 0., [0., 360.],
           'magnetic', 'spin up angle'),
 ])
• ## sasmodels/models/_spherepy.py

 r108e70e r""" For information about polarised and magnetic scattering, see the :ref:`magnetism` documentation. Definition
• ## sasmodels/models/core_shell_cylinder.py

 r2d81cfe The output of the 2D scattering intensity function for oriented core-shell cylinders is given by (Kline, 2006 [#kline]_). The form factor is normalized by the particle volume. Note that in this model the shell envelops the entire core so that besides a "sleeve" around the core, the shell also provides two flat end caps of thickness = shell thickness. In other words the length of the total cylinder is the length of the core cylinder plus twice the thickness of the shell. If no end caps are desired one should use the :ref:`core-shell-bicelle` and set the thickness of the end caps (in this case the "thick_face") to zero. .. math:: and $\alpha$ is the angle between the axis of the cylinder and $\vec q$, $V_s$ is the total volume (i.e. including both the core and the outer shell), $V_c$ is the volume of the core, $L$ is the length of the core, $R$ is the radius of the core, $T$ is the thickness of the shell, $\rho_c$ is the scattering length density of the core, $\rho_s$ is the scattering return 0.5 * (ddd) ** (1. / 3.) def random(): outer_radius = 10**np.random.uniform(1, 4.7)
• ## sasmodels/models/core_shell_sphere.py

 rdc76240 return radius + thickness def VR(radius, thickness): """ Volume ratio @param radius: core radius @param thickness: shell thickness """ whole = 4.0/3.0 * pi * (radius + thickness)**3 core = 4.0/3.0 * pi * radius**3 return whole, whole - core def random(): outer_radius = 10**np.random.uniform(1.3, 4.3) tests = [ [{'radius': 20.0, 'thickness': 10.0}, 'ER', 30.0], # TODO: VR test suppressed until we sort out new product model # and determine what to do with volume ratio. #[{'radius': 20.0, 'thickness': 10.0}, 'VR', 0.703703704], # The SasView test result was 0.00169, with a background of 0.001
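The VR computation in this hunk checks out against the suppressed test value; a standalone sketch:

```python
from math import pi

def core_shell_sphere_VR(radius, thickness):
    """Return (whole volume, shell volume) for a core-shell sphere."""
    whole = 4.0 / 3.0 * pi * (radius + thickness) ** 3
    core = 4.0 / 3.0 * pi * radius ** 3
    return whole, whole - core

whole, shell = core_shell_sphere_VR(20.0, 10.0)
# shell/whole = (30^3 - 20^3)/30^3 = 19000/27000, i.e. the 0.703703704
# in the suppressed VR test above.
```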
• ## sasmodels/models/ellipsoid.py

 r2d81cfe import numpy as np from numpy import inf, sin, cos, pi try: from numpy import cbrt except ImportError: def cbrt(x): return x ** (1.0/3.0) name = "ellipsoid" idx = radius_polar < radius_equatorial ee[idx] = (radius_equatorial[idx] ** 2 - radius_polar[idx] ** 2) / radius_equatorial[idx] ** 2 valid = (radius_polar * radius_equatorial != 0) & (radius_polar != radius_equatorial) bd = 1.0 - ee[valid] e1 = np.sqrt(ee[valid]) b2 = 1.0 + bd / 2 / e1 * np.log(bL) delta = 0.75 * b1 * b2 ddd = 2.0 * (delta + 1.0) * (radius_polar * radius_equatorial**2)[valid] r = np.zeros_like(radius_polar) r[valid] = 0.5 * cbrt(ddd) idx = radius_polar == radius_equatorial r[idx] = radius_polar[idx] return r def random():
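The fix above is about edge cases: zero radii and the spherical limit Rp == Re, which the eccentricity formula cannot handle. A minimal sketch of the masking pattern (the expression used on the 'valid' entries here is a placeholder, not the real eccentricity average):

```python
import numpy as np

try:
    from numpy import cbrt
except ImportError:  # very old numpy without cbrt
    def cbrt(x):
        return x ** (1.0 / 3.0)

def masked_effective_radius(radius_polar, radius_equatorial):
    rp = np.asarray(radius_polar, dtype=float)
    re = np.asarray(radius_equatorial, dtype=float)
    r = np.zeros_like(rp)
    # Exclude degenerate (zero) sizes and the spherical limit from the
    # eccentricity-based branch.
    valid = (rp * re != 0) & (rp != re)
    # Placeholder for the real b1/b2 average over eccentricity.
    r[valid] = 0.5 * cbrt(2.0 * rp[valid] * re[valid] ** 2)
    # Patch the spherical limit directly: a sphere's effective radius is Rp.
    idx = rp == re
    r[idx] = rp[idx]
    return r
```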
• ## sasmodels/models/fractal_core_shell.py

 ref07e95 ["sld_shell",   "1e-6/Ang^2", 2.0,  [-inf, inf], "sld",    "Sphere shell scattering length density"], ["sld_solvent", "1e-6/Ang^2", 3.0,  [-inf, inf], "sld",    "Solvent scattering length density"], ["volfraction", "",           0.05, [0.0, inf],  "",       "Volume fraction of building block spheres"], ["fractal_dim", "",           2.0,  [0.0, 6.0],  "",       "Fractal dimension"], ["cor_length",  "Ang",      100.0,  [0.0, inf],  "",       "Correlation length of fractal-like aggregates"], return radius + thickness tests = [[{'radius': 20.0, 'thickness': 10.0}, 'ER', 30.0], # At some point the SasView 3.x test result was deemed incorrect. The # following tests were verified against NIST IGOR macros ver 7.850. # NOTE: NIST macros only provide for a polydisperse core (no option # for a poly shell or for a monodisperse core). The results seemed # extremely sensitive to the core PD, varying non-monotonically all # the way to a PD of 1e-6. From 1e-6 to 1e-9 no changes in the results # were observed and the values below were taken using PD=1e-9. # Non-monotonically = I(0.001)=188 to 140 to 177 back to 160 etc. [{'radius': 20.0, 'thickness': 5.0, 'sld_core': 3.5, 'sld_shell': 1.0, 'sld_solvent': 6.35, 'volfraction': 0.05, 'background': 0.0}, [0.001, 0.00291, 0.0107944, 0.029923, 0.100726, 0.476304], [177.146, 165.151, 84.1596, 20.1466, 1.40906, 0.00622666]]]
• ## sasmodels/models/hollow_cylinder.py

 r2d81cfe r""" Definition ---------- This model provides the form factor, $P(q)$, for a monodisperse hollow right angle circular cylinder (rigid tube). The inside and outside of the hollow cylinder are assumed to have the same SLD and the form factor is thus normalized by the volume of the tube (i.e. not by the total cylinder volume). .. math:: P(q) = \text{scale} \left<F^2\right>/V_\text{shell} + \text{background} where the averaging $\left<\ldots\right>$ is applied only for the 1D calculation. If Intensity is given on an absolute scale, the scale factor here is the volume fraction of the shell.  This differs from the :ref:`core-shell-cylinder` in that, in that case, scale is the volume fraction of the entire cylinder (core+shell). The application might be for a bilayer which wraps into a hollow tube and the volume fraction of material is all in the shell, whereas the :ref:`core-shell-cylinder` model might be used for a cylindrical micelle where the tails in the core have a different SLD than the headgroups (in the shell) and the volume fraction of material comes from the whole cylinder.  NOTE: the hollow_cylinder represents a tube whereas the core_shell_cylinder includes a shell layer covering the ends (end caps) as well. The 1D scattering intensity is calculated in the following way (Guinier, 1955) ---------- .. [#] L A Feigin and D I Svergun, *Structure Analysis by Small-Angle X-Ray and Neutron Scattering*, Plenum Press, New York, (1987) Authorship and Verification * **Author:** NIST IGOR/DANSE **Date:** pre 2010 * **Last Modified by:** Paul Butler **Date:** September 06, 2018 (corrected VR calculation) * **Last Reviewed by:** Paul Butler **Date:** September 06, 2018 """ vol_total = pi*router*router*length vol_shell = vol_total - vol_core return vol_total, vol_shell def random(): tests = [ [{}, 0.00005, 1764.926], [{}, 'VR', 0.55555556], [{}, 0.001, 1756.76], [{}, (qx, qy), 2.36885476192],
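The corrected return order and the new VR test value are consistent; a sketch to verify (assuming the model defaults radius=20, thickness=10 used by the `{}` test cases):

```python
from math import pi

def hollow_cylinder_VR(radius, thickness, length):
    """Return (vol_total, vol_shell) for a hollow cylinder (rigid tube)."""
    router = radius + thickness
    vol_total = pi * router ** 2 * length
    vol_core = pi * radius ** 2 * length
    return vol_total, vol_total - vol_core

# With radius=20, thickness=10 the shell/total ratio is
# (30^2 - 20^2)/30^2 = 5/9, matching the 0.55555556 in the VR test above.
total, shell = hollow_cylinder_VR(20.0, 10.0, 400.0)
```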
• ## sasmodels/models/hollow_rectangular_prism.py

 r0e55afe # Note: model title and parameter table are inserted automatically r""" Definition ---------- This model provides the form factor, $P(q)$, for a hollow rectangular parallelepiped with a wall of thickness $\Delta$. The 1D scattering intensity for this model is calculated by forming the difference of the amplitudes of two massive parallelepipeds differing in their outermost dimensions in each direction by the same length increment $2\Delta$ (\ [#Nayuk2012]_ Nayuk, 2012). As in the case of the massive parallelepiped model (:ref:`rectangular-prism`), \rho_\text{solvent})^2 \times P(q) + \text{background} where $\rho_\text{p}$ is the scattering length density of the parallelepiped, $\rho_\text{solvent}$ is the scattering length density of the solvent, and (if the data are in absolute units) *scale* represents the volume fraction (which is unitless) of the rectangular shell of material (i.e. not including the volume of the solvent filled core). For 2d data the orientation of the particle is required, described using For 2d, constraints must be applied during fitting to ensure that the inequality $A < B < C$ is not violated, and hence the correct definition of angles is preserved. The calculation will not report an error if the inequality is *not* preserved, but the results may be not correct.
.. figure:: img/parallelepiped_angle_definition.png ---------- .. [#Nayuk2012] R Nayuk and K Huber, *Z. Phys. Chem.*, 226 (2012) 837-854 Authorship and Verification ---------------------------- * **Author:** Miguel Gonzales **Date:** February 26, 2016 * **Last Modified by:** Paul Kienzle **Date:** December 14, 2017 * **Last Reviewed by:** Paul Butler **Date:** September 06, 2018 """
• ## sasmodels/models/hollow_rectangular_prism_thin_walls.py

 r2d81cfe # Note: model title and parameter table are inserted automatically r""" Definition ---------- This model provides the form factor, $P(q)$, for a hollow rectangular prism with infinitely thin walls. It computes only the 1D scattering, not the 2D. The 1D scattering intensity for this model is calculated according to the equations given by Nayuk and Huber\ [#Nayuk2012]_. Assuming a hollow parallelepiped with infinitely thin walls, edge lengths I(q) = \text{scale} \times V \times (\rho_\text{p} - \rho_\text{solvent})^2 \times P(q) where $V$ is the surface area of the rectangular prism, $\rho_\text{p}$ is the scattering length density of the parallelepiped, $\rho_\text{solvent}$ is the scattering length density of the solvent, and (if the data are in absolute units) *scale* is related to the total surface area. **The 2D scattering intensity is not computed by this model.** Validation of the code was conducted by qualitatively comparing the output of the 1D model to the curves shown in (Nayuk, 2012\ [#Nayuk2012]_). ---------- .. [#Nayuk2012] R Nayuk and K Huber, *Z. Phys. Chem.*, 226 (2012) 837-854 Authorship and Verification ---------------------------- * **Author:** Miguel Gonzales **Date:** February 26, 2016 * **Last Modified by:** Paul Kienzle **Date:** October 15, 2016 * **Last Reviewed by:** Paul Butler **Date:** September 07, 2018 """
• ## sasmodels/models/spherical_sld.py

 r2d81cfe r""" Definition ---------- Similarly to the onion, this model provides the form factor, $P(q)$, for a multi-shell sphere, where the interface between each neighboring shell can be described by one of a number of functions. The form factor is normalized by the total volume of the sphere. Interface shapes are as follows: 0: erf($\nu z$) 1: Rpow($z^\nu$) 2: Lpow($z^\nu$) 3: Rexp($-\nu z$) 4: Lexp($-\nu z$) The form factor $P(q)$ in 1D is calculated by: when $P(Q) * S(Q)$ is applied. References ---------- .. [#] L A Feigin and D I Svergun, *Structure Analysis by Small-Angle X-Ray and Neutron Scattering*, Plenum Press, New York, (1987) Authorship and Verification ---------------------------- * **Author:** Jae-Hie Cho **Date:** Nov 1, 2010 * **Last Modified by:** Paul Kienzle **Date:** Dec 20, 2016 * **Last Reviewed by:** Paul Butler **Date:** September 8, 2018 """
• ## sasmodels/models/spinodal.py

 ref07e95 ---------- This model calculates the SAS signal of a phase separating system undergoing spinodal decomposition. The scattering intensity $I(q)$ is calculated as .. math:: I(q) = I_{max}\frac{(1+\gamma/2)x^2}{\gamma/2+x^{2+\gamma}}+B where $x=q/q_0$, $q_0$ is the peak position, $I_{max}$ is the intensity at $q_0$ (parameterised as the *scale* parameter), and $B$ is a flat background. The spinodal wavelength is given by $2\pi/q_0$. The exponent $\gamma$ is equal to $d+1$ for off-critical concentration mixtures (smooth interfaces) and $2d$ for critical concentration mixtures (entangled interfaces), where $d$ is the dimensionality (ie, 1, 2, 3) of the system. Thus $2 \leq \gamma \leq 6$. A transition from $\gamma=d+1$ to $\gamma=2d$ is expected near the percolation threshold. As this function tends to zero as $q$ tends to zero, in practice it may be necessary to combine it with another function describing the low-angle scattering, or to simply omit the low-angle scattering from the fit. References Physica A 123, 497 (1984).
Authorship and Verification ---------------------------- Revision History ---------------- * **Author:** Dirk Honecker **Date:** Oct 7, 2016 * **Author:**  Dirk Honecker **Date:** Oct 7, 2016 * **Revised:** Steve King    **Date:** Sep 7, 2018 """ title = "Spinodal decomposition model" description = """\ I(q) = scale ((1+gamma/2)x^2)/(gamma/2+x^(2+gamma))+background I(q) = Imax ((1+gamma/2)x^2)/(gamma/2+x^(2+gamma)) + background List of default parameters: scale = scaling gamma = exponent x = q/q_0 Imax = correlation peak intensity at q_0 background = incoherent background gamma = exponent (see model documentation) q_0 = correlation peak position [1/A] background = Incoherent background""" x = q/q_0""" category = "shape-independent"
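The peak-normalized form of the spinodal expression above can be sketched as follows; the function name and argument defaults are illustrative, not the sasmodels API. At $x = 1$ (i.e. $q = q_0$) the expression reduces to $I_{max} + B$, consistent with $I_{max}$ being the intensity at the peak.

```python
import numpy as np

# I(q) = Imax (1+gamma/2) x^2 / (gamma/2 + x^(2+gamma)) + B, with x = q/q_0.
# Illustrative sketch only; parameter names follow the description string,
# with Imax carried by the scale parameter.
def spinodal_iq(q, scale=1.0, gamma=3.0, q_0=0.1, background=0.0):
    x = np.asarray(q, dtype=float) / q_0
    return scale * (1 + gamma/2) * x**2 / (gamma/2 + x**(2 + gamma)) + background
```

Note that the function tends to the flat background as $q \to 0$, which is why the docs suggest handling the low-angle scattering separately.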
• ## sasmodels/models/vesicle.py

 ref07e95 ---------- The 1D scattering intensity is calculated in the following way (Guinier, 1955) This model provides the form factor, *P(q)*, for a unilamellar vesicle and is effectively identical to the hollow sphere reparameterized to be more intuitive for a vesicle and normalizing the form factor by the volume of the shell. The 1D scattering intensity is calculated in the following way (Guinier, 1955\ [#Guinier1955]_) .. math:: ---------- A Guinier and G. Fournet, *Small-Angle Scattering of X-Rays*, John Wiley and Sons, New York, (1955) .. [#Guinier1955] A Guinier and G. Fournet, *Small-Angle Scattering of X-Rays*, John Wiley and Sons, New York, (1955) Authorship and Verification ---------------------------- * **Author:** NIST IGOR/DANSE **Date:** pre 2010 * **Last Modified by:** Paul Butler **Date:** March 20, 2016 * **Last Reviewed by:** Paul Butler **Date:** March 20, 2016 * **Last Reviewed by:** Paul Butler **Date:** September 7, 2018 """ name = "vesicle" title = "This model provides the form factor, *P(q)*, for a unilamellar \ vesicle. This model is effectively identical to the hollow sphere \ reparameterized to be more intuitive for a vesicle and normalizing the \ form factor by the volume of the shell." title = "Vesicle model representing a hollow sphere" description = """ Model parameters:
• ## sasmodels/resolution.py

 r0b9c6df MINIMUM_RESOLUTION = 1e-8 MINIMUM_ABSOLUTE_Q = 0.02  # relative to the minimum q in the data PINHOLE_N_SIGMA = 2.5 # From: Barker & Pedersen 1995 JAC # According to (Barker & Pedersen 1995 JAC), 2.5 sigma is a good limit. # According to simulations with github.com:scattering/sansresolution.git # it is better to use asymmetric bounds (2.5, 3.0) PINHOLE_N_SIGMA = (2.5, 3.0) class Resolution(object): # from the geometry, they may appear since we are using a truncated # gaussian to represent resolution rather than a skew distribution. cutoff = MINIMUM_ABSOLUTE_Q*np.min(self.q) self.q_calc = self.q_calc[self.q_calc >= cutoff] #cutoff = MINIMUM_ABSOLUTE_Q*np.min(self.q) #self.q_calc = self.q_calc[self.q_calc >= cutoff] # Build weight matrix from calculated q values cdf = erf((edges[:, None] - q[None, :]) / (sqrt(2.0)*q_width)[None, :]) weights = cdf[1:] - cdf[:-1] # Limit q range to +/- 2.5 sigma qhigh = q + nsigma*q_width #qlow = q - nsigma*q_width  # linear limits qlow = q*q/qhigh  # log limits # Limit q range to (-2.5,+3) sigma try: nsigma_low, nsigma_high = nsigma except TypeError: nsigma_low = nsigma_high = nsigma qhigh = q + nsigma_high*q_width qlow = q - nsigma_low*q_width  # linear limits ##qlow = q*q/qhigh  # log limits weights[q_calc[:, None] < qlow[None, :]] = 0. weights[q_calc[:, None] > qhigh[None, :]] = 0. def pinhole_extend_q(q, q_width, nsigma=3): def pinhole_extend_q(q, q_width, nsigma=PINHOLE_N_SIGMA): """ Given *q* and *q_width*, find a set of sampling points *q_calc* so function. """ q_min, q_max = np.min(q - nsigma*q_width), np.max(q + nsigma*q_width) try: nsigma_low, nsigma_high = nsigma except TypeError: nsigma_low = nsigma_high = nsigma q_min, q_max = np.min(q - nsigma_low*q_width), np.max(q + nsigma_high*q_width) return linear_extrapolation(q, q_min, q_max)
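The scalar-or-pair handling of `nsigma` introduced in this change can be sketched in isolation: the same `try`/`except TypeError` unpacking accepts either a single symmetric cutoff or asymmetric `(low, high)` bounds such as `PINHOLE_N_SIGMA = (2.5, 3.0)`. The function name below is illustrative, not the `sasmodels.resolution` API.

```python
import numpy as np

def pinhole_q_limits(q, q_width, nsigma=(2.5, 3.0)):
    """Return (qlow, qhigh) cutoffs for the truncated Gaussian resolution."""
    q = np.asarray(q, dtype=float)
    q_width = np.asarray(q_width, dtype=float)
    try:
        nsigma_low, nsigma_high = nsigma   # (low, high) pair: asymmetric
    except TypeError:
        nsigma_low = nsigma_high = nsigma  # scalar: symmetric bounds
    # Linear limits, matching the qlow/qhigh cutoffs applied to the
    # weight matrix (the log-limit variant is commented out upstream).
    return q - nsigma_low * q_width, q + nsigma_high * q_width
```

Weights outside `[qlow, qhigh]` are then zeroed, exactly as in the masked assignment on the `weights` array above.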
• ## sasmodels/kernelpy.py

 r108e70e self.info = model_info self.dtype = np.dtype('d') logger.info("load python model " + self.info.name) logger.info("make python model " + self.info.name) def make_kernel(self, q_vectors):
• ## sasmodels/model_test.py

 r012cd34 import sys import unittest import traceback try: # pylint: enable=unused-import def make_suite(loaders, models): # type: (List[str], List[str]) -> unittest.TestSuite *models* is the list of models to test, or *["all"]* to test all models. """ ModelTestCase = _hide_model_case_from_nose() suite = unittest.TestSuite() skip = [] for model_name in models: if model_name in skip: continue model_info = load_model_info(model_name) #print('------') #print('found tests in', model_name) #print('------') # if ispy then use the dll loader to call pykernel # don't try to call cl kernel since it will not be # available in some environments. is_py = callable(model_info.Iq) # Some OpenCL drivers seem to be flaky, and are not producing the # expected result.  Since we don't have known test values yet for # all of our models, we are instead going to compare the results # for the 'smoke test' (that is, evaluation at q=0.1 for the default # parameters just to see that the model runs to completion) between # the OpenCL and the DLL.  To do this, we define a 'stash' which is # shared between OpenCL and DLL tests.  This is just a list.  If the # list is empty (which it will be when DLL runs, if the DLL runs # first), then the results are appended to the list.  If the list # is not empty (which it will be when OpenCL runs second), the results # are compared to the results stored in the first element of the list. # This is a horrible stateful hack which only makes sense because the # test suite is thrown away after being run once. 
 stash = [] if is_py:  # kernel implemented in python test_name = "%s-python"%model_name test_method_name = "test_%s_python" % model_info.id if model_name not in skip: model_info = load_model_info(model_name) _add_model_to_suite(loaders, suite, model_info) return suite def _add_model_to_suite(loaders, suite, model_info): ModelTestCase = _hide_model_case_from_nose() #print('------') #print('found tests in', model_name) #print('------') # if ispy then use the dll loader to call pykernel # don't try to call cl kernel since it will not be # available in some environments. is_py = callable(model_info.Iq) # Some OpenCL drivers seem to be flaky, and are not producing the # expected result.  Since we don't have known test values yet for # all of our models, we are instead going to compare the results # for the 'smoke test' (that is, evaluation at q=0.1 for the default # parameters just to see that the model runs to completion) between # the OpenCL and the DLL.  To do this, we define a 'stash' which is # shared between OpenCL and DLL tests.  This is just a list.  If the # list is empty (which it will be when DLL runs, if the DLL runs # first), then the results are appended to the list.  If the list # is not empty (which it will be when OpenCL runs second), the results # are compared to the results stored in the first element of the list. # This is a horrible stateful hack which only makes sense because the # test suite is thrown away after being run once. 
stash = [] if is_py:  # kernel implemented in python test_name = "%s-python"%model_info.name test_method_name = "test_%s_python" % model_info.id test = ModelTestCase(test_name, model_info, test_method_name, platform="dll",  # so that dtype="double", stash=stash) suite.addTest(test) else:   # kernel implemented in C # test using dll if desired if 'dll' in loaders or not use_opencl(): test_name = "%s-dll"%model_info.name test_method_name = "test_%s_dll" % model_info.id test = ModelTestCase(test_name, model_info, test_method_name, platform="dll",  # so that dtype="double", stash=stash) test_method_name, platform="dll", dtype="double", stash=stash) suite.addTest(test) else:   # kernel implemented in C # test using dll if desired if 'dll' in loaders or not use_opencl(): test_name = "%s-dll"%model_name test_method_name = "test_%s_dll" % model_info.id test = ModelTestCase(test_name, model_info, test_method_name, platform="dll", dtype="double", stash=stash) suite.addTest(test) # test using opencl if desired and available if 'opencl' in loaders and use_opencl(): test_name = "%s-opencl"%model_name test_method_name = "test_%s_opencl" % model_info.id # Using dtype=None so that the models that are only # correct for double precision are not tested using # single precision.  The choice is determined by the # presence of *single=False* in the model file. test = ModelTestCase(test_name, model_info, test_method_name, platform="ocl", dtype=None, stash=stash) #print("defining", test_name) suite.addTest(test) return suite # test using opencl if desired and available if 'opencl' in loaders and use_opencl(): test_name = "%s-opencl"%model_info.name test_method_name = "test_%s_opencl" % model_info.id # Using dtype=None so that the models that are only # correct for double precision are not tested using # single precision.  The choice is determined by the # presence of *single=False* in the model file. 
 test = ModelTestCase(test_name, model_info, test_method_name, platform="ocl", dtype=None, stash=stash) #print("defining", test_name) suite.addTest(test) def _hide_model_case_from_nose(): return abs(target-actual)/shift < 1.5*10**-digits def run_one(model): # type: (str) -> str """ Run the tests for a single model, printing the results to stdout. *model* can be a python file, which is handy for checking user defined plugin models. # CRUFT: old interface; should be deprecated and removed def run_one(model_name): # msg = "use check_model(model_info) rather than run_one(model_name)" # warnings.warn(msg, category=DeprecationWarning, stacklevel=2) try: model_info = load_model_info(model_name) except Exception: output = traceback.format_exc() return output success, output = check_model(model_info) return output def check_model(model_info): # type: (ModelInfo) -> str """ Run the tests for a single model, capturing the output. Returns success status and the output string. """ # Note that running main() directly did not work from within the # Build a test suite containing just the model loaders = ['opencl'] if use_opencl() else ['dll'] models = [model] try: suite = make_suite(loaders, models) except Exception: import traceback stream.writeln(traceback.format_exc()) return suite = unittest.TestSuite() _add_model_to_suite(loaders, suite, model_info) # Warn if there are no user defined tests. for test in suite: if not test.info.tests: stream.writeln("Note: %s has no user defined tests."%model) stream.writeln("Note: %s has no user defined tests."%model_info.name) break else: output = stream.getvalue() stream.close() return output return result.wasSuccessful(), output
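The "stash" smoke-test comparison described in the comments above can be sketched in a few lines: the first backend to run (e.g. the DLL) appends its results to a shared list, and the second (e.g. OpenCL) compares against the stored copy. Names here are illustrative, not the `model_test.py` API.

```python
def check_with_stash(results, stash, abstol=1e-5):
    """Record results on the first call; compare against them on the second."""
    if not stash:
        # First run: record the results for the other backend to check.
        stash.append(list(results))
    else:
        # Second run: compare element by element against the stored run.
        reference = stash[0]
        assert len(results) == len(reference)
        assert all(abs(a - b) <= abstol for a, b in zip(results, reference))

shared = []
check_with_stash([0.1, 0.2, 0.3], shared)   # DLL pass: stores results
check_with_stash([0.1, 0.2, 0.3], shared)   # OpenCL pass: compares, passes
```

As the comment notes, this is stateful by design and only works because each suite is discarded after a single run.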
• ## sasmodels/sasview_model.py

 rd533590 return value, [value], [1.0] @classmethod def runTests(cls): """ Run any tests built into the model and capture the test output. Returns success flag and output """ from .model_test import check_model return check_model(cls._model_info) def test_cylinder(): # type: () -> float
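The `runTests` hook added above is just a classmethod that defers model checking to `check_model(cls._model_info)` and returns its `(success, output)` pair. A minimal sketch of the pattern, with an injected checker standing in for `sasmodels.model_test.check_model` (the stub class and checker are illustrative only):

```python
class _ExampleModel:
    # Stand-in for the per-class model_info attached by sasview_model.
    _model_info = {"name": "example"}

    @classmethod
    def runTests(cls, checker):
        # Returns (success flag, captured output), like check_model.
        return checker(cls._model_info)

# Inject a trivial checker in place of model_test.check_model.
success, output = _ExampleModel.runTests(
    lambda info: (True, "Note: %s has no user defined tests." % info["name"]))
```

Making this a classmethod lets the GUI run a model's built-in tests without instantiating it.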