# Changeset 298d2d4 in sasmodels

Ignore:
Timestamp:
Mar 26, 2019 11:26:07 AM
Branches:
master
Children:
d522352
Parents:
c9d052d (diff), 598a354 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' into log_sesans

Files:
1 deleted
167 edited

• ## .travis.yml

 r335271e env: - PY=2.7 - DEPLOY=True #- DEPLOY=True - os: linux env: - PY=3.6 - PY=3.7 - os: osx language: generic language: generic env: - PY=3.5 - PY=3.7 branches: only:

• ## doc/guide/magnetism/magnetism.rst

 rbefe905
===========   ================================================================
M0:sld        $D_M M_0$
mtheta:sld    $\theta_M$
mphi:sld      $\phi_M$
up:angle      $\theta_\mathrm{up}$
up:frac_i     $u_i$ = (spin up)/(spin up + spin down) *before* the sample
up:frac_f     $u_f$ = (spin up)/(spin up + spin down) *after* the sample
sld_M0        $D_M M_0$
sld_mtheta    $\theta_M$
sld_mphi      $\phi_M$
up_frac_i     $u_i$ = (spin up)/(spin up + spin down) *before* the sample
up_frac_f     $u_f$ = (spin up)/(spin up + spin down) *after* the sample
up_angle      $\theta_\mathrm{up}$
===========   ================================================================
.. note:: The values of the 'up:frac_i' and 'up:frac_f' must be in the range 0 to 1. The values of the 'up_frac_i' and 'up_frac_f' must be in the range 0 to 1. *Document History*
• ## doc/guide/pd/polydispersity.rst

 r29afc50 .. _polydispersityhelp: Polydispersity Distributions ---------------------------- With some models in sasmodels we can calculate the average intensity for a population of particles that exhibit size and/or orientational polydispersity. The resultant intensity is normalized by the average particle volume such that Polydispersity & Orientational Distributions -------------------------------------------- For some models we can calculate the average intensity for a population of particles that possess size and/or orientational (ie, angular) distributions. In SasView we call the former *polydispersity* but use the parameter *PD* to parameterise both. In other words, the meaning of *PD* in a model depends on the actual parameter it is being applied to. The resultant intensity is then normalized by the average particle volume such that .. math:: P(q) = \text{scale} \langle F^* F \rangle / V + \text{background} where $F$ is the scattering amplitude and $\langle\cdot\rangle$ denotes an average over the size distribution. where $F$ is the scattering amplitude and $\langle\cdot\rangle$ denotes an average over the distribution $f(x; \bar x, \sigma)$, giving .. math:: P(q) = \frac{\text{scale}}{V} \int_\mathbb{R} f(x; \bar x, \sigma) F^2(q, x)\, dx + \text{background} Each distribution is characterized by a center value $\bar x$ or $x_\text{med}$, a width parameter $\sigma$ (note this is *not necessarily* the standard deviation, so read the description carefully), the number of sigmas $N_\sigma$ to include from the tails of the distribution, and the number of points used to compute the average. The center of the distribution is set by the value of the model parameter. The meaning of a polydispersity parameter *PD* (not to be confused with molecular weight distributions in polymer science) in a model depends on the type of parameter it is being applied to. 
the standard deviation, so read the description of the distribution carefully), the number of sigmas $N_\sigma$ to include from the tails of the distribution, and the number of points used to compute the average. The center of the distribution is set by the value of the model parameter. The distribution width applied to *volume* (ie, shape-describing) parameters is relative to the center value such that $\sigma = \mathrm{PD} \cdot \bar x$. However, the distribution width applied to *orientation* (ie, angle-describing) parameters is just $\sigma = \mathrm{PD}$. However, the distribution width applied to *orientation* parameters is just $\sigma = \mathrm{PD}$. $N_\sigma$ determines how far into the tails to evaluate the distribution, with larger values of $N_\sigma$ required for heavier tailed distributions. The scattering in general falls rapidly with $qr$ so the usual assumption that $G(r - 3\sigma_r)$ is tiny and therefore $f(r - 3\sigma_r)G(r - 3\sigma_r)$ that $f(r - 3\sigma_r)$ is tiny and therefore $f(r - 3\sigma_r)f(r - 3\sigma_r)$ will not contribute much to the average may not hold when particles are large. This, too, will require increasing $N_\sigma$. Users should note that the averaging computation is very intensive. Applying polydispersion to multiple parameters at the same time or increasing the number of points in the distribution will require patience! However, the calculations are generally more robust with more data points or more angles. polydispersion and/or orientational distributions to multiple parameters at the same time, or increasing the number of points in the distribution, will require patience! However, the calculations are generally more robust with more data points or more angles. The following distribution functions are provided: Additional distributions are under consideration. 
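The volume-normalized averaging described above can be sketched in a few lines of numpy. This is a minimal illustration, not the sasmodels implementation: the homogeneous-sphere amplitude, the Gaussian weighting truncated at $N_\sigma$, and all parameter values are assumptions chosen for the sketch.

```python
import numpy as np

def sphere_F(q, r):
    # Scattering amplitude of a homogeneous sphere of radius r (unit contrast),
    # F(q, r) = V * 3 (sin qr - qr cos qr) / (qr)^3, so F -> V as q -> 0.
    qr = q * r
    volume = (4.0 / 3.0) * np.pi * r ** 3
    return volume * 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr ** 3

def gaussian_average(q, x_bar, pd, n_sigma=3, n_pts=35):
    # For a volume parameter the width is relative: sigma = PD * x_bar.
    sigma = pd * x_bar
    # Sample the distribution out to n_sigma on either side of the center.
    x = np.linspace(x_bar - n_sigma * sigma, x_bar + n_sigma * sigma, n_pts)
    w = np.exp(-0.5 * ((x - x_bar) / sigma) ** 2)
    w /= w.sum()
    # <F^2> over the distribution, normalized by the average particle volume <V>.
    Fsq = np.array([sphere_F(q, xi) ** 2 for xi in x])   # shape (n_pts, len(q))
    V = (4.0 / 3.0) * np.pi * x ** 3
    return (w[:, None] * Fsq).sum(axis=0) / (w * V).sum()

q = np.logspace(-3, -1, 50)
Iq = gaussian_average(q, x_bar=100.0, pd=0.1)
```

Increasing `n_pts` or `n_sigma` here plays the same role as the number-of-points and $N_\sigma$ controls above: a wider or finer sampling costs proportionally more kernel evaluations.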
**Beware: when the Polydispersity & Orientational Distribution panel in SasView is** **first opened, the default distribution for all parameters is the Gaussian Distribution.** **This may not be suitable. See Suggested Applications below.** .. note:: In 2009 IUPAC decided to introduce the new term 'dispersity' to replace the term 'polydispersity' (see *Pure Appl. Chem.*, (2009), 81(2), 351-353) in order to make the terminology describing distributions of chemical properties unambiguous. However, these terms are unrelated to the proportional size distributions and orientational distributions used in SasView models. Suggested Applications ^^^^^^^^^^^^^^^^^^^^^^ If applying polydispersion to parameters describing particle sizes, use If applying polydispersion to parameters describing particle sizes, consider using the Lognormal or Schulz distributions. If applying polydispersion to parameters describing interfacial thicknesses or angular orientations, use the Gaussian or Boltzmann distributions. or angular orientations, consider using the Gaussian or Boltzmann distributions. If applying polydispersion to parameters describing angles, use the Uniform ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Many commercial Dynamic Light Scattering (DLS) instruments produce a size polydispersity parameter, sometimes even given the symbol $p$\ ! This parameter is defined as the relative standard deviation coefficient of variation of the size distribution and is NOT the same as the polydispersity parameters in the Lognormal and Schulz distributions above (though they are all related) except when the DLS polydispersity parameter is <0.13. .. math:: p_{DLS} = \sqrt{\nu / \bar x^2} where $\nu$ is the variance of the distribution and $\bar x$ is the mean value of $x$. Several measures of polydispersity abound in Dynamic Light Scattering (DLS) and it should not be assumed that any of the following can be simply equated with the polydispersity *PD* parameter used in SasView. 
The dimensionless **Polydispersity Index (PI)** is a measure of the width of the distribution of autocorrelation function decay rates (*not* the distribution of particle sizes itself, though the two are inversely related) and is defined by ISO 22412:2017 as .. math:: PI = \mu_{2} / \bar \Gamma^2 where $\mu_\text{2}$ is the second cumulant, and $\bar \Gamma^2$ is the intensity-weighted average value, of the distribution of decay rates. *If the distribution of decay rates is Gaussian* then .. math:: PI = \sigma^2 / 2\bar \Gamma^2 where $\sigma$ is the standard deviation, allowing a **Relative Polydispersity (RP)** to be defined as .. math:: RP = \sigma / \bar \Gamma = \sqrt{2 \cdot PI} PI values smaller than 0.05 indicate a highly monodisperse system. Values greater than 0.7 indicate significant polydispersity. The **size polydispersity P-parameter** is defined as the relative standard deviation coefficient of variation .. math:: P = \sqrt\nu / \bar R where $\nu$ is the variance of the distribution and $\bar R$ is the mean value of $R$. Here, the product $P \bar R$ is *equal* to the standard deviation of the Lognormal distribution. P values smaller than 0.13 indicate a monodisperse system. For more information see: S King, C Washington & R Heenan, *Phys Chem Chem Phys*, (2005), 7, 143 ISO 22412:2017, International Standards Organisation (2017) _. Polydispersity: What does it mean for DLS and Chromatography _. Dynamic Light Scattering: Common Terms Defined, Whitepaper WP111214. Malvern Instruments (2011) _. S King, C Washington & R Heenan, *Phys Chem Chem Phys*, (2005), 7, 143. T Allen, in *Particle Size Measurement*, 4th Edition, Chapman & Hall, London (1990). .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ | 2018-03-20 Steve King | 2018-04-04 Steve King | 2018-08-09 Steve King
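The relations above are simple enough to check numerically. A minimal sketch: the PI value and the lognormal width below are arbitrary illustrations, not measured data.

```python
import numpy as np

# For a Gaussian distribution of decay rates, RP = sqrt(2 * PI).
# PI = 0.05 is the usual threshold quoted for a highly monodisperse system.
PI = 0.05
RP = np.sqrt(2 * PI)

# The size polydispersity P-parameter is the coefficient of variation of the
# size distribution, sqrt(variance) / mean.  Illustrated on a synthetic
# lognormal sample; the width 0.1 is an arbitrary choice for the sketch.
R = np.random.default_rng(0).lognormal(mean=np.log(100.0), sigma=0.1, size=100_000)
P = np.sqrt(R.var()) / R.mean()
```

With these numbers RP comes out near 0.32 and P near 0.1, i.e. below the 0.13 monodispersity threshold quoted above.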
• ## doc/guide/plugin.rst

 r7e6bc45e Form_Factors_ for more details. **model_info = ...** lets you define a model directly, for example, by loading and modifying existing models.  This is done implicitly by :func:sasmodels.core.load_model_info, which can create a mixture model from a pair of existing models.  For example:: from sasmodels.core import load_model_info model_info = load_model_info('sphere+cylinder') See :class:sasmodels.modelinfo.ModelInfo for details about the model attributes that are defined. Model Parameters ................ **Note: The order of the parameters in the definition will be the order of the parameters in the user interface and the order of the parameters in Iq(), Iqac(), Iqabc() and form_volume(). And** *scale* **and** *background* **parameters are implicit to all models, so they do not need to be included in the parameter table.** parameters in the user interface and the order of the parameters in Fq(), Iq(), Iqac(), Iqabc(), form_volume() and shell_volume(). And** *scale* **and** *background* **parameters are implicit to all models, so they do not need to be included in the parameter table.** - **"name"** is the name of the parameter shown on the FitPage. scattered intensity. - "volume" parameters are passed to Iq(), Iqac(), Iqabc() and form_volume(), and have polydispersity loops generated automatically. - "volume" parameters are passed to Fq(), Iq(), Iqac(), Iqabc(), form_volume() and shell_volume(), and have polydispersity loops generated automatically. - "orientation" parameters are not passed, but instead are combined with appropriately smeared pattern. Each .py file also contains a function:: def random(): ... This function provides a model-specific random parameter set which shows model features in the USANS to SANS range.  For example, core-shell sphere sets the outer radius of the sphere logarithmically in [20, 20,000], which sets the Q value for the transition from flat to falling.  
It then uses a beta distribution to set the percentage of the shape which is shell, giving a preference for very thin or very thick shells (but never 0% or 100%).  Using -sets=10 in sascomp should show a reasonable variety of curves over the default sascomp q range. The parameter set is returned as a dictionary of {parameter: value, ...}. Any model parameters not included in the dictionary will default according to the code in the _randomize_one() function from sasmodels/compare.py. Python Models ............. used. Hollow shapes, where the volume fraction of particle corresponds to the material in the shell rather than the volume enclosed by the shape, must also define a *shell_volume(par1, par2, ...)* function.  The parameters are the same as for *form_volume*.  The *I(q)* calculation should use *shell_volume* squared as its scale factor for the volume normalization. The structure factor calculation needs *form_volume* in order to properly scale the volume fraction parameter, so both functions are required for hollow shapes. **Note: Pure python models do not yet support direct computation of the** **average of $F(q)$ and $F^2(q)$. Neither do they support orientational** **distributions or magnetism (use C models if these are required).** Embedded C Models ................. This expands into the equivalent C code:: #include double Iq(double q, double par1, double par2, ...); double Iq(double q, double par1, double par2, ...) *form_volume* defines the volume of the shape. As in python models, it includes only the volume parameters. *form_volume* defines the volume of the shell for hollow shapes. As in python models, it includes only the volume parameters. **source=['fn.c', ...]** includes the listed C source files in the listed in *source*. Structure Factors ................. Structure factor calculations may need the underlying $\langle F(q) \rangle$ and $\langle F^2(q) \rangle$ rather than $I(q)$.  This is used to compute $\beta = \langle F \rangle^2/\langle F^2 \rangle$ in the decoupling approximation to the structure factor. 
Instead of defining the *Iq* function, models can define *Fq* as something like:: double Fq(double q, double *F1, double *F2, double par1, double par2, ...); double Fq(double q, double *F1, double *F2, double par1, double par2, ...) { // Polar integration loop over all orientations. ... *F1 = 1e-2 * total_F1 * contrast * volume; *F2 = 1e-4 * total_F2 * square(contrast * volume); return I(q, par1, par2, ...); } If the volume fraction scale factor is built into the model (as occurs for the vesicle model, for example), then scale *F1* by $\surd V_f$ so that $\beta$ is computed correctly. Structure factor calculations are not yet supported for oriented shapes. Note: only available as a separate C file listed in *source*, or within a *c_code* block within the python model definition file. Oriented Shapes ............... laboratory frame and beam travelling along $-z$. The oriented C model is called using *Iqabc(qa, qb, qc, par1, par2, ...)* where The oriented C model (oriented pure Python models are not supported) is called using *Iqabc(qa, qb, qc, par1, par2, ...)* where *par1*, etc. are the parameters to the model.  If the shape is rotationally symmetric about *c* then *psi* is not needed, and the model is called to compute the proper magnetism and orientation, which you can implement using *Iqxy(qx, qy, par1, par2, ...)*. **Note: Magnetism is not supported in pure Python models.** Special Functions erf, erfc, tgamma, lgamma:  **do not use** Special functions that should be part of the standard, but are missing or inaccurate on some platforms. Use sas_erf, sas_erfc and sas_gamma instead (see below). Note: lgamma(x) has not yet been tested. or inaccurate on some platforms. Use sas_erf, sas_erfc, sas_gamma and sas_lgamma instead (see below). Some non-standard constants and functions are also provided: Gamma function sas_gamma\ $(x) = \Gamma(x)$. 
The standard math function, tgamma(x) is unstable for $x < 1$ The standard math function, tgamma(x), is unstable for $x < 1$ on some platforms. :code:source = ["lib/sas_gamma.c", ...] (sas_gamma.c _) sas_gammaln(x): log gamma function sas_gammaln\ $(x) = \log \Gamma(|x|)$. The standard math function, lgamma(x), is incorrect for single precision on some platforms. :code:source = ["lib/sas_gammainc.c", ...] (sas_gammainc.c _) sas_gammainc(a, x), sas_gammaincc(a, x): Incomplete gamma function sas_gammainc\ $(a, x) = \int_0^x t^{a-1}e^{-t}\,dt / \Gamma(a)$ and complementary incomplete gamma function sas_gammaincc\ $(a, x) = \int_x^\infty t^{a-1}e^{-t}\,dt / \Gamma(a)$ :code:source = ["lib/sas_gammainc.c", ...] (sas_gammainc.c _) sas_erf(x), sas_erfc(x): If $n$ = 0 or 1, it uses sas_J0($x$) or sas_J1($x$), respectively. Warning: JN(n,x) can be very inaccurate (0.1%) for x not in [0.1, 100]. The standard math function jn(n, x) is not available on all platforms. Sine integral Si\ $(x) = \int_0^x \tfrac{\sin t}{t}\,dt$. Warning: Si(x) can be very inaccurate (0.1%) for x in [0.1, 100]. This function uses Taylor series for small and large arguments: For large arguments, For large arguments use the following Taylor series, .. math:: :code:source = ["lib/Si.c", ...] (Si.c _) (Si.c _) sas_3j1x_x(x): "radius": 120., "radius_pd": 0.2, "radius_pd_n":45}, 0.2, 0.228843], [{"radius": 120., "radius_pd": 0.2, "radius_pd_n":45}, "ER", 120.], [{"radius": 120., "radius_pd": 0.2, "radius_pd_n":45}, "VR", 1.], [{"radius": 120., "radius_pd": 0.2, "radius_pd_n":45}, 0.1, None, None, 120., None, 1.],  # q, F, F^2, R_eff, V, form:shell [{"@S": "hardsphere"}, 0.1, None], ] **tests=[[{parameters}, q, result], ...]** is a list of lists. **tests=[[{parameters}, q, Iq], ...]** is a list of lists. Each list is one test and contains, in order: - input and output values can themselves be lists if you have several $q$ values to test for the same model parameters. 
- for testing *ER* and *VR*, give the inputs as "ER" and "VR" respectively; the output for *VR* should be the sphere/shell ratio, not the individual sphere and shell values. - for testing effective radius, volume and form:shell volume ratio, use the extended form of the tests results, with *None, None, R_eff, V, V_r* instead of *Iq*.  This calls the kernel *Fq* function instead of *Iq*. - for testing F and F^2 (used for beta approximation) do the same as the effective radius test, but include values for the first two elements, $\langle F \rangle$ and $\langle F^2 \rangle$. - for testing interaction between form factor and structure factor, specify the structure factor name in the parameters as *{"@S": "name", ...}* with the remaining list of parameters defined by the *P@S* product model. .. _Test_Your_New_Model: and a check that the model runs. If you are not using sasmodels from SasView, skip this step. Recommended Testing ................... **NB: For now, this more detailed testing is only possible if you have a SasView build environment available!** If the model compiles and runs, you can next run the unit tests that
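Pulled together, a plugin model's *tests* attribute might look like the following. This is a minimal sketch assembled from the example values quoted in the diff above; the sphere-like parameters and the hardsphere pairing are illustrative only.

```python
# Hypothetical `tests` attribute for a sphere-like plugin model, showing the
# simple [pars, q, Iq] form, the extended form that exercises the Fq kernel,
# and a combined form factor / structure factor test.
tests = [
    # simple form: parameters, q, expected Iq
    [{"radius": 120., "radius_pd": 0.2, "radius_pd_n": 45}, 0.2, 0.228843],
    # extended form: parameters, q, F, F^2, R_eff, V, form:shell volume ratio
    # (None entries are skipped; filling F and F^2 tests the beta-approximation
    # averages as described above)
    [{"radius": 120., "radius_pd": 0.2, "radius_pd_n": 45},
     0.1, None, None, 120., None, 1.],
    # interaction with a structure factor via the P@S product model
    [{"@S": "hardsphere"}, 0.1, None],
]
```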
• ## doc/guide/scripting.rst

 r4aa5dce The key functions are :func:sasmodels.core.load_model for loading the model definition and compiling the kernel and :func:sasmodels.data.load_data for calling sasview to load the data. Need the data because that defines the resolution function and the q values to evaluate. If there is no data, then use :func:sasmodels.data.empty_data1D or :func:sasmodels.data.empty_data2D to create some data with a given $q$. Using sasmodels through bumps ============================= With the data and the model, you can wrap it in a *bumps* model with :func:sasmodels.data.load_data for calling sasview to load the data. Preparing data ============== Usually you will load data via the sasview loader, with the :func:sasmodels.data.load_data function.  For example:: from sasmodels.data import load_data data = load_data("sasmodels/example/093191_201.dat") You may want to apply a data mask, such as a beam stop, and trim high $q$:: from sasmodels.data import set_beam_stop set_beam_stop(data, qmin, qmax) The :func:sasmodels.data.set_beam_stop method simply sets the *mask* attribute for the data. The data defines the resolution function and the q values to evaluate, so even if you are simulating experiments prior to making measurements, you still need a data object for reference. Use :func:sasmodels.data.empty_data1D or :func:sasmodels.data.empty_data2D to create a container with a given $q$ and $\Delta q/q$.  For example:: import numpy as np from sasmodels.data import empty_data1D # 120 points logarithmically spaced from 0.005 to 0.2, with dq/q = 5% q = np.logspace(np.log10(5e-3), np.log10(2e-1), 120) data = empty_data1D(q, resolution=0.05) To use a more realistic model of resolution, or to load data from a file format not understood by SasView, you can use :class:sasmodels.data.Data1D or :class:sasmodels.data.Data2D directly.  The 1D data uses *x*, *y*, *dx* and *dy* for $x = q$ and $y = I(q)$, and 2D data uses *x*, *y*, *z*, *dx*, *dy*, *dz* for $x, y = qx, qy$ and $z = I(qx, qy)$. 
[Note: internally, the Data2D object uses SasView conventions, *qx_data*, *qy_data*, *data*, *dqx_data*, *dqy_data*, and *err_data*.] For USANS data, use 1D data, but set *dxl* and *dxw* attributes to indicate slit resolution:: data.dxl = 0.117 See :func:sasmodels.resolution.slit_resolution for details. SESANS data is more complicated; if your SESANS format is not supported by SasView you need to define a number of attributes beyond *x*, *y*.  For example:: SElength = np.linspace(0, 2400, 61) # [A] data = np.ones_like(SElength) err_data = np.ones_like(SElength)*0.03 class Source: wavelength = 6 # [A] wavelength_unit = "A" class Sample: zacceptance = 0.1 # [A^-1] thickness = 0.2 # [cm] class SESANSData1D: #q_zmax = 0.23 # [A^-1] lam = 0.2 # [nm] x = SElength y = data dy = err_data sample = Sample() data = SESANSData1D() x, y = ... # create or load sesans data = smd.Data The *data* module defines various data plotters as well. Using sasmodels directly ======================== Once you have a computational kernel and a data object, you can evaluate the model for various parameters using :class:sasmodels.direct_model.DirectModel.  The resulting object *f* will be callable as *f(par=value, ...)*, returning the $I(q)$ for the $q$ values in the data.  For example:: import numpy as np from sasmodels.data import empty_data1D from sasmodels.core import load_model from sasmodels.direct_model import DirectModel # 120 points logarithmically spaced from 0.005 to 0.2, with dq/q = 5% q = np.logspace(np.log10(5e-3), np.log10(2e-1), 120) data = empty_data1D(q, resolution=0.05) kernel = load_model("ellipsoid") f = DirectModel(data, kernel) Iq = f(radius_polar=100) Polydispersity information is set with special parameter names: * *par_pd* for polydispersity width, $\Delta p/p$, * *par_pd_n* for the number of points in the distribution, * *par_pd_type* for the distribution type (as a string), and * *par_pd_nsigmas* for the limits of the distribution. 
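The special parameter names compose with the ordinary model parameters in a single keyword set. A minimal sketch with arbitrary values; the DirectModel call is commented out because it needs a compiled sasmodels kernel and a data object as above.

```python
# Polydispersity is controlled by appending suffixes to the parameter name.
# The values here are arbitrary illustrations.
pars = {
    "radius_polar": 100.,                # center of the distribution
    "radius_polar_pd": 0.1,              # width, Delta p / p
    "radius_polar_pd_n": 35,             # number of points in the distribution
    "radius_polar_pd_type": "gaussian",  # distribution type, as a string
    "radius_polar_pd_nsigmas": 3,        # limits of the distribution
}
# Iq = f(**pars)   # with f = DirectModel(data, kernel) as in the example above
```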
Using sasmodels through the bumps optimizer =========================================== Like DirectModel, you can wrap data and a kernel in a *bumps* model with class:sasmodels.bumps_model.Model and create a class:sasmodels.bump_model.Experiment that you can fit with the *bumps* class:sasmodels.bumps_model.Experiment that you can fit with the *bumps* interface. Here is an example from the *example* directory such as *example/model.py*:: SasViewCom bumps.cli example/model.py --preview Using sasmodels directly ======================== Bumps has a notion of parameter boxes in which you can set and retrieve values.  Instead of using bumps, you can create a directly callable function with :class:sasmodels.direct_model.DirectModel.  The resulting object *f* will be callable as *f(par=value, ...)*, returning the $I(q)$ for the $q$ values in the data.  Polydisperse parameters use the same naming conventions as in the bumps model, with e.g., radius_pd being the polydispersity associated with radius. Calling the computation kernel ============================== Getting a simple function that you can call on a set of q values and return python kernel.  Once the kernel is in hand, we can then marshal a set of parameters into a :class:sasmodels.details.CallDetails object and ship it to the kernel using the :func:sasmodels.direct_model.call_kernel function.  An example should help, *example/cylinder_eval.py*:: from numpy import logspace the kernel using the :func:sasmodels.direct_model.call_kernel function.  To access the underlying $\langle F \rangle$ and $\langle F^2 \rangle$, use :func:sasmodels.direct_model.call_Fq instead. 
The following example should help, *example/cylinder_eval.py*:: from numpy import logspace, sqrt from matplotlib import pyplot as plt from sasmodels.core import load_model from sasmodels.direct_model import call_kernel from sasmodels.direct_model import call_kernel, call_Fq model = load_model('cylinder') q = logspace(-3, -1, 200) kernel = model.make_kernel([q]) Iq = call_kernel(kernel, dict(radius=200.)) plt.loglog(q, Iq) pars = {'radius': 200, 'radius_pd': 0.1, 'scale': 2} Iq = call_kernel(kernel, pars) F, Fsq, Reff, V, Vratio = call_Fq(kernel, pars) plt.loglog(q, Iq, label='2 I(q)') plt.loglog(q, F**2/V, label='<F>^2/V') plt.loglog(q, Fsq/V, label='<F^2>/V') plt.xlabel('q (1/A)') plt.ylabel('I(q) (1/cm)') plt.title('Cylinder with radius 200.') plt.legend() plt.show() On windows, this can be called from the cmd prompt using sasview as:: .. figure:: direct_call.png Comparison between $I(q)$, $\langle F \rangle^2$ and $\langle F^2 \rangle$ for cylinder model. This compares $I(q)$ with $\langle F \rangle^2$ and $\langle F^2 \rangle$ for a cylinder with *radius=200 +/- 20* and *scale=2*. Note that *call_Fq* does not include scale and background, nor does it normalize by the average volume. The definition of $F = \rho V \hat F$ scaled by the contrast and volume, compared to the canonical cylinder $\hat F$, with $\hat F(0) = 1$. Integrating over polydispersity and orientation, the returned values are $\sum_{r,w\in N(r_o, r_o/10)} \sum_\theta w F(q,r_o,\theta)\sin\theta$ and $\sum_{r,w\in N(r_o, r_o/10)} \sum_\theta w F^2(q,r_o,\theta)\sin\theta$. On windows, this example can be called from the cmd prompt using sasview as the python interpreter:: SasViewCom example/cylinder_eval.py
• ## doc/rst_prolog

 r30b60d2 .. |Ang^-3| replace:: |Ang|\ :sup:-3 .. |Ang^-4| replace:: |Ang|\ :sup:-4 .. |nm^-1| replace:: nm\ :sup:-1 .. |cm^-1| replace:: cm\ :sup:-1 .. |cm^2| replace:: cm\ :sup:2
• ## example/cylinder_eval.py

 r2e66ef5 """ from numpy import logspace from numpy import logspace, sqrt from matplotlib import pyplot as plt from sasmodels.core import load_model from sasmodels.direct_model import call_kernel from sasmodels.direct_model import call_kernel, call_Fq model = load_model('cylinder') q = logspace(-3, -1, 200) kernel = model.make_kernel([q]) Iq = call_kernel(kernel, dict(radius=200.)) plt.loglog(q, Iq) pars = {'radius': 200, 'radius_pd': 0.1, 'scale': 2} Iq = call_kernel(kernel, pars) F, Fsq, Reff, V, Vratio = call_Fq(kernel, pars) plt.loglog(q, Iq, label='2 I(q)') plt.loglog(q, F**2/V, label='<F>^2/V') plt.loglog(q, Fsq/V, label='<F^2>/V') plt.xlabel('q (1/A)') plt.ylabel('I(q)') plt.ylabel('I(q) (1/cm)') plt.title('Cylinder with radius 200.') plt.legend() plt.show()
• ## example/model_ellipsoid_hayter_msa.py

 r8a5f021 # DEFINE THE MODEL kernel = load_model('ellipsoid*hayter_msa') kernel = load_model('ellipsoid@hayter_msa') pars = dict(scale=6.4, background=0.06, sld=0.33, sld_solvent=2.15, radius_polar=14.0,
• ## example/multiscatfit.py

 r49d1f8b8 # Show the model without fitting PYTHONPATH=..:../explore:../../bumps:../../sasview/src python multiscatfit.py PYTHONPATH=..:../../bumps:../../sasview/src python multiscatfit.py # Run the fit PYTHONPATH=..:../explore:../../bumps:../../sasview/src ../../bumps/run.py \ PYTHONPATH=..:../../bumps:../../sasview/src ../../bumps/run.py \ multiscatfit.py --store=/tmp/t1 ) # Tie the model to the data M = Experiment(data=data, model=model) # Stack multiple scattering on top of the existing resolution function. M.resolution = MultipleScattering(resolution=M.resolution, probability=0.) # SET THE FITTING PARAMETERS model.radius_polar.range(15, 3000) model.scale.range(0, 0.1) # Multiple scattering probability parameter # HACK: the probability is stuffed in as an extra parameter to the experiment. probability = Parameter(name="probability", value=0.0) probability.range(0.0, 0.9) # The multiple scattering probability parameter is in the resolution function # instead of the scattering function, so access it through M.resolution M.scattering_probability.range(0.0, 0.9) M = Experiment(data=data, model=model, extra_pars={'probability': probability}) # Stack multiple scattering on top of the existing resolution function. # Because resolution functions in sasview don't have fitting parameters, # we instead allow the multiple scattering calculator to take a function # instead of a probability.  This function returns the current value of # the parameter. ** THIS IS TEMPORARY ** when multiple scattering is # properly integrated into sasmodels and sasview, its fittable parameter # will be treated like the model parameters. M.resolution = MultipleScattering(resolution=M.resolution, probability=lambda: probability.value, ) M._kernel_inputs = M.resolution.q_calc # Let bumps know that we are fitting this experiment problem = FitProblem(M)