- Timestamp: Mar 31, 2019 10:20:32 AM (6 years ago)
- Branches: master
- Children: d827c5e
- Parents: 7050455 (diff), e2da671 (diff)

  Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
  Use the (diff) links above to see all the changes relative to each parent.
- git-author: Andrew Jackson <andrew.jackson@…> (03/31/19 10:20:32)
- git-committer: GitHub <noreply@…> (03/31/19 10:20:32)
- Location: doc/guide
- Files: 1 added, 4 edited
doc/guide/pd/polydispersity.rst
rd089a00 → ra5cb9bc

For some models we can calculate the average intensity for a population of
particles that possess size and/or orientational (ie, angular) distributions.
In SasView we call the former *polydispersity* but use the parameter *PD* to
parameterise both. In other words, the meaning of *PD* in a model depends on
the actual parameter it is being applied to.

The resultant intensity is then normalized by the average particle volume such
that

.. math::

  P(q) = \text{scale} \langle F^* F \rangle / V + \text{background}

where $F$ is the scattering amplitude and $\langle\cdot\rangle$ denotes an
average over the distribution $f(x; \bar x, \sigma)$, giving

.. math::

  P(q) = \frac{\text{scale}}{V} \int_\mathbb{R}
  f(x; \bar x, \sigma) F^2(q, x)\, dx + \text{background}

Each distribution is characterized by a center value $\bar x$ or
$x_\text{med}$, a width parameter $\sigma$ (note this is *not necessarily*
the standard deviation, so read the description carefully), the number of
sigmas $N_\sigma$ to include from the tails of the distribution, and the
number of points used to compute the average. The center of the distribution
is set by the value of the model parameter. The meaning of a polydispersity
parameter *PD* (not to be confused with a molecular weight distribution
in polymer science) in a model depends on the type of parameter it is being
applied to.

The distribution width applied to *volume* (ie, shape-describing) parameters
is relative to the center value such that $\sigma = \mathrm{PD} \cdot \bar x$.
However, the distribution width applied to *orientation* (ie, angle-describing)
parameters is just $\sigma = \mathrm{PD}$.
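To make the normalization above concrete, the following minimal numpy sketch
averages a sphere form factor over a Gaussian radius distribution. It is an
illustration only (a plain trapezoidal average on a uniform grid), not the
SasView/sasmodels implementation::

    import numpy as np

    def sphere_F(q, r):
        """Unnormalized sphere amplitude, F(q, r) ~ V(r) * 3 j1(qr)/(qr), q > 0."""
        qr = q * r
        vol = 4.0 / 3.0 * np.pi * r**3
        return vol * 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3

    q = 0.05                           # 1/Ang
    rbar, pd, nsigma, npts = 60.0, 0.1, 3.0, 35
    sigma = pd * rbar                  # width is relative for a size (volume) parameter
                                       # (for an orientation parameter, sigma = PD directly)

    r = np.linspace(rbar - nsigma * sigma, rbar + nsigma * sigma, npts)
    w = np.exp(-0.5 * ((r - rbar) / sigma)**2)     # Gaussian weights, unnormalized

    avg_F2 = np.trapz(w * sphere_F(q, r)**2, r) / np.trapz(w, r)
    avg_V = np.trapz(w * 4.0 / 3.0 * np.pi * r**3, r) / np.trapz(w, r)
    Pq = avg_F2 / avg_V                # scale = 1, background = 0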
$N_\sigma$ determines how far into the tails to evaluate the distribution,

…

Users should note that the averaging computation is very intensive. Applying
polydispersion and/or orientational distributions to multiple parameters at
the same time, or increasing the number of points in the distribution, will
require patience! However, the calculations are generally more robust with
more data points or more angles.

…

* *Schulz Distribution*
* *Array Distribution*
* *User-defined Distributions*

These are all implemented as *number-average* distributions.

**Beware: when the Polydispersity & Orientational Distribution panel in SasView is**
**first opened, the default distribution for all parameters is the Gaussian Distribution.**
**This may not be suitable. See Suggested Applications below.**

.. note:: In 2009 IUPAC decided to introduce the new term 'dispersity' to replace
    the term 'polydispersity' (see `Pure Appl. Chem., (2009), 81(2),
    351-353 <http://media.iupac.org/publications/pac/2009/pdf/8102x0351.pdf>`_)
    in order to make the terminology describing distributions of chemical
    properties unambiguous. However, these terms are unrelated to the
    proportional size distributions and orientational distributions used in
    SasView models.

…

or angular orientations, consider using the Gaussian or Boltzmann distributions.

If applying polydispersion to parameters describing angles, use the Uniform
distribution. Beware of using distributions that are always positive (eg, the
Lognormal) because angles can be negative!

The array distribution provides a very simple means of implementing a user-
defined distribution, but without any fittable parameters. Greater flexibility
is conferred by the user-defined distribution.

.. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ

…

.. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ

User-defined Distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also define your own distribution by creating a python file defining a
*Distribution* object with a *_weights* method. The *_weights* method takes
*center*, *sigma*, *lb* and *ub* as arguments, and can access *self.npts*
and *self.nsigmas* from the distribution. They are interpreted as follows:

* *center*: the value of the shape parameter (for size dispersity) or zero
  if it is an angular dispersity. This parameter may be fitted.

* *sigma*: the width of the distribution, which is the polydispersity parameter
  times the center for size dispersity, or the polydispersity parameter alone
  for angular dispersity. This parameter may be fitted.

* *lb*, *ub*: the parameter limits (lower and upper bounds) given in the model
  definition file. For example, a radius parameter has *lb* equal to zero. A
  volume fraction parameter would have *lb* equal to zero and *ub* equal to one.

* *self.nsigmas*: the distance to go into the tails when evaluating the
  distribution.
  For a two parameter distribution, this value could be
  co-opted for the second parameter, though it will not be available
  for fitting.

* *self.npts*: the number of points to use when evaluating the distribution.
  The user will adjust this to trade calculation time for accuracy, but the
  distribution code is free to return more or fewer, or to use it for the third
  parameter in a three parameter distribution.

As an example, the following code wraps the Laplace distribution from
*scipy.stats*::

    import numpy as np
    from scipy.stats import laplace

    from sasmodels import weights

    class Dispersion(weights.Dispersion):
        r"""
        Laplace distribution

        .. math::

            w(x) = e^{-|x - \mu| / \sigma}
        """
        type = "laplace"
        default = dict(npts=35, width=0, nsigmas=3)  # default values
        def _weights(self, center, sigma, lb, ub):
            x = self._linspace(center, sigma, lb, ub)
            wx = laplace.pdf(x, center, sigma)
            return x, wx

You can plot the weights for a given value and width using the following::

    from numpy import inf
    from matplotlib import pyplot as plt
    from sasmodels import weights

    # reload the user-defined weights
    weights.load_weights()
    x, wx = weights.get_weights('laplace', n=35, width=0.1, nsigmas=3, value=50,
                                limits=[0, inf], relative=True)

    # plot the weights
    plt.interactive(True)
    plt.plot(x, wx, 'x')

The *self.nsigmas* and *self.npts* parameters are normally used to control
the accuracy of the distribution integral. The *self._linspace* function
uses them to define the *x* values (along with the *center*, *sigma*,
*lb*, and *ub* which are passed as parameters). If you repurpose *npts* or
*nsigmas* you will need to generate your own *x*. Be sure to honour the
limits *lb* and *ub*, for example to disallow a negative radius or to constrain
the volume fraction to lie between zero and one.
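As a further illustration of honouring these limits, the hypothetical
*truncated_gaussian* dispersion below builds its own *x* grid and clips it to
*lb* and *ub* before computing the weights. It follows the same pattern as the
Laplace example above but is not shipped with sasmodels::

    import numpy as np
    from scipy.stats import truncnorm

    from sasmodels import weights

    class Dispersion(weights.Dispersion):
        r"""
        Gaussian distribution truncated to the parameter limits [lb, ub].
        """
        type = "truncated_gaussian"
        default = dict(npts=35, width=0, nsigmas=3)
        def _weights(self, center, sigma, lb, ub):
            if sigma == 0.0 or self.npts < 2:
                # zero width: a single point carrying all the weight
                return np.array([center]), np.array([1.0])
            # build the x grid by hand, then clip it to the allowed range
            x = center + sigma * np.linspace(-self.nsigmas, self.nsigmas, self.npts)
            x = x[(x >= lb) & (x <= ub)]
            # a, b are the truncation points in units of sigma about the center
            a = (max(lb, center - self.nsigmas * sigma) - center) / sigma
            b = (min(ub, center + self.nsigmas * sigma) - center) / sigma
            wx = truncnorm.pdf(x, a, b, loc=center, scale=sigma)
            return x, wx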
To activate a user-defined distribution, put it in a file such as *distname.py*
in the *SAS_WEIGHTS_PATH* folder. This path is defined with an environment
variable, defaulting to::

    SAS_WEIGHTS_PATH=~/.sasview/weights

The weights path is loaded on startup. To update the distribution definition
in a running application you will need to enter the following python commands::

    import sasmodels.weights
    sasmodels.weights.load_weights('path/to/distname.py')

.. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ

Note about DLS polydispersity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Several measures of polydispersity abound in Dynamic Light Scattering (DLS) and
it should not be assumed that any of the following can be simply equated with
the polydispersity *PD* parameter used in SasView.

The dimensionless **Polydispersity Index (PI)** is a measure of the width of the
distribution of autocorrelation function decay rates (*not* the distribution of
particle sizes itself, though the two are inversely related) and is defined by
ISO 22412:2017 as

.. math::

    PI = \mu_{2} / \bar \Gamma^2

where $\mu_2$ is the second cumulant and $\bar \Gamma$ is the intensity-weighted
average value of the distribution of decay rates.

…

.. math::

    PI = \sigma^2 / (2 \bar \Gamma^2)

where $\sigma$ is the standard deviation, allowing a **Relative Polydispersity (RP)**
to be defined as

.. math::

    RP = \sigma / \bar \Gamma = \sqrt{2 \cdot PI}

PI values smaller than 0.05 indicate a highly monodisperse system. Values
greater than 0.7 indicate significant polydispersity.

The **size polydispersity P-parameter** is defined as the relative standard
deviation (coefficient of variation)

.. math::

…

where $\nu$ is the variance of the distribution and $\bar R$ is the mean
value of $R$. Here, the product $P \bar R$ is *equal* to the standard
deviation of the Lognormal distribution.
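As a worked example of the relations above, a DLS report quoting $PI = 0.05$
corresponds to $RP = \sqrt{2 \times 0.05} \approx 0.32$, that is, a relative
width of roughly 30% in the decay-rate distribution; this value cannot simply
be reused as the SasView *PD* of a size parameter.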
doc/guide/index.rst
rda5536f → rbc69321

*plugin.rst* moves up the table of contents and *fitting_sq.rst* is added after
it, giving::

    pd/polydispersity.rst
    resolution.rst
    plugin.rst
    fitting_sq.rst
    magnetism/magnetism.rst
    orientation/orientation.rst
    sesans/sans_to_sesans.rst
    sesans/sesans_fitting.rst
    scripting.rst
    refs.rst
doc/guide/plugin.rst
r9150036 → re15a822

**Note: The order of the parameters in the definition will be the order of the
parameters in the user interface and the order of the parameters in Fq(), Iq(),
Iqac(), Iqabc(), radius_effective(), form_volume() and shell_volume().
And** *scale* **and** *background* **parameters are implicit to all models,
so they do not need to be included in the parameter table.**

…

can take arbitrary values, even for integer parameters, so your model should
round the incoming parameter value to the nearest integer. In C code, you can
do this using:

.. code-block:: c

    static double

…

.. note::

    Pure python models do not yet support direct computation of $<F(Q)^2>$ or
    $<F(Q)>^2$. Neither do they support orientational distributions or magnetism
    (use C models if these are required).

For pure python models, define the *Iq* function::

…

Models should define *form_volume(par1, par2, ...)* where the parameter
list includes the *volume* parameters in order. This is used for a weighted
volume normalization so that scattering is on an absolute scale. For
solid shapes, the *I(q)* function should use *form_volume* squared
as its scale factor. If *form_volume* is not defined, then the default
*form_volume = 1.0* will be used.

Hollow shapes, where the volume fraction of the particle corresponds to the
material in the shell rather than the volume enclosed by the shape, must
also define a *shell_volume(par1, par2, ...)* function. The parameters
are the same as for *form_volume*. Here the *I(q)* function should use
*shell_volume* squared instead of *form_volume* squared so that the scale
parameter corresponds to the volume fraction of material in the sample.
The structure factor calculation needs the volume fraction of the filled
shapes for its calculation, so the volume fraction parameter in the model
is automatically scaled by *form_volume/shell_volume* prior to calling the
structure factor.
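Putting these pieces together, the sketch below is a minimal pure-python plugin
model for a solid sphere. The module name, defaults and absolute-scale prefactor
are illustrative assumptions and the model is deliberately simplified; it is not
the sphere model shipped with sasmodels::

    import numpy as np

    name = "sphere_example"
    title = "Illustrative solid sphere (pure python)"
    description = "P(q) for a solid sphere; polydispersity is handled by the framework."
    parameters = [
        # [name, units, default, [min, max], type, description]
        ["sld",         "1e-6/Ang^2", 1.0, [-np.inf, np.inf], "sld",    "Particle scattering length density"],
        ["sld_solvent", "1e-6/Ang^2", 6.0, [-np.inf, np.inf], "sld",    "Solvent scattering length density"],
        ["radius",      "Ang",       50.0, [0, np.inf],       "volume", "Sphere radius"],
    ]

    def form_volume(radius):
        """Particle volume, used for the 1/<V> normalization described above."""
        return 4.0 / 3.0 * np.pi * radius ** 3

    def Iq(q, sld, sld_solvent, radius):
        """I(q); the scale factor carries form_volume squared, as for any solid shape."""
        qr = q * radius
        bes = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr ** 3   # 3 j1(qr)/(qr), q > 0
        contrast = sld - sld_solvent
        f = contrast * form_volume(radius) * bes
        # 1e-4 converts (1e-6/Ang^2)^2 * Ang^3 to 1/cm once the framework divides by <V>
        return 1.0e-4 * f * f
    Iq.vectorized = True   # Iq accepts an array of q values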
Embedded C Models

…

    """

This expands into the equivalent C code:

.. code-block:: c

    double Iq(double q, double par1, double par2, ...);

…

includes only the volume parameters.

*shell_volume* defines the volume of the shell for hollow shapes. As in
python models, it includes only the volume parameters.

…

Rather than returning NAN from Iq, you must define the *INVALID(v)* macro. The
*v* parameter lets you access all the parameters in the model using
*v.par1*, *v.par2*, etc. For example:

.. code-block:: c

    #define INVALID(v) (v.bell_radius < v.radius)

…

Instead of defining the *Iq* function, models can define *Fq* as
something like:

.. code-block:: c

    double Fq(double q, double *F1, double *F2, double par1, double par2, ...);

…

laboratory frame and beam travelling along $-z$.

The oriented C model (oriented pure Python models are not supported)
is called using *Iqabc(qa, qb, qc, par1, par2, ...)* where
*par1*, etc. are the parameters to the model. If the shape is rotationally
symmetric about *c* then *psi* is not needed, and the model is called

…

Using the $z, w$ values for Gauss-Legendre integration in "lib/gauss76.c", the
numerical integration is then:

.. code-block:: c

    double outer_sum = 0.0;

…

to compute the proper magnetism and orientation, which you can implement
using *Iqxy(qx, qy, par1, par2, ...)*.

**Note: Magnetism is not supported in pure Python models.**

Special Functions

…

memory, and wrong answers computed. The conclusion from a very long and
strange debugging session was that any arrays that you declare in your
model should be a multiple of four. For example:

.. code-block:: c

    double Iq(q, p1, p2, ...)

…

structure factor is the *hardsphere* interaction, which
uses the effective radius of the form factor as an input to the structure
factor model.
The effective radius is the weighted average over all
values of the shape in polydisperse systems.

There can be many notions of effective radius, depending on the shape. For
a sphere it is clearly just the radius, but for an ellipsoid of revolution
we provide average curvature, equivalent sphere radius, minimum radius and
maximum radius. These options are listed as *radius_effective_modes* in
the python model definition, and must be computed by the *radius_effective*
function, which takes the *radius_effective_mode* parameter as an integer
along with the various model parameters. Unlike normal C/Python arrays,
the first mode is 1, the second is 2, etc. Mode 0 indicates that the
effective radius from the shape is to be ignored in favour of the
effective radius parameter in the structure factor model.

Consider the core-shell sphere, which defines the following effective radius
modes in the python model::

    radius_effective_modes = [
        "outer radius",
        "core radius",
    ]

and the following function in the C-file for the model:

.. code-block:: c

    static double
    radius_effective(int mode, double radius, double thickness)
    {
        switch (mode) {
        case 1: return radius + thickness;  // outer radius
        case 2: return radius;              // core radius
        default: return 0.;                 // mode 0: value is ignored
        }
    }

    static double
    form_volume(double radius, double thickness)
    {
        return M_4PI_3 * cube(radius + thickness);
    }

Given polydispersity over *(r1, r2, ..., rm)* in radius and *(t1, t2, ..., tn)*
in thickness, *radius_effective* is called over a mesh grid covering all
possible combinations of radius and thickness, with a single *(ri, tj)* pair
in each call. The calculator weights each of these results according to the
polydispersity distributions and calls the structure factor with the average
effective radius. Similarly for *form_volume*.

Hollow models have an additional volume ratio which is needed to scale the
structure factor. The structure factor uses the volume fraction of the filled
particles as part of its density estimate, but the scale factor for the
scattering intensity (as non-solvent volume fraction / volume) is determined
by the shell volume only. Therefore the *shell_volume* function is
needed to compute the form:shell volume ratio, which then scales the
*volfraction* parameter prior to calling the structure factor calculator.
In the case of a hollow sphere, this would be:

.. code-block:: c

    static double
    shell_volume(double radius, double thickness)
    {
        double whole = M_4PI_3 * cube(radius + thickness);
        double core = M_4PI_3 * cube(radius);
        return whole - core;
    }

If *shell_volume* is not present, then *form_volume* and *shell_volume* are
assumed to be equal, and the shape is considered solid.
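As a quick numerical check of the scaling described above, the following plain
Python snippet evaluates the form:shell volume ratio for the hollow sphere
example (the dimensions are arbitrary illustrative values)::

    from math import pi

    radius, thickness = 50.0, 10.0                       # Ang, illustrative values
    whole = 4.0 / 3.0 * pi * (radius + thickness)**3     # form_volume
    shell = whole - 4.0 / 3.0 * pi * radius**3           # shell_volume

    # the volfraction seen by the structure factor is scaled by this ratio
    print(whole / shell)    # about 2.37 for these values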
Unit Tests

…

and a check that the model runs.

Recommended Testing
...................

**NB: For now, this more detailed testing is only possible if you have a
SasView build environment available!**

If the model compiles and runs, you can next run the unit tests that

…

| 2016-10-25 Steve King
| 2017-05-07 Paul Kienzle - Moved from sasview to sasmodels docs
| 2019-03-28 Paul Kienzle - Update docs for radius_effective and shell_volume
doc/guide/resolution.rst
r0db85af → rdb1d9d5

.. resolution.rst

.. This is a port of the original SasView html help file sm_help to ReSTructured
.. text by S King, ISIS, during SasView CodeCamp-III in Feb 2015.

…

resolution contribution into a model calculation/simulation (which by definition
will be exact) to make it more representative of what has been measured
experimentally - a process called *smearing*. The Sasmodels component of SasView
does the latter.

Both smearing and desmearing rely on functions to describe the resolution

…

for the instrument and stored with the data file. If not, they will need to
be set manually before fitting.

.. note::
    Problems may be encountered if the data set loaded by SasView is a
    concatenation of SANS data from several detector distances where, of
    course, the worst Q resolution is next to the beam stop at each detector
    distance. (This will also be noticeable in the residuals plot, where
    there will be poor overlap.) SasView sensibly orders all the input
    data points by increasing Q for nicer-looking plots; however, the dQ
    data can then vary considerably from point to point. If 'Use dQ data'
    smearing is selected then spikes may appear in the model fits, whereas
    if 'None' or 'Custom Pinhole Smear' are selected the fits look normal.

    In such instances, possible solutions are to simply remove the data
    with poor Q resolution from the shorter detector distances, or to fit
    the data from different detector distances simultaneously.
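For intuition about what pinhole ('Use dQ data') smearing does to a model curve,
the short numpy sketch below averages a calculated $I(q)$ with a Gaussian
resolution of width $dQ$ at each measurement point. This is a conceptual
illustration only, not the actual SasView/sasmodels resolution code, which
treats the calculation grid, $q$ extrapolation and slit geometries more
carefully::

    import numpy as np

    def smear_pinhole(q, dq, q_calc, I_calc):
        """Gaussian-weighted average of I_calc(q_calc) at each measured (q, dq)."""
        I_smeared = np.empty_like(q)
        for i, (qi, dqi) in enumerate(zip(q, dq)):
            w = np.exp(-0.5 * ((q_calc - qi) / dqi)**2)
            I_smeared[i] = np.sum(w * I_calc) / np.sum(w)
        return I_smeared

    # toy example: a smooth stand-in curve smeared with 5% resolution
    q = np.linspace(0.01, 0.3, 50)                   # measured points, 1/Ang
    dq = 0.05 * q                                    # per-point resolution width
    q_calc = np.linspace(0.005, 0.35, 2000)          # finer grid for the average
    I_calc = 1.0 / (1.0 + (q_calc * 60.0)**4)        # stand-in for a model I(q)
    I_smeared = smear_pinhole(q, dq, q_calc, I_calc)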