# Changeset b3f4831 in sasmodels

Timestamp:
Oct 30, 2018 11:07:41 AM (16 months ago)
Branches:
ticket_1156
Children:
cc8b183
Parents:
5778279 (diff), c6084f1 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' into ticket_1156

Files:
22 edited

• ## doc/guide/magnetism/magnetism.rst

 rbefe905 — Magnetic parameter names switch from the colon-prefix form (`M0:sld`, `mtheta:sld`, `mphi:sld`, `up:angle`, `up:frac_i`, `up:frac_f`) to the underscore-suffix form, so the parameter table now reads:

```
===========   ================================================================
sld_M0        $D_M M_0$
sld_mtheta    $\theta_M$
sld_mphi      $\phi_M$
up_frac_i     $u_i$ = (spin up)/(spin up + spin down) *before* the sample
up_frac_f     $u_f$ = (spin up)/(spin up + spin down) *after* the sample
up_angle      $\theta_\mathrm{up}$
===========   ================================================================
```

The accompanying note is updated to say that the values of `up_frac_i` and `up_frac_f` must be in the range 0 to 1, and the *Document History* is extended.
• ## doc/guide/plugin.rst

 rf796469 — Documentation updates for plugin models:

- Each model's .py file now documents a `random()` function providing a model-specific random parameter set, returned as a dictionary of `{parameter: value, ...}`, that shows model features in the USANS to SANS range. For example, core-shell sphere sets the outer radius logarithmically in [20, 20,000] (which sets the Q value for the transition from flat to falling) and uses a beta distribution for the shell fraction, giving a preference for very thin or very thick shells (but never 0% or 100%). Parameters not in the dictionary default according to `_randomize_one()` in sasmodels/compare.py; using `-sets=10` in sascomp should show a reasonable variety of curves over the default q range.
- The "do not use" note for erf, erfc, tgamma and lgamma (special functions that should be part of the standard but are missing or inaccurate on some platforms) now recommends sas_erf, sas_erfc, sas_gamma and sas_lgamma instead, dropping the old "lgamma(x) has not yet been tested" caveat.
- The gamma function entry is reworded: "The standard math function, tgamma(x), is unstable for $x < 1$ on some platforms" (with `source = ["lib/sas_gamma.c", ...]`).
- New entries document sas_gammaln\ $(x) = \log \Gamma(|x|)$ (the standard lgamma(x) is incorrect for single precision on some platforms), the incomplete gamma function sas_gammainc\ $(a, x) = \int_0^x t^{a-1}e^{-t}\,dt / \Gamma(a)$, and the complementary incomplete gamma function sas_gammaincc\ $(a, x) = \int_x^\infty t^{a-1}e^{-t}\,dt / \Gamma(a)$, all with `source = ["lib/sas_gammainc.c", ...]`.
- Existing entries are touched up: JN(n, x) uses sas_J0($x$) or sas_J1($x$) when $n$ = 0 or 1, can be very inaccurate (0.1%) for x not in [0.1, 100], and exists because the standard jn(n, x) is not available on all platforms. The sine integral Si\ $(x) = \int_0^x \tfrac{\sin t}{t}\,dt$ (which can be very inaccurate, 0.1%, for x in [0.1, 100]) is computed from series for small and large arguments; for large arguments the asymptotic series

  .. math::

     \text{Si}(x) \approx \frac{\pi}{2}
     - \frac{\cos(x)}{x}\left(1 - \frac{2!}{x^2} + \frac{4!}{x^4} - \frac{6!}{x^6}\right)
     - \frac{\sin(x)}{x}\left(\frac{1}{x} - \frac{3!}{x^3} + \frac{5!}{x^5} - \frac{7!}{x^7}\right)

  and for small arguments a Taylor series in $x$.
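The `random()` contract described above can be illustrated with a hypothetical core-shell style sketch; the parameter names and the exact distributions here are ours, not the actual core_shell_sphere code:

```python
import numpy as np

def random():
    # Hypothetical random() following the documented pattern: outer radius
    # log-uniform in [20, 20000] Ang, shell fraction from a U-shaped beta
    # distribution so very thin or very thick shells are preferred
    # (but never exactly 0% or 100%).
    outer_radius = 10**np.random.uniform(np.log10(20), np.log10(20000))
    shell_fraction = np.random.beta(0.5, 0.5)
    thickness = outer_radius * shell_fraction
    # Return only the parameters being randomized; the rest default
    # according to _randomize_one() in sasmodels/compare.py.
    return dict(radius=outer_radius - thickness, thickness=thickness)
```

Any keys omitted from the returned dictionary keep their compare.py defaults, so a model only needs to randomize the parameters that control its distinctive features.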
• ## explore/precision.py

 r2a7e20e — explore/precision.py gets several improvements:

- The separate `lin_min/lin_max/lin_steps` and `log_min/log_max/log_steps` range variables are unified into `start/stop/steps`, with the built-in ranges unchanged: zoom is linear in [1000, 1010] with 2000 steps, neg is linear in [-100.1, 100.1] with 2000 steps, linear is [1, 1000] (with a [0.001, 2] variant) with 2000 steps, log is [10^-3, 10^5] with 400 steps, and logq is [10^-4, 10^1] with 400 steps.
- An arbitrary range can now be given as "start:stop:steps:scale", where scale is one of log, lin, or linear; steps defaults to 400 and scale to linear. Unknown range names raise `ValueError("unknown range "+xrange)`.
- The sampled q values are clipped to the function's limits, generated with `np.linspace`/`np.logspace` in single precision (commented-out double-precision and pure-mpmath alternatives remain), and compared against an mpmath target computed at 500 bits. *diff* is "relative", "absolute" or "none"; relative error now handles zero targets: `abs((t-a)/t)` when `t != 0`, else `a`.
- A hack allows a second parameter A in two-parameter functions, parsed from the command line as `A=value` (default A=1) by a new `parse_extra_pars()`.
- New function entries are added for gammaln(x) (mp.loggamma / scipy.special.gammaln, with sas_gammaln from lib/sas_gammainc.c and a commented-out raw lgamma variant), gammainc(x) and gammaincc(x) (regularized via division by `mp.gamma(a)`, against scipy.special.gammainc/gammaincc and the sas_gammainc/sas_gammaincc OpenCL kernels); the Lanczos gamma coefficient table is reflowed; "log gamma(x)" is renamed "loggamma(x)" and discarded from `ALL_FUNCTIONS` (use the cephes-based gammaln instead), as are "3j1/x:taylor" and "3j1/x:trig".
- The usage text documents -r (relative error, default), the -x range names above, and the new start:stop:n[:stepping] form (an n-step plot in [start, stop], or [10^start, 10^stop] if stepping is "log"; default n=400), plus the A=value option for gammainc/gammaincc.
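The "start:stop:steps:scale" parsing described above amounts to the following stand-alone sketch, condensed to a couple of the named ranges; `parse_range` is an illustrative name, not the precision.py API:

```python
def parse_range(xrange):
    # Returns (linear, start, stop, steps). The colon form takes
    # "start:stop[:steps[:scale]]" with steps defaulting to 400 and
    # scale to linear; named ranges use their fixed settings.
    if ':' in xrange:
        parts = xrange.split(':')
        linear = parts[3] != "log" if len(parts) == 4 else True
        steps = int(parts[2]) if len(parts) > 2 else 400
        start, stop = float(parts[0]), float(parts[1])
    elif xrange == "zoom":
        linear, start, stop, steps = True, 1000, 1010, 2000
    elif xrange == "log":
        linear, start, stop, steps = False, -3, 5, 400
    else:
        raise ValueError("unknown range " + xrange)
    return linear, start, stop, steps
```

For log scales the start/stop values are exponents, so "start:stop:n:log" plots in [10^start, 10^stop], matching the usage text.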
• ## sasmodels/__init__.py

 re65c3ba — Version bump: `__version__` changes from "0.97" to "0.98".
• ## sasmodels/compare.py

 rbd7630d — sasmodels/compare.py follows the magnetic parameter rename, switching prefix tests to suffix tests throughout: magnetic SLDs remain limited to a smaller range, from zero to iron = 5/Ang^2, but the check becomes `par.name.endswith('_M0')` instead of `par.name.startswith('M0:')`; the user-parameter loop skips ids ending in `_M0`, `_mtheta` or `_mphi` rather than starting with `M0:`, `mtheta:` or `mphi:`; field lookups become `pars.get(p.id+'_M0', 0.)`, `pars.get(p.id+'_mphi', 0.)` and `pars.get(p.id+'_mtheta', 0.)`; and the suppress/first-magnetic loops test `p.endswith("_M0")` when zeroing or detecting nonzero magnetic amplitudes.
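The shift from prefix to suffix matching can be distilled to a one-liner; this is an illustrative helper, not the actual compare.py code:

```python
# New-style magnetic parameter names end with one of these suffixes
# (old style used prefixes 'M0:', 'mtheta:', 'mphi:').
MAGNETIC_SUFFIXES = ('_M0', '_mtheta', '_mphi')

def is_magnetic(par_name):
    # True for names like 'sld_M0' or 'sld_solvent_mphi'.
    return any(par_name.endswith(s) for s in MAGNETIC_SUFFIXES)
```

Suffix matching keeps the base parameter id as an ordinary prefix of the name, which is what lets lookups like `pars.get(p.id + '_M0', 0.)` work.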
• ## sasmodels/convert.py

 r78f8308 — sasmodels/convert.py inserts a magnetic-parameter rename when converting models older than 4.2: after `_hand_convert_4_1_to_4_2(name, oldpars)` (which also renames `d_factor` to `lattice_distortion`), the parameters pass through a new `_rename_magnetic_pars(oldpars)` before being returned. Cleaned up, the new helper reads:

```python
def _rename_magnetic_pars(pars):
    """Change from M0:par to par_M0, etc."""
    keys = list(pars.keys())  # snapshot, since pars is mutated in the loop
    for k in keys:
        if k.startswith('M0:'):
            pars[k[3:]+'_M0'] = pars.pop(k)
        elif k.startswith('mtheta:'):
            pars[k[7:]+'_mtheta'] = pars.pop(k)
        elif k.startswith('mphi:'):
            pars[k[5:]+'_mphi'] = pars.pop(k)
        elif k.startswith('up:'):
            pars['up_'+k[3:]] = pars.pop(k)
    return pars
```

• ## sasmodels/kernelpy.py

 r108e70e — sasmodels/kernelpy.py: the constructor's log message changes from "load python model" to "make python model", since it fires when the kernel object is built rather than when the module is loaded.

• ## sasmodels/modelinfo.py

 r7b9e4dd — sasmodels/modelinfo.py switches the generated magnetic parameters to the underscore form:

- `magnetism_index` now collects call parameters with `p.id.endswith('_M0')` instead of `p.id.startswith('M0:')`.
- For models with magnetic SLDs, the polarization parameters become `up_frac_i` (fraction of spin up incident, [0, 1]), `up_frac_f` (fraction of spin up final, [0, 1]) and `up_angle` (spin up angle, degrees, [0, 360]); each sld parameter `p` gains `p.id+'_M0'` (magnetic amplitude, 1e-6/Ang^2), `p.id+'_mtheta'` (magnetic latitude, degrees, [-90, 90]) and `p.id+'_mphi'` (magnetic longitude, degrees, [-180, 180]).
- The docstring now reads: parameters marked as sld will automatically have a set of associated magnetic parameters (p_M0, p_mtheta, p_mphi), as well as polarization information (up_theta, up_frac_i, up_frac_f).
- User-parameter ordering for 2D data appends `name+'_M0'`, `name+'_mtheta'` and `name+'_mphi'` after each parameter (rather than the `tag:name` forms), and the trailing polarization block keys off `'up_angle' in expanded_pars`, extending the result with `up_frac_i`, `up_frac_f` and `up_angle`.
- A comment notes that `custom.load_custom_kernel_module` assumes the C sources are defined by the `source` attribute (alongside `structure_factor`, `profile_axes` and `c_code` handling), and hidden multiplicity parameters for sld vectors now use the `base+"_M0"`, `base+"_mtheta"`, `base+"_mphi"` suffixes.
• ## sasmodels/models/bcc_paracrystal.py

 re7e9231 — sasmodels/models/bcc_paracrystal.py adds a review warning ahead of the Definition section: ".. warning:: This model and this model description are under review following concerns raised by SasView users. If you need to use this model, please email help@sasview.org for the latest situation. *The SasView Developers. September 2018.*" The definition $I(q) = \frac{\text{scale}}{V_p} V_\text{lattice} P(q) Z(q)$ (where *scale* is the volume fraction of spheres and $V_p$ the volume of the particle) is unchanged, and the "Authorship and Verification" underline is shortened to match its title.
• ## sasmodels/models/be_polyelectrolyte.py

 ref07e95 — sasmodels/models/be_polyelectrolyte.py gets a major documentation and correctness pass:

- A note at the top asks the reader to see the new Validation section below.
- In the definition, the denominator exponent is corrected from $r_0^2$ to $r_0^4$:

  .. math::

     I(q) = K\frac{q^2+k^2}{4\pi L_b\alpha^2}
     \frac{1}{1+r_0^4(q^2+k^2)(q^2-12hC_a/b^2)} + background

  with $k^2 = 4\pi L_b(2C_s + \alpha C_a)$, and the correlation length corrected from $r_0^2 = \frac{1}{\alpha \sqrt{C_a}\left(b/\sqrt{48\pi L_b}\right)}$ to $r_0^2 = \frac{b}{\alpha \sqrt{C_a 48\pi L_b}}$.
- The symbol definitions are rewritten as a bulleted list: $K$ is the contrast factor for the polymer, defined differently than in other models and given in barns (1 $barn = 10^{-24}$ $cm^2$) via $a = b_p - (v_p/v_s) b_s$, where $b_p$ and $b_s$ are the sums of the scattering lengths of the atoms constituting the polymer monomer and the solvent molecules respectively, and $v_p$ and $v_s$ are their partial molar volumes; $L_b$ is the Bjerrum length (|Ang|), to be kept constant for a given solvent and temperature; $h$ is the virial parameter, now in |Ang^3| rather than |Ang^3|/mol (see [#Borue]_ for the correct interpretation — it incorporates second and third virial coefficients and can be *negative*); $b$ is the monomer length (|Ang|); $C_s$ is the monovalent salt concentration and $C_a$ the polymer molar concentration, both 1/|Ang^3| internally, converted from mol/L; $\alpha$ is the degree of ionization (the ratio of charged monomers to the total number of monomers); and $background$ is the incoherent background. For 2D data the intensity is calculated as in 1D with $q = \sqrt{q_x^2 + q_y^2}$.
- A new Validation section explains the history: the code goes back to a 1998 implementation in which both concentrations were meant to be converted from experimental mol/L to the more dimensionally useful 1/|Ang^3|, but only the converted polymer concentration was actually used; the salt concentration was used unconverted (in apparent units of mol/L), and this carried through to Sasmodels as used for SasView 4.1, the conversion line having been dropped somewhere along the way. Simple dimensional analysis shows the converted value should be used, which the original code suggests was the intention, so this is now corrected for SasView 4.2. The note will be removed once better validation has been performed.
- The authorship block adds "**Last Modified by:** Paul Butler **Date:** September 25, 2018" and "**Last Reviewed by:** Paul Butler **Date:** September 25, 2018" after the existing NIST IGOR/DANSE, Kienzle and Butler/Heenan entries, and the parameter table changes the `virial_param` unit from "Ang^3/mol" to "Ang^3" (contrast_factor stays in barns, bjerrum_length and monomer_length in Ang, salt_concentration in mol/L).
- In `Iq()`, the keyword defaults are dropped from the signature, the per-parameter docstring is replaced by ":params: see parameter table" and ":return: 1-D form factor for polyelectrolytes in low salt", and both concentrations are now converted in the first two lines (`concentration_pol = polymer_concentration * 6.022136e-4`, `concentration_salt = salt_concentration * 6.022136e-4`) and used consistently in `k_square`, `r0_square` and `term2`.
- The self-tests (based on content in test/utest_other_models.py) are regenerated; a comment notes these are self-validation only, the "good values" for nonzero salt having been obtained by running the current model in SasView and copying the results (PDB, Sep 26, 2018). The basic test {contrast_factor: 10.0, bjerrum_length: 7.1} → I(0.001) = 0.0948379 is kept; the larger-range tests move to milder parameter sets (monomer_length 5.0, salt_concentration 1.0, ionization_degree 0.1, polymer_concentration 1.0), replacing expected values such as -3.75693800588 and 100.029142149 with 0.253469484 at q = 0.1 and 1.738358122 at q = 0.05, and the third test (contrast_factor 100.0, bjerrum_length 10.0) reverts to virial_param 12.0 and monomer_length 10.0, expecting 0.012881893 at q = 0.5 instead of 1.80664667511e-06 at q = 200.
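Putting the corrected conversions together, a minimal self-contained sketch of the updated calculation follows. The `term1` prefactor is not shown in the hunk and is reconstructed here from the documented $K(q^2+k^2)/(4\pi L_b\alpha^2)$, so treat it as an assumption; background is added separately by the framework. With the default parameters this should reproduce the changeset's own first self-test value once the default background of 0.001 is added:

```python
from math import pi, sqrt

def Iq(q, contrast_factor, bjerrum_length, virial_param, monomer_length,
       salt_concentration, ionization_degree, polymer_concentration):
    # Both concentrations converted from mol/L to 1/Ang^3 (the 4.2 fix:
    # previously only the polymer concentration was converted).
    concentration_pol = polymer_concentration * 6.022136e-4
    concentration_salt = salt_concentration * 6.022136e-4
    k_square = 4.0 * pi * bjerrum_length * (
        2.0*concentration_salt + ionization_degree * concentration_pol)
    r0_square = 1.0/ionization_degree/sqrt(concentration_pol) * (
        monomer_length/sqrt(48.0*pi*bjerrum_length))
    # Prefactor reconstructed from the documented K(q^2+k^2)/(4 pi Lb alpha^2).
    term1 = contrast_factor/(4.0 * pi * bjerrum_length *
                             ionization_degree**2) * (q**2 + k_square)
    # Note r0_square**2, i.e. the corrected r0^4 in the denominator.
    term2 = 1.0 + r0_square**2 * (q**2 + k_square) * (
        q**2 - 12.0 * virial_param * concentration_pol/monomer_length**2)
    return term1/term2  # background added by the caller
```

Evaluating `Iq(0.001, 10.0, 7.1, 12.0, 10.0, 0.0, 0.05, 0.7) + 0.001` lands on the tabulated 0.0948379, which is what makes the salt-conversion fix safe to verify against the test data.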
• ## sasmodels/models/fcc_paracrystal.py

 re7e9231 — sasmodels/models/fcc_paracrystal.py adds the same review warning to the docstring (".. warning:: This model and this model description are under review following concerns raised by SasView users. If you need to use this model, please email help@sasview.org for the latest situation. *The SasView Developers. September 2018.*"), just after the "#note - calculation requires double precision" comment; the surrounding text (negligible inter-domain effects, infinitely large paracrystal, isotropic Gaussian paracrystalline distortion, "The scattering intensity $I(q)$ is calculated as") is otherwise unchanged.
• ## sasmodels/models/sc_paracrystal.py

 r0b906ea — sasmodels/models/sc_paracrystal.py adds the same review warning and fixes typos: "he scattering intensity $I(q)$ is calculated as" becomes "The scattering intensity $I(q)$ is calculated as", and the reference to equations (13)-(15) of the 1987 paper\ [#CIT1987]_ now sets $Z1$, $Z2$, and $Z3$ in math mode (equation (16) of the same reference is still used to calculate $Z(q)$).
• ## sasmodels/models/spinodal.py

 r475ff58 — sasmodels/models/spinodal.py expands the documentation. With $x = q/q_0$, $q_0$ the peak position, $I_{max}$ the intensity at $q_0$ (parameterised as the *scale* parameter) and $B$ a flat background, the spinodal wavelength is now written $\Lambda = 2\pi/q_0$, and a new passage notes that the definition of $I_{max}$ in the literature varies: Hashimoto *et al* (1991) define it as $I_{max} = \Lambda^3\Delta\rho^2$, whereas Meier & Strobl (1987) give $I_{max} = V_z\Delta\rho^2$, where $V_z$ is the volume per monomer unit. The exponent $\gamma$ remains $d+1$ for off-critical concentrations. The references are cleaned up and extended: H. Furukawa, "Dynamics-scaling theory for phase-separating unmixing mixtures: Growth rates of droplets and scaling properties of autocorrelation functions", Physica A 123, 497 (1984); H. Meier & G. Strobl, "Small-Angle X-ray Scattering Study of Spinodal Decomposition in Polystyrene/Poly(styrene-co-bromostyrene) Blends", Macromolecules 20, 649-654 (1987); T. Hashimoto, M. Takenaka & H. Jinnai, "Scattering Studies of Self-Assembling Processes of Polymer Blends in Spinodal Decomposition", J. Appl. Cryst. 24, 457-466 (1991). The revision history adds "**Revised:** Steve King **Date:** Oct 25, 2018".
• ## sasmodels/sasview_model.py

 rd533590 — sasmodels/sasview_model.py reworks custom-model loading and caching:

- The `MODEL_BY_PATH` cache ({path: model}, checked by timestamp) is marked with a TODO for removal once sasview no longer references it; a new `_CACHED_MODULE` dict ({path: module}) tracks loaded modules so that reloads can be detected by comparing the module returned by `custom.load_custom_kernel_module(path)` (which may already be cached by the loader, so this mostly costs timestamp checks on the dependents) against the previously cached module object.
- The module is turned into a model even when unchanged, so that the model name can be determined and the instance retrieved from the `MODELS` cache. Old-style models that do not define a `Model` class attribute go through `modelinfo.make_model_info(kernel_module)` and `make_model_from_info(model_info)` (old-style models do not set the name in the class attributes, so it is set here and overridden when the object is created); `model.timestamp` is set from `getmtime(path)`, and `MODELS[model.name]` is only updated when the module was actually reloaded or the name is new.
- When 'background' is hidden, its default is forced to 0, and the `magnetic_params` and `orientation_params` tuples are filtered to exclude hidden parameters.
- A new `runTests()` classmethod runs any tests built into the model via `model_test.check_model(cls._model_info)`, capturing the output and returning a success flag plus output.
- `test_cylinder()` (which actually builds the sphere model) now calls `model.setParam('sld_M0', 8)` instead of `model.setParam('M0:sld', 8)` before evaluating on the meshgrid of `q = np.linspace(-0.35, 0.35, 500)`.
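The reload-on-change idea can be sketched in isolation. The helper below uses hypothetical names (`load_if_changed`, `_MODULE_CACHE`) and a plain timestamp comparison; the real code additionally compares module objects and maintains the `MODELS` name cache:

```python
import os

# path -> (mtime, loaded object); illustrative stand-in for the real caches.
_MODULE_CACHE = {}

def load_if_changed(path, loader):
    # Reload only when the file's timestamp changes; otherwise reuse the
    # cached object. `loader` is any callable taking the path.
    mtime = os.path.getmtime(path)
    cached = _MODULE_CACHE.get(path)
    if cached is not None and cached[0] == mtime:
        return cached[1], False          # unchanged: reuse, not reloaded
    module = loader(path)
    _MODULE_CACHE[path] = (mtime, module)
    return module, True                  # new or changed: reloaded
```

Returning the reloaded flag separately mirrors the changeset's "only update the model if the module has changed" logic while still letting callers resolve the cached instance by name.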
• ## sasmodels/special.py

 rdf69efa — sasmodels/special.py documents and re-exports the new gamma-family functions: alongside the existing note that tgamma(x) is unstable for $x < 1$ on some platforms, the docstring adds sas_gammaln\ $(x) = \log \Gamma(|x|)$ (the standard lgamma(x) is incorrect for single precision on some platforms), the incomplete gamma function sas_gammainc\ $(a, x) = \int_0^x t^{a-1}e^{-t}\,dt / \Gamma(a)$ and the complementary incomplete gamma function sas_gammaincc\ $(a, x) = \int_x^\infty t^{a-1}e^{-t}\,dt / \Gamma(a)$; the imports gain `gammaln`, `gammainc` and `gammaincc` from scipy.special (as sas_gammaln, sas_gammainc and sas_gammaincc) next to the existing gamma, erf and erfc.
• ## setup.py

 r1f991d6 — setup.py hardens version detection (raising `RuntimeError("Could not read version from %s/__init__.py"%package)` when the version string cannot be read) and moves the requirements out of the `setup()` call, adding tinycc on Windows:

```python
install_requires = ['numpy', 'scipy']
if sys.platform == 'win32' or sys.platform == 'cygwin':
    install_requires.append('tinycc')
```

`setup(..., install_requires=install_requires, ...)` replaces the previously empty list; `extras_require` still offers 'full' (docutils, bumps, matplotlib), 'server' (bumps), 'OpenCL' (pyopencl), 'Bumps' (bumps) and 'TinyCC' (tinycc), with `build_requires=['setuptools']` and the 'sasmodels' package data ('*.c', '*.cl') unchanged.
• ## explore/realspace.py

 r362a66f
 import numpy as np
 from numpy import pi, radians, sin, cos, sqrt
-from numpy.random import poisson, uniform, randn, rand
+from numpy.random import poisson, uniform, randn, rand, randint
 from numpy.polynomial.legendre import leggauss
 from scipy.integrate import simps

+I3 = np.matrix([[1., 0, 0], [0, 1, 0], [0, 0, 1]])
+
 class Shape:
-    rotation = np.matrix([[1., 0, 0], [0, 1, 0], [0, 0, 1]])
+    rotation = I3
     center = np.array([0., 0., 0.])[:, None]
     r_max = None
+    lattice_size = np.array((1, 1, 1))
+    lattice_spacing = np.array((1., 1., 1.))
+    lattice_distortion = 0.0
+    lattice_rotation = 0.0
+    lattice_type = ""

     def volume(self):
     ...
     def rotate(self, theta, phi, psi):
-        self.rotation = rotation(theta, phi, psi) * self.rotation
+        if theta != 0. or phi != 0. or psi != 0.:
+            self.rotation = rotation(theta, phi, psi) * self.rotation
         return self
     ...
+    def lattice(self, size=(1, 1, 1), spacing=(2, 2, 2), type="sc",
+                distortion=0.0, rotation=0.0):
+        self.lattice_size = np.asarray(size, 'i')
+        self.lattice_spacing = np.asarray(spacing, 'd')
+        self.lattice_type = type
+        self.lattice_distortion = distortion
+        self.lattice_rotation = rotation
+
     def _adjust(self, points):
-        points = np.asarray(self.rotation * np.matrix(points.T)) + self.center
+        if self.rotation is I3:
+            points = points.T + self.center
+        else:
+            points = np.asarray(self.rotation * np.matrix(points.T)) + self.center
+        if self.lattice_type:
+            points = self._apply_lattice(points)
         return points.T

-    def r_bins(self, q, over_sampling=1, r_step=0.):
-        r_max = min(2 * pi / q[0], self.r_max)
+    def r_bins(self, q, over_sampling=10, r_step=0.):
+        if self.lattice_type:
+            r_max = np.sqrt(np.sum(self.lattice_size*self.lattice_spacing*self.dims)**2)/2
+        else:
+            r_max = self.r_max
+        #r_max = min(2 * pi / q[0], r_max)
         if r_step == 0.:
             r_step = 2 * pi / q[-1] / over_sampling
+        #r_step = 0.01
         return np.arange(r_step, r_max, r_step)

+    def _apply_lattice(self, points):
+        """Spread points to different lattice positions"""
+        size = self.lattice_size
+        spacing = self.lattice_spacing
+        shuffle = self.lattice_distortion
+        rotate = self.lattice_rotation
+        lattice = self.lattice_type
+        if rotate != 0:
+            # To vectorize the rotations we will need to unwrap the matrix multiply
+            raise NotImplementedError("don't handle rotations yet")
+
+        # Determine the number of lattice points in the lattice
+        shapes_per_cell = 2 if lattice == "bcc" else 4 if lattice == "fcc" else 1
+        number_of_lattice_points = np.prod(size) * shapes_per_cell
+
+        # For each point in the original shape, figure out which lattice point
+        # to translate it to.  This is both cell index (i*ny*nz + j*nz + k) as
+        # well as the point in the cell (corner, body center or face center).
+        nsamples = points.shape[1]
+        lattice_point = randint(number_of_lattice_points, size=nsamples)
+
+        # Translate the cell index into the i,j,k coordinates of the center
+        cell_index = lattice_point // shapes_per_cell
+        center = np.vstack((cell_index//(size[1]*size[2]),
+                            (cell_index%(size[1]*size[2]))//size[2],
+                            cell_index%size[2]))
+        center = np.asarray(center, dtype='d')
+        if lattice == "bcc":
+            center[:, lattice_point % shapes_per_cell == 1] += [[0.5], [0.5], [0.5]]
+        elif lattice == "fcc":
+            center[:, lattice_point % shapes_per_cell == 1] += [[0.0], [0.5], [0.5]]
+            center[:, lattice_point % shapes_per_cell == 2] += [[0.5], [0.0], [0.5]]
+            center[:, lattice_point % shapes_per_cell == 3] += [[0.5], [0.5], [0.0]]
+
+        # Each lattice point has its own displacement from the ideal position.
+        # Not checking that shapes do not overlap if displacement is too large.
+        offset = shuffle*(randn(3, number_of_lattice_points) if shuffle < 0.3
+                          else rand(3, number_of_lattice_points))
+        center += offset[:, cell_index]
+
+        # Each lattice point has its own rotation.  Rotate the point prior to
+        # applying any displacement.
+        # rotation = rotate*(randn(size=(shapes, 3)) if shuffle < 30 else rand(size=(nsamples, 3)))
+        # for k in shapes: points[k] = rotation[k]*points[k]
+        points += center*(np.array([spacing])*np.array(self.dims)).T
+        return points

 class Composite(Shape):
 ...
     Iq = 100 * np.ones_like(qx)
     data = Data2D(x=qx, y=qy, z=Iq, dx=None, dy=None, dz=np.sqrt(Iq))
-    data.x_bins = qx[0,:]
-    data.y_bins = qy[:,0]
+    data.x_bins = qx[0, :]
+    data.y_bins = qy[:, 0]
     data.filename = "fake data"
 ...
     return shape, fn, fn_xy

-def build_sphere(radius=125, rho=2):
+DEFAULT_SPHERE_RADIUS = 125
+DEFAULT_SPHERE_CONTRAST = 2
+def build_sphere(radius=DEFAULT_SPHERE_RADIUS, rho=DEFAULT_SPHERE_CONTRAST):
     shape = TriaxialEllipsoid(radius, radius, radius, rho)
     fn = lambda q: sphere_Iq(q, radius)*rho**2
 ...
     return shape, fn, fn_xy

-def build_cubic_lattice(shape, nx=1, ny=1, nz=1, dx=2, dy=2, dz=2,
+def build_sc_lattice(shape, nx=1, ny=1, nz=1, dx=2, dy=2, dz=2,
         shuffle=0, rotate=0):
     a, b, c = shape.dims
-    shapes = [copy(shape)
+    corners= [copy(shape)
              .shift((ix+(randn() if shuffle < 0.3 else rand())*shuffle)*dx*a,
                     (iy+(randn() if shuffle < 0.3 else rand())*shuffle)*dy*b,
 ...
              for iy in range(ny)
              for iz in range(nz)]
-    lattice = Composite(shapes)
+    lattice = Composite(corners)
     return lattice

+def build_bcc_lattice(shape, nx=1, ny=1, nz=1, dx=2, dy=2, dz=2,
+        shuffle=0, rotate=0):
+    a, b, c = shape.dims
+    corners = [copy(shape)
+              .shift((ix+(randn() if shuffle < 0.3 else rand())*shuffle)*dx*a,
+                     (iy+(randn() if shuffle < 0.3 else rand())*shuffle)*dy*b,
+                     (iz+(randn() if shuffle < 0.3 else rand())*shuffle)*dz*c)
+              .rotate(*((randn(3) if rotate < 30 else rand(3))*rotate))
+              for ix in range(nx)
+              for iy in range(ny)
+              for iz in range(nz)]
+    centers = [copy(shape)
+              .shift((ix+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dx*a,
+                     (iy+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dy*b,
+                     (iz+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dz*c)
+              .rotate(*((randn(3) if rotate < 30 else rand(3))*rotate))
+              for ix in range(nx)
+              for iy in range(ny)
+              for iz in range(nz)]
+    lattice = Composite(corners + centers)
+    return lattice
+
+def build_fcc_lattice(shape, nx=1, ny=1, nz=1, dx=2, dy=2, dz=2,
+        shuffle=0, rotate=0):
+    a, b, c = shape.dims
+    corners = [copy(shape)
+              .shift((ix+(randn() if shuffle < 0.3 else rand())*shuffle)*dx*a,
+                     (iy+(randn() if shuffle < 0.3 else rand())*shuffle)*dy*b,
+                     (iz+(randn() if shuffle < 0.3 else rand())*shuffle)*dz*c)
+              .rotate(*((randn(3) if rotate < 30 else rand(3))*rotate))
+              for ix in range(nx)
+              for iy in range(ny)
+              for iz in range(nz)]
+    faces_a = [copy(shape)
+              .shift((ix+0.0+(randn() if shuffle < 0.3 else rand())*shuffle)*dx*a,
+                     (iy+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dy*b,
+                     (iz+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dz*c)
+              .rotate(*((randn(3) if rotate < 30 else rand(3))*rotate))
+              for ix in range(nx)
+              for iy in range(ny)
+              for iz in range(nz)]
+    faces_b = [copy(shape)
+              .shift((ix+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dx*a,
+                     (iy+0.0+(randn() if shuffle < 0.3 else rand())*shuffle)*dy*b,
+                     (iz+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dz*c)
+              .rotate(*((randn(3) if rotate < 30 else rand(3))*rotate))
+              for ix in range(nx)
+              for iy in range(ny)
+              for iz in range(nz)]
+    faces_c = [copy(shape)
+              .shift((ix+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dx*a,
+                     (iy+0.5+(randn() if shuffle < 0.3 else rand())*shuffle)*dy*b,
+                     (iz+0.0+(randn() if shuffle < 0.3 else rand())*shuffle)*dz*c)
+              .rotate(*((randn(3) if rotate < 30 else rand(3))*rotate))
+              for ix in range(nx)
+              for iy in range(ny)
+              for iz in range(nz)]
+    lattice = Composite(corners + faces_a + faces_b + faces_c)
+    return lattice

 SHAPE_FUNCTIONS = OrderedDict([
 ...
 ])
 SHAPES = list(SHAPE_FUNCTIONS.keys())
+LATTICE_FUNCTIONS = OrderedDict([
+    ("sc", build_sc_lattice),
+    ("bcc", build_bcc_lattice),
+    ("fcc", build_fcc_lattice),
+])
+LATTICE_TYPES = list(LATTICE_FUNCTIONS.keys())

 def check_shape(title, shape, fn=None, show_points=False,
 ...
     r = shape.r_bins(q, r_step=r_step)
     sampling_density = samples / shape.volume
     print("sampling points")
     rho, points = shape.sample(sampling_density)
     print("calculating Pr")
     t0 = time.time()
     Pr = calc_Pr(r, rho-rho_solvent, points)
 ...
     import pylab
-    if show_points: plot_points(rho, points); pylab.figure()
+    if show_points:
+        plot_points(rho, points); pylab.figure()
     plot_calc(r, Pr, q, Iq, theory=theory, title=title)
     pylab.gcf().canvas.set_window_title(title)
 ...
     Qx, Qy = np.meshgrid(qx, qy)
     sampling_density = samples / shape.volume
     print("sampling points")
     t0 = time.time()
     rho, points = shape.sample(sampling_density)
 ...
                         help='lattice size')
     parser.add_argument('-z', '--spacing', type=str, default='2,2,2',
-                        help='lattice spacing')
+                        help='lattice spacing (relative to shape)')
+    parser.add_argument('-t', '--type', choices=LATTICE_TYPES,
+                        default=LATTICE_TYPES[0],
+                        help='lattice type')
+    parser.add_argument('-r', '--rotate', type=float, default=0.,
+                        help="rotation relative to lattice, gaussian < 30 degrees, uniform otherwise")
 ...
     nx, ny, nz = [int(v) for v in opts.lattice.split(',')]
     dx, dy, dz = [float(v) for v in opts.spacing.split(',')]
-    shuffle, rotate = opts.shuffle, opts.rotate
+    distortion, rotation = opts.shuffle, opts.rotate
     shape, fn, fn_xy = SHAPE_FUNCTIONS[opts.shape](**pars)
-    if nx > 1 or ny > 1 or nz > 1:
-        shape = build_cubic_lattice(shape, nx, ny, nz, dx, dy, dz, shuffle, rotate)
+    view = tuple(float(v) for v in opts.view.split(','))
+    # If comparing a sphere in a cubic lattice, compare against the
+    # corresponding paracrystalline model.
+    if opts.shape == "sphere" and dx == dy == dz and nx*ny*nz > 1:
+        radius = pars.get('radius', DEFAULT_SPHERE_RADIUS)
+        model_name = opts.type + "_paracrystal"
+        model_pars = {
+            "scale": 1.,
+            "background": 0.,
+            "lattice_spacing": 2*radius*dx,
+            "lattice_distortion": distortion,
+            "radius": radius,
+            "sld": pars.get('rho', DEFAULT_SPHERE_CONTRAST),
+            "sld_solvent": 0.,
+            "theta": view[0],
+            "phi": view[1],
+            "psi": view[2],
+        }
+        fn, fn_xy = wrap_sasmodel(model_name, **model_pars)
+    if nx*ny*nz > 1:
+        if rotation != 0:
+            print("building %s lattice"%opts.type)
+            build_lattice = LATTICE_FUNCTIONS[opts.type]
+            shape = build_lattice(shape, nx, ny, nz, dx, dy, dz,
+                                  distortion, rotation)
+        else:
+            shape.lattice(size=(nx, ny, nz), spacing=(dx, dy, dz),
+                          type=opts.type,
+                          rotation=rotation, distortion=distortion)
+
     title = "%s(%s)" % (opts.shape, " ".join(opts.pars))
     if opts.dim == 1:
 ...
             mesh=opts.mesh, qmax=opts.qmax, samples=opts.samples)
     else:
-        view = tuple(float(v) for v in opts.view.split(','))
         check_shape_2d(title, shape, fn_xy, view=view, show_points=opts.plot,
                        mesh=opts.mesh, qmax=opts.qmax, samples=opts.samples)
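The new `Shape._apply_lattice` assigns every sample point to a random lattice site in one vectorized pass: draw a flat site index, unravel it into (i, j, k) cell coordinates, then add the bcc/fcc basis offsets. The standalone sketch below reproduces that site-assignment step; the helper name is hypothetical, and it uses a seedable `numpy.random.default_rng` in place of the module-level `randint`:

```python
import numpy as np

def assign_lattice_points(nsamples, size=(2, 2, 2), lattice="fcc", seed=0):
    """Pick a random lattice site (cell + basis point) for each sample point.

    Sketch of the vectorized scheme in Shape._apply_lattice: the flat cell
    index encodes (i, j, k) as i*ny*nz + j*nz + k, and the basis offsets are
    the conventional-cell positions for sc/bcc/fcc.
    """
    rng = np.random.default_rng(seed)
    nx, ny, nz = size
    shapes_per_cell = 2 if lattice == "bcc" else 4 if lattice == "fcc" else 1
    npoints = nx*ny*nz*shapes_per_cell
    lattice_point = rng.integers(npoints, size=nsamples)
    cell_index = lattice_point // shapes_per_cell
    # Unravel the flat cell index into (i, j, k) cell coordinates.
    center = np.vstack((cell_index // (ny*nz),
                        (cell_index % (ny*nz)) // nz,
                        cell_index % nz)).astype('d')
    basis = lattice_point % shapes_per_cell
    if lattice == "bcc":
        center[:, basis == 1] += [[0.5], [0.5], [0.5]]
    elif lattice == "fcc":
        center[:, basis == 1] += [[0.0], [0.5], [0.5]]
        center[:, basis == 2] += [[0.5], [0.0], [0.5]]
        center[:, basis == 3] += [[0.5], [0.5], [0.0]]
    return center  # 3 x nsamples, in units of the cell edge

centers = assign_lattice_points(1000, size=(2, 2, 2), lattice="fcc")
```

Drawing one flat index per sample and decoding it avoids the per-cell Python loop that the `build_*_lattice` list comprehensions pay for, which is the point of moving the lattice into `_adjust`.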
• ## sasmodels/models/bcc_paracrystal.c

 r108e70e
 static double
-bcc_Zq(double qa, double qb, double qc, double dnn, double d_factor)
+bcc_Zq(double qa, double qb, double qc, double lattice_spacing, double lattice_distortion)
 {
     // Equations from Matsuoka 26-27-28, multiplied by |q|
 ...
     //         => exp(a)^2 - 2 cos(d ak) exp(a) + 1)
     //         => (exp(a) - 2 cos(d ak)) * exp(a) + 1
-    const double arg = -0.5*square(dnn*d_factor)*(a1*a1 + a2*a2 + a3*a3);
+    const double arg = -0.5*square(lattice_spacing*lattice_distortion)*(a1*a1 + a2*a2 + a3*a3);
     const double exp_arg = exp(arg);
     const double Zq = -cube(expm1(2.0*arg))
-        / ( ((exp_arg - 2.0*cos(dnn*a1))*exp_arg + 1.0)
-          * ((exp_arg - 2.0*cos(dnn*a2))*exp_arg + 1.0)
-          * ((exp_arg - 2.0*cos(dnn*a3))*exp_arg + 1.0));
+        / ( ((exp_arg - 2.0*cos(lattice_spacing*a1))*exp_arg + 1.0)
+          * ((exp_arg - 2.0*cos(lattice_spacing*a2))*exp_arg + 1.0)
+          * ((exp_arg - 2.0*cos(lattice_spacing*a3))*exp_arg + 1.0));
 #elif 0
     //            = tanh(-a) / [1 - cos(d a_k)/cosh(-a)]
-    const double arg = 0.5*square(dnn*d_factor)*(a1*a1 + a2*a2 + a3*a3);
+    const double arg = 0.5*square(lattice_spacing*lattice_distortion)*(a1*a1 + a2*a2 + a3*a3);
     const double sinh_qd = sinh(arg);
     const double cosh_qd = cosh(arg);
-    const double Zq = sinh_qd/(cosh_qd - cos(dnn*a1))
-                    * sinh_qd/(cosh_qd - cos(dnn*a2))
-                    * sinh_qd/(cosh_qd - cos(dnn*a3));
+    const double Zq = sinh_qd/(cosh_qd - cos(lattice_spacing*a1))
+                    * sinh_qd/(cosh_qd - cos(lattice_spacing*a2))
+                    * sinh_qd/(cosh_qd - cos(lattice_spacing*a3));
 #else
-    const double arg = 0.5*square(dnn*d_factor)*(a1*a1 + a2*a2 + a3*a3);
+    const double arg = 0.5*square(lattice_spacing*lattice_distortion)*(a1*a1 + a2*a2 + a3*a3);
     const double tanh_qd = tanh(arg);
     const double cosh_qd = cosh(arg);
-    const double Zq = tanh_qd/(1.0 - cos(dnn*a1)/cosh_qd)
-                    * tanh_qd/(1.0 - cos(dnn*a2)/cosh_qd)
-                    * tanh_qd/(1.0 - cos(dnn*a3)/cosh_qd);
+    const double Zq = tanh_qd/(1.0 - cos(lattice_spacing*a1)/cosh_qd)
+                    * tanh_qd/(1.0 - cos(lattice_spacing*a2)/cosh_qd)
+                    * tanh_qd/(1.0 - cos(lattice_spacing*a3)/cosh_qd);
 #endif
 ...
 // occupied volume fraction calculated from lattice symmetry and sphere radius
 static double
-bcc_volume_fraction(double radius, double dnn)
+bcc_volume_fraction(double radius, double lattice_spacing)
 {
-    return 2.0*sphere_volume(sqrt(0.75)*radius/dnn);
+    return 2.0*sphere_volume(radius/lattice_spacing);
 }

-static double Iq(double q, double dnn, double d_factor, double radius,
+static double Iq(double q, double lattice_spacing, double lattice_distortion, double radius,
     double sld, double solvent_sld)
 {
 ...
         const double qa = qab*cos_phi;
         const double qb = qab*sin_phi;
-        const double form = bcc_Zq(qa, qb, qc, dnn, d_factor);
+        const double form = bcc_Zq(qa, qb, qc, lattice_spacing, lattice_distortion);
         inner_sum += GAUSS_W[j] * form;
     }
 ...
     const double Zq = outer_sum/(4.0*M_PI);
     const double Pq = sphere_form(q, radius, sld, solvent_sld);
-    return bcc_volume_fraction(radius, dnn) * Pq * Zq;
+    return bcc_volume_fraction(radius, lattice_spacing) * Pq * Zq;
 }

 static double Iqabc(double qa, double qb, double qc,
-    double dnn, double d_factor, double radius,
+    double lattice_spacing, double lattice_distortion, double radius,
     double sld, double solvent_sld)
 {
     const double q = sqrt(qa*qa + qb*qb + qc*qc);
-    const double Zq = bcc_Zq(qa, qb, qc, dnn, d_factor);
+    const double Zq = bcc_Zq(qa, qb, qc, lattice_spacing, lattice_distortion);
     const double Pq = sphere_form(q, radius, sld, solvent_sld);
-    return bcc_volume_fraction(radius, dnn) * Pq * Zq;
+    return bcc_volume_fraction(radius, lattice_spacing) * Pq * Zq;
 }
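The three `#if` branches in `bcc_Zq` are algebraically equivalent forms of the same paracrystal lattice factor (per axis, sinh a / (cosh a - cos(d·qk))); only their numerical behaviour differs. A quick standalone check that the `expm1`-based and `tanh`-based forms agree, taking `a1..a3` directly as `qa, qb, qc` (i.e. an unrotated lattice; in the model these come from the orientation rotation):

```python
import numpy as np

def zq_exp(qa, qb, qc, spacing, distortion):
    """expm1-based form of the paracrystal lattice factor (arg < 0)."""
    arg = -0.5*(spacing*distortion)**2*(qa*qa + qb*qb + qc*qc)
    exp_arg = np.exp(arg)
    num = -np.expm1(2.0*arg)**3
    den = 1.0
    for ak in (qa, qb, qc):
        den *= (exp_arg - 2.0*np.cos(spacing*ak))*exp_arg + 1.0
    return num/den

def zq_tanh(qa, qb, qc, spacing, distortion):
    """Equivalent tanh/cosh form (arg > 0): per axis sinh a/(cosh a - cos(d*qk))."""
    arg = 0.5*(spacing*distortion)**2*(qa*qa + qb*qb + qc*qc)
    tanh_qd, cosh_qd = np.tanh(arg), np.cosh(arg)
    zq = 1.0
    for ak in (qa, qb, qc):
        zq *= tanh_qd/(1.0 - np.cos(spacing*ak)/cosh_qd)
    return zq

# Arbitrary example point: q components in 1/Ang, spacing in Ang.
z_exp = zq_exp(0.013, 0.007, 0.019, 220.0, 0.06)
z_tanh = zq_tanh(0.013, 0.007, 0.019, 220.0, 0.06)
```

Multiplying the exp-form numerator and denominator through by e^|arg| per axis recovers the sinh/cosh form, so the two evaluations should match to rounding error.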
• ## sasmodels/models/fcc_paracrystal.c

 r108e70e
 static double
-fcc_Zq(double qa, double qb, double qc, double dnn, double d_factor)
+fcc_Zq(double qa, double qb, double qc, double lattice_spacing, double lattice_distortion)
 {
     // Equations from Matsuoka 17-18-19, multiplied by |q|
 ...
     //         => exp(a)^2 - 2 cos(d ak) exp(a) + 1)
     //         => (exp(a) - 2 cos(d ak)) * exp(a) + 1
-    const double arg = -0.5*square(dnn*d_factor)*(a1*a1 + a2*a2 + a3*a3);
+    const double arg = -0.5*square(lattice_spacing*lattice_distortion)*(a1*a1 + a2*a2 + a3*a3);
     const double exp_arg = exp(arg);
     const double Zq = -cube(expm1(2.0*arg))
-        / ( ((exp_arg - 2.0*cos(dnn*a1))*exp_arg + 1.0)
-          * ((exp_arg - 2.0*cos(dnn*a2))*exp_arg + 1.0)
-          * ((exp_arg - 2.0*cos(dnn*a3))*exp_arg + 1.0));
+        / ( ((exp_arg - 2.0*cos(lattice_spacing*a1))*exp_arg + 1.0)
+          * ((exp_arg - 2.0*cos(lattice_spacing*a2))*exp_arg + 1.0)
+          * ((exp_arg - 2.0*cos(lattice_spacing*a3))*exp_arg + 1.0));
     return Zq;
 ...
 // occupied volume fraction calculated from lattice symmetry and sphere radius
 static double
-fcc_volume_fraction(double radius, double dnn)
+fcc_volume_fraction(double radius, double lattice_spacing)
 {
-    return 4.0*sphere_volume(M_SQRT1_2*radius/dnn);
+    return 4.0*sphere_volume(radius/lattice_spacing);
 }

-static double Iq(double q, double dnn, double d_factor, double radius,
+static double Iq(double q, double lattice_spacing, double lattice_distortion, double radius,
     double sld, double solvent_sld)
 {
 ...
         const double qa = qab*cos_phi;
         const double qb = qab*sin_phi;
-        const double form = fcc_Zq(qa, qb, qc, dnn, d_factor);
+        const double form = fcc_Zq(qa, qb, qc, lattice_spacing, lattice_distortion);
         inner_sum += GAUSS_W[j] * form;
     }
 ...
     const double Pq = sphere_form(q, radius, sld, solvent_sld);
-    return fcc_volume_fraction(radius, dnn) * Pq * Zq;
+    return fcc_volume_fraction(radius, lattice_spacing) * Pq * Zq;
 }

 static double Iqabc(double qa, double qb, double qc,
-    double dnn, double d_factor, double radius,
+    double lattice_spacing, double lattice_distortion, double radius,
     double sld, double solvent_sld)
 {
     const double q = sqrt(qa*qa + qb*qb + qc*qc);
     const double Pq = sphere_form(q, radius, sld, solvent_sld);
-    const double Zq = fcc_Zq(qa, qb, qc, dnn, d_factor);
-    return fcc_volume_fraction(radius, dnn) * Pq * Zq;
+    const double Zq = fcc_Zq(qa, qb, qc, lattice_spacing, lattice_distortion);
+    return fcc_volume_fraction(radius, lattice_spacing) * Pq * Zq;
 }
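The rename is not purely cosmetic: the old `dnn` was the nearest-neighbour distance, while the new `lattice_spacing` is the conventional-cell edge, which is why the `sqrt(0.75)` (bcc) and `M_SQRT1_2` (fcc) factors drop out of the volume-fraction formulas. A standalone check of the equivalence, assuming the standard crystallographic relations `dnn = a*sqrt(3)/2` for bcc and `dnn = a/sqrt(2)` for fcc:

```python
import math

def sphere_volume(r):
    # (4*pi/3)*r^3, with r given relative to the cell edge
    return 4.0*math.pi/3.0*r**3

radius, a = 40.0, 220.0          # sphere radius and cell edge, in Ang (arbitrary)
dnn_bcc = math.sqrt(3.0)/2.0*a   # bcc nearest-neighbour distance
dnn_fcc = a/math.sqrt(2.0)       # fcc nearest-neighbour distance

# Old parameterization (in terms of dnn) vs new (in terms of the cell edge):
old_bcc = 2.0*sphere_volume(math.sqrt(0.75)*radius/dnn_bcc)
new_bcc = 2.0*sphere_volume(radius/a)
old_fcc = 4.0*sphere_volume(math.sqrt(0.5)*radius/dnn_fcc)
new_fcc = 4.0*sphere_volume(radius/a)
```

Since `sqrt(0.75) = sqrt(3)/2` and `sqrt(0.5) = 1/sqrt(2)`, the conversion factors cancel exactly, so fits made with the old `dnn` parameter translate directly to the new `lattice_spacing` convention.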
• ## sasmodels/models/sc_paracrystal.c

 r108e70e
 static double
-sc_Zq(double qa, double qb, double qc, double dnn, double d_factor)
+sc_Zq(double qa, double qb, double qc, double lattice_spacing, double lattice_distortion)
 {
     // Equations from Matsuoka 9-10-11, multiplied by |q|
 ...
     //         => exp(a)^2 - 2 cos(d ak) exp(a) + 1)
     //         => (exp(a) - 2 cos(d ak)) * exp(a) + 1
-    const double arg = -0.5*square(dnn*d_factor)*(a1*a1 + a2*a2 + a3*a3);
+    const double arg = -0.5*square(lattice_spacing*lattice_distortion)*(a1*a1 + a2*a2 + a3*a3);
     const double exp_arg = exp(arg);
     const double Zq = -cube(expm1(2.0*arg))
-        / ( ((exp_arg - 2.0*cos(dnn*a1))*exp_arg + 1.0)
-          * ((exp_arg - 2.0*cos(dnn*a2))*exp_arg + 1.0)
-          * ((exp_arg - 2.0*cos(dnn*a3))*exp_arg + 1.0));
+        / ( ((exp_arg - 2.0*cos(lattice_spacing*a1))*exp_arg + 1.0)
+          * ((exp_arg - 2.0*cos(lattice_spacing*a2))*exp_arg + 1.0)
+          * ((exp_arg - 2.0*cos(lattice_spacing*a3))*exp_arg + 1.0));
     return Zq;
 ...
 // occupied volume fraction calculated from lattice symmetry and sphere radius
 static double
-sc_volume_fraction(double radius, double dnn)
+sc_volume_fraction(double radius, double lattice_spacing)
 {
-    return sphere_volume(radius/dnn);
+    return sphere_volume(radius/lattice_spacing);
 }

 static double
-Iq(double q, double dnn, double d_factor, double radius,
+Iq(double q, double lattice_spacing, double lattice_distortion, double radius,
    double sld, double solvent_sld)
 {
 ...
         const double qa = qab*cos_phi;
         const double qb = qab*sin_phi;
-        const double form = sc_Zq(qa, qb, qc, dnn, d_factor);
+        const double form = sc_Zq(qa, qb, qc, lattice_spacing, lattice_distortion);
         inner_sum += GAUSS_W[j] * form;
     }
 ...
     const double Pq = sphere_form(q, radius, sld, solvent_sld);
-    return sc_volume_fraction(radius, dnn) * Pq * Zq;
+    return sc_volume_fraction(radius, lattice_spacing) * Pq * Zq;
 }

 static double Iqabc(double qa, double qb, double qc,
-    double dnn, double d_factor, double radius,
+    double lattice_spacing, double lattice_distortion, double radius,
     double sld, double solvent_sld)
 {
     const double q = sqrt(qa*qa + qb*qb + qc*qc);
     const double Pq = sphere_form(q, radius, sld, solvent_sld);
-    const double Zq = sc_Zq(qa, qb, qc, dnn, d_factor);
-    return sc_volume_fraction(radius, dnn) * Pq * Zq;
+    const double Zq = sc_Zq(qa, qb, qc, lattice_spacing, lattice_distortion);
+    return sc_volume_fraction(radius, lattice_spacing) * Pq * Zq;
 }
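As a sanity check on the new volume-fraction expressions, setting the sphere radius to the touching-sphere radius of each lattice should reproduce the classic packing fractions: π/6 ≈ 0.524 (sc), √3·π/8 ≈ 0.680 (bcc), and √2·π/6 ≈ 0.740 (fcc). A minimal sketch, assuming `sphere_volume(r)` is (4π/3)r³ as in the sasmodels C library:

```python
import math

def sphere_volume(r):
    # (4*pi/3)*r^3, with r relative to the cell edge
    return 4.0*math.pi/3.0*r**3

a = 1.0  # conventional-cell edge (lattice_spacing)
# Touching-sphere radius for each cubic lattice:
r_sc = a/2.0
r_bcc = math.sqrt(3.0)/4.0*a
r_fcc = math.sqrt(2.0)/4.0*a

vf_sc = sphere_volume(r_sc/a)          # 1 sphere per conventional cell
vf_bcc = 2.0*sphere_volume(r_bcc/a)    # 2 spheres per conventional cell
vf_fcc = 4.0*sphere_volume(r_fcc/a)    # 4 spheres per conventional cell
```

Hitting the textbook maxima at the touching radius confirms that the renamed `lattice_spacing` really is the cell edge and that the per-cell multiplicities (1, 2, 4) are consistent with the basis offsets used in `explore/realspace.py`.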
Note: See TracChangeset for help on using the changeset viewer.