# Changeset 12f77e9 in sasmodels

Timestamp:
Oct 30, 2018 10:56:38 AM
Branches:
master
Children:
4e96703
Parents:
1657e21 (diff), c6084f1 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' into ticket-608-user-defined-weights

Files:
6 added
20 edited

• ## doc/guide/magnetism/magnetism.rst

 rbefe905 — Renames the magnetic parameters in the table from colon-prefixed to suffixed form: M0:sld → sld_M0 ($D_M M_0$), mtheta:sld → sld_mtheta ($\theta_M$), mphi:sld → sld_mphi ($\phi_M$), up:angle → up_angle ($\theta_\mathrm{up}$), up:frac_i → up_frac_i and up:frac_f → up_frac_f ($u_i$ and $u_f$, the (spin up)/(spin up + spin down) fractions *before* and *after* the sample, respectively). The accompanying note now states that the values of 'up_frac_i' and 'up_frac_f' must be in the range 0 to 1, and *Document History* is updated.
• ## doc/guide/plugin.rst

 r2015f02 — Documentation updates for plugin special functions:

- The `random()` docs gain a closing clause: any model parameters not included in the returned `{parameter: value, ...}` dictionary will default according to the code in the `_randomize_one()` function from sasmodels/compare.py. (The rest of the paragraph — logarithmic outer radius in [20, 20,000] for core-shell sphere, beta-distributed shell fraction, `-sets=10` in sascomp — is unchanged.)
- `erf`, `erfc`, `tgamma`, `lgamma`: still flagged **do not use** — special functions that should be part of the standard but are missing or inaccurate on some platforms. The replacement list now reads "Use sas_erf, sas_erfc, sas_gamma and sas_lgamma instead (see below)", dropping the old "Note: lgamma(x) has not yet been tested."
- "The standard math function, tgamma(x), is unstable for $x < 1$ on some platforms" gains its missing comma.
- New entry sas_gammaln(x): log gamma function sas_gammaln\ $(x) = \log \Gamma(|x|)$. The standard math function, lgamma(x), is incorrect for single precision on some platforms. :code:`source = ["lib/sas_gammainc.c", ...]` (sas_gammainc.c)
- New entries sas_gammainc(a, x), sas_gammaincc(a, x): incomplete gamma function sas_gammainc\ $(a, x) = \int_0^x t^{a-1}e^{-t}\,dt / \Gamma(a)$ and complementary incomplete gamma function sas_gammaincc\ $(a, x) = \int_x^\infty t^{a-1}e^{-t}\,dt / \Gamma(a)$. :code:`source = ["lib/sas_gammainc.c", ...]` (sas_gammainc.c)
- The surrounding context for sas_erf(x)/sas_erfc(x), sas_JN (which uses sas_J0($x$) or sas_J1($x$) when $n$ = 0 or 1, and can be very inaccurate (0.1%) for x not in [0.1, 100]), and the sine integral Si\ $(x) = \int_0^x \tfrac{\sin t}{t}\,dt$ is unchanged. In the Si(x) section, the Taylor-series introductions are reworded to "For large arguments use the following Taylor series," and "For small arguments," (stray space removed); the large-argument expansion retains the term

  .. math:: - \frac{\sin(x)}{x}\left(\frac{1}{x} - \frac{3!}{x^3} + \frac{5!}{x^5} - \frac{7!}{x^7}\right)
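The regularized incomplete gamma functions documented above can be checked against a short pure-Python series expansion. This is a minimal sketch of the documented definitions, not the C implementation in lib/sas_gammainc.c; the function names mirror scipy.special for familiarity:

```python
import math

def gammainc(a, x, tol=1e-15):
    """Regularized lower incomplete gamma,
    P(a, x) = integral_0^x t^(a-1) e^(-t) dt / Gamma(a),
    matching the documented definition of sas_gammainc(a, x).
    Power series gamma(a,x) = x^a e^-x sum_n x^n / (a(a+1)...(a+n)),
    adequate for modest x."""
    term = 1.0 / a          # n = 0 term of the series
    total = term
    n = a
    while abs(term) > tol * abs(total):
        n += 1.0            # denominator factor a+1, a+2, ...
        term *= x / n
        total += term
    # multiply by x^a e^-x / Gamma(a), computed in log space for stability
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def gammaincc(a, x):
    """Regularized upper incomplete gamma, Q(a, x) = 1 - P(a, x),
    matching the documented sas_gammaincc(a, x)."""
    return 1.0 - gammainc(a, x)
```

Two standard identities make handy spot checks: P(1, x) = 1 - e^-x and P(1/2, x) = erf(sqrt(x)).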
• ## explore/precision.py

 r2a7e20e — Command-line range handling in explore/precision.py is reworked:

- The doc string now reads: neg: [-100,100]; for an arbitrary range use "start:stop:steps:scale" where scale is one of log, lin, or linear; *diff* is "relative", "absolute" or "none".
- The separate `lin_min`/`lin_max`/`lin_steps` and `log_min`/`log_max`/`log_steps` triples are replaced by a single `start`/`stop`/`steps`, with `linear = not xrange.startswith("log")`. Presets are unchanged: `zoom` (1000, 1010, 2000), `neg` (-100.1, 100.1, 2000), `linear` (1, 1000, 2000), `log` (-3, 5, 400), `logq` (-4, 1, 400). A spec containing ':' is split on ':': `linear = parts[3] != "log" if len(parts) == 4 else True`, `steps = int(parts[2]) if len(parts) > 2 else 400`, `start = float(parts[0])`, `stop = float(parts[1])`; anything else raises `ValueError("unknown range "+xrange)`.
- The sampling code clips `start`/`stop` to `self.limits` and builds `qrf` with `np.linspace(start, stop, steps, dtype='single')`, or for log stepping converts the exponent bounds with `np.log10` and uses `np.logspace(start, stop, steps, dtype='single')` (commented-out double-precision and mpmath variants updated to match); the mpmath reference values are computed from the same points with `self.call_mpmath(qr, bits=500)`.
- The relative-error plot guards against division by zero: `err = np.array([(abs((t-a)/t) if t != 0 else a) for t, a in zip(target, actual)], 'd')`.
- A hack allows a second parameter A in two-parameter functions: `parse_extra_pars()` pulls `A=value` out of `sys.argv` (default A=1) before the function definitions.
- New entries are added for gammaln, gammainc and gammaincc, the latter two closing over A, e.g. `make_ocl("return sas_gammainc(%.15g,q);" % A, "sas_gammainc", ["lib/sas_gammainc.c"])`, with `np_function=lambda x, a=A: scipy.special.gammainc(a, x)` and the mpmath equivalents. "log gamma(x)" is renamed "loggamma(x)", and the `ALL_FUNCTIONS.discard("loggamma")` comment changes from "OCL version not ready yet" to "use cephes-based gammaln instead". The Lanczos coefficient table is reflowed onto aligned lines, and the usage text documents the new `start:stop:n[:stepping]` form (an n-step plot in [start, stop], or [10^start, 10^stop] if stepping is "log"; default n=400) and the A=value option for gammainc/gammaincc.
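The range handling above can be sketched as a small standalone parser. `parse_range` is an illustrative name, not a function in precision.py; the presets and the start:stop:steps:scale syntax follow the usage text:

```python
def parse_range(xrange):
    """Parse an -x range spec into (start, stop, steps, linear).

    Mirrors the "start:stop:steps:scale" syntax added to
    explore/precision.py (a sketch with illustrative names, not the
    actual implementation).  For log presets, start/stop are exponents.
    """
    presets = {
        "zoom":   (1000, 1010, 2000, True),
        "neg":    (-100.1, 100.1, 2000, True),
        "linear": (1, 1000, 2000, True),
        "log":    (-3, 5, 400, False),   # exponents: 10^-3 .. 10^5
        "logq":   (-4, 1, 400, False),   # exponents: 10^-4 .. 10^1
    }
    if xrange in presets:
        return presets[xrange]
    if ':' in xrange:
        parts = xrange.split(':')
        # linear unless an explicit fourth field says "log"
        linear = parts[3] != "log" if len(parts) == 4 else True
        steps = int(parts[2]) if len(parts) > 2 else 400
        return float(parts[0]), float(parts[1]), steps, linear
    raise ValueError("unknown range " + xrange)
```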
• ## sasmodels/__init__.py

 re65c3ba — Version bump: `__version__ = "0.97"` → `__version__ = "0.98"`.
• ## sasmodels/compare.py

 r01dba26 — Applies the magnetic-parameter rename throughout compare.py: the check limiting magnetic SLDs to a smaller range, from zero to iron = 5/Ang^2, now tests `par.name.endswith('_M0')` rather than `par.name.startswith('M0:')`; the magnetic-parameter collection over `parameters.user_parameters(pars, is2d)` skips ids matching the suffixes `('_M0', '_mtheta', '_mphi')` (previously the prefixes `('M0:', 'mtheta:', 'mphi:')`); the `_format_par` fields become `M0=pars.get(p.id+'_M0', 0.)`, `mphi=pars.get(p.id+'_mphi', 0.)`, `mtheta=pars.get(p.id+'_mtheta', 0.)`; and the suppression/detection loops test `p.endswith("_M0")` instead of `p.startswith("M0:")` when zeroing parameters or accumulating `any_mag`.
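The suffix convention makes magnetic parameters easy to pick out of a flat parameter dict. A hedged sketch — the helper name and the `up_` handling are illustrative, not compare.py's actual code:

```python
# Suffixes introduced by the rename (previously prefixes 'M0:' etc.)
MAGNETIC_SUFFIXES = ('_M0', '_mtheta', '_mphi')

def split_magnetic(pars):
    """Split a flat parameter dict into (nuclear, magnetic) parts using
    the new suffix convention (sld_M0 rather than M0:sld).  Polarization
    parameters use the up_ prefix.  Illustrative sketch only."""
    magnetic = {k: v for k, v in pars.items()
                if k.endswith(MAGNETIC_SUFFIXES) or k.startswith('up_')}
    nuclear = {k: v for k, v in pars.items() if k not in magnetic}
    return nuclear, magnetic
```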
• ## sasmodels/convert.py

 ra69d8cd — Adds a version-gated conversion after the existing `(3, 1, 2)` hand conversion: if `version < (4, 2, 0)`, `oldpars = _rename_magnetic_pars(oldpars)`. The new `_rename_magnetic_pars(pars)` changes M0:par to par_M0, etc., by iterating over a copy of the parameter keys (`list(pars.keys())`, so the dict can be mutated in place) and re-inserting each entry under its new name with `pars.pop(k)`: prefix 'M0:' → suffix '_M0', 'mtheta:' → '_mtheta', 'mphi:' → '_mphi', and 'up:' → prefix 'up_'.
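A standalone sketch of that conversion, assuming the same prefix-to-suffix targets as the diff above (`rename_magnetic_pars` here is a free function for illustration; the real helper is private to sasmodels/convert.py):

```python
def rename_magnetic_pars(pars):
    """Convert pre-4.2 magnetic names (M0:par, mtheta:par, mphi:par,
    up:frac_i) to the 4.2 convention (par_M0, par_mtheta, par_mphi,
    up_frac_i).  Iterates over a copy of the keys so the dict can be
    mutated in place.  Illustrative sketch."""
    for k in list(pars.keys()):
        if k.startswith('M0:'):
            pars[k[3:] + '_M0'] = pars.pop(k)
        elif k.startswith('mtheta:'):
            pars[k[7:] + '_mtheta'] = pars.pop(k)
        elif k.startswith('mphi:'):
            pars[k[5:] + '_mphi'] = pars.pop(k)
        elif k.startswith('up:'):
            pars['up_' + k[3:]] = pars.pop(k)
    return pars
```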
• ## sasmodels/custom/__init__.py

 r0f48f1e — Custom model modules are now cached with dependency tracking:

- Imports are extended: `joinpath`, `exists` and `dirname` join `basename` and `splitext` from os.path; `spec_from_file_location`/`module_from_spec` are used when importlib is available, falling back to `imp`, with `load_module_from_path(fullname, path)` clearing out old definitions as before.
- New module-level state: `_MODULE_CACHE` maps path → (module, timestamp), `_MODULE_DEPENDS` maps path → set of dependency files, and `_MODULE_DEPENDS_STACK` tracks the module currently being loaded.
- `load_custom_kernel_module(path)` still strips the last extension from the file name and places the model in the 'sasmodels.custom' namespace, but now expands the user path once and reloads only when `need_reload(path)` says so. While loading, the path is pushed onto `_MODULE_DEPENDS_STACK` so that any submodules it triggers add their dependencies to this module as the "working_on" parent. External C code is included in the dependencies: `module.source` is assumed to be a list of C source files relative to the module, and files that do not exist, such as those in the standard libraries, are ignored. (TODO: look in the builtin module path for standard C sources; share code with generate.model_sources.) The module is then cached in `_MODULE_CACHE[path]`, tagged with the newest timestamp `max(os.path.getmtime(f) for f in _MODULE_DEPENDS[path])`, and its dependencies are propagated to the parent module if one is on the stack.
- `need_reload(path)` returns True if any dependency has a timestamp newer than the time the path was most recently loaded. A noted caveat: if a dependency has a modification time in the future, that future time becomes the cached time, so a second dependency later updated to the current time will not trigger a reload; treating future times as 0 would fix this but is probably not worth the extra code.
- `_find_sources(path, source_list)` resolves each source name against `dirname(path)` with `joinpath` and returns only the paths that `exists`.
• ## sasmodels/kernelpy.py

 r108e70e — The log message when constructing a python kernel model changes from "load python model" to "make python model" (the model is being built here, not loaded).
• ## sasmodels/model_test.py

 r012cd34 — model_test.py is refactored so a single model can be tested directly:

- `import traceback` is added at module level.
- `make_suite(loaders, models)` — where *models* is the list of models to test, or *["all"]* — now only filters the skip list, loads each model's info, and delegates to a new helper `_add_model_to_suite(loaders, suite, model_info)`.
- `_add_model_to_suite()` holds the previous per-model logic. If the kernel is implemented in python (`callable(model_info.Iq)`), the dll loader is used to call the pykernel — the cl kernel will not be available in some environments — adding a "%s-python" test with `platform="dll", dtype="double"`. For C kernels, a "%s-dll" test is added when 'dll' is in loaders or OpenCL is unavailable, and a "%s-opencl" test (with `dtype=None`, so models marked *single=False* are not run in single precision) when 'opencl' is requested and available. Test names now use `model_info.name` rather than the raw model name. The `stash` shared between the OpenCL and DLL runs — comparing the 'smoke test' results (evaluation at q=0.1 with default parameters), "a horrible stateful hack which only makes sense because the test suite is thrown away after being run once" — is unchanged.
- `run_one(model_name)` is kept as a CRUFT wrapper (a DeprecationWarning pointing at `check_model(model_info)` is sketched in comments): it loads the model info, returning `traceback.format_exc()` on failure, then delegates. The new `check_model(model_info)` runs the tests for a single model, capturing the output: it builds a one-model suite via `_add_model_to_suite()` with `loaders = ['opencl'] if use_opencl() else ['dll']`, writes "Note: %s has no user defined tests." (now naming `model_info.name`) when a model has none, and returns `(result.wasSuccessful(), output)`.
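The pattern of building a throwaway unittest suite and capturing its output, as `check_model()` does, can be sketched generically (the names here are illustrative, not model_test.py's API):

```python
import io
import unittest

def _add_case_to_suite(suite, name, func):
    """Add one runtime-generated test to *suite*, in the spirit of the
    _add_model_to_suite() refactoring (illustrative sketch)."""
    suite.addTest(unittest.FunctionTestCase(func, description=name))

def check_one(name, func):
    """Run one generated test, returning (success, output) the way
    check_model() does for a model."""
    stream = io.StringIO()
    suite = unittest.TestSuite()
    _add_case_to_suite(suite, name, func)
    result = unittest.TextTestRunner(stream=stream, verbosity=0).run(suite)
    return result.wasSuccessful(), stream.getvalue()
```

Building the suite around a helper that takes pre-loaded info (rather than a name to look up) is what lets the same code serve both the full test run and a single-model check.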
• ## sasmodels/modelinfo.py

 r7b9e4dd — The magnetic-parameter rename lands in the parameter machinery:

- `magnetism_index` now selects call parameters with `p.id.endswith('_M0')` instead of `p.id.startswith('M0:')`.
- When `self.nmagnetic > 0`, the generated polarization parameters become `up_frac_i` ('', 0., [0., 1.], 'fraction of spin up incident'), `up_frac_f` ('', 0., [0., 1.], 'fraction of spin up final') and `up_angle` ('degrees', 0., [0., 360.], 'spin up angle'), and each sld gains `p.id+'_M0'` ('1e-6/Ang^2', magnetic amplitude), `p.id+'_mtheta'` ('degrees', [-90., 90.], magnetic latitude) and `p.id+'_mphi'` ('degrees', [-180., 180.], magnetic longitude).
- The docstring now says that parameters marked as sld automatically get a set of associated magnetic parameters (p_M0, p_mtheta, p_mphi), as well as polarization information (up_theta, up_frac_i, up_frac_f).
- In `user_parameters()`, the 2D grouping appends `expanded_pars[name+tag]` for tag in ('_M0', '_mtheta', '_mphi'), and when 'up_angle' is present the polarization block `up_frac_i`, `up_frac_f`, `up_angle` is appended after the grouped parameters.
- A note records that custom.load_custom_kernel_module assumes the C sources are defined by the `source` attribute (`info.source = getattr(kernel_module, 'source', [])`, alongside `info.c_code`).
- Hidden parameters for multiplicity sld parameters are built per entry: for each kernel parameter with `p.length > 1` and `p.type == "sld"`, each `k` in `range(control+1, p.length+1)` hides `base+"_M0"`, `base+"_mtheta"` and `base+"_mphi"` with `base = p.id+str(k)`.
• ## sasmodels/models/bcc_paracrystal.py

 r2d81cfe — Adds a review notice ahead of the Definition section: ".. warning:: This model and this model description are under review following concerns raised by SasView users. If you need to use this model, please email help@sasview.org for the latest situation. *The SasView Developers. September 2018.*" The intensity is written as $I(q) = \frac{\text{scale}}{V_p} V_\text{lattice} P(q) Z(q)$, where *scale* is the volume fraction of spheres, and the underline of the "Authorship and Verification" heading is lengthened to match the title (Author: NIST IGOR/DANSE, pre 2010).
• ## sasmodels/models/be_polyelectrolyte.py

 ref07e95 r""" .. note:: Please read the Validation section below. Definition ---------- I(q) = K\frac{q^2+k^2}{4\pi L_b\alpha ^2} \frac{1}{1+r_{0}^2(q^2+k^2)(q^2-12hC_a/b^2)} + background \frac{1}{1+r_{0}^4(q^2+k^2)(q^2-12hC_a/b^2)} + background k^2 = 4\pi L_b(2C_s + \alpha C_a) r_{0}^2 = \frac{1}{\alpha \sqrt{C_a} \left( b/\sqrt{48\pi L_b}\right)} r_{0}^2 = \frac{b}{\alpha \sqrt{C_a 48\pi L_b}} where $K$ is the contrast factor for the polymer which is defined differently than in other models and is given in barns where $1 barn = 10^{-24} cm^2$.  $K$ is other models and is given in barns where 1 $barn = 10^{-24}$ $cm^2$.  $K$ is defined as: a = b_p - (v_p/v_s) b_s where $b_p$ and $b_s$ are sum of the scattering lengths of the atoms constituting the monomer of the polymer and the sum of the scattering lengths of the atoms constituting the solvent molecules respectively, and $v_p$ and $v_s$ are the partial molar volume of the polymer and the solvent respectively $L_b$ is the Bjerrum length(|Ang|) - **Note:** This parameter needs to be kept constant for a given solvent and temperature! $h$ is the virial parameter (|Ang^3|/mol) - **Note:** See [#Borue]_ for the correct interpretation of this parameter.  It incorporates second and third virial coefficients and can be Negative. $b$ is the monomer length(|Ang|), $C_s$ is the concentration of monovalent salt(mol/L), $\alpha$ is the ionization degree (ionization degree : ratio of charged monomers  to total number of monomers), $C_a$ is the polymer molar concentration(mol/L), and $background$ is the incoherent background. where: - $b_p$ and $b_s$ are **sum of the scattering lengths of the atoms** constituting the polymer monomer and the solvent molecules, respectively. - $v_p$ and $v_s$ are the partial molar volume of the polymer and the solvent, respectively. - $L_b$ is the Bjerrum length (|Ang|) - **Note:** This parameter needs to be kept constant for a given solvent and temperature! 
- $h$ is the virial parameter (|Ang^3|) - **Note:** See [#Borue]_ for the correct interpretation of this parameter.  It incorporates second and third virial coefficients and can be *negative*. - $b$ is the monomer length (|Ang|). - $C_s$ is the concentration of monovalent salt(1/|Ang^3| - internally converted from mol/L). - $\alpha$ is the degree of ionization (the ratio of charged monomers to the total number of monomers) - $C_a$ is the polymer molar concentration (1/|Ang^3| - internally converted from mol/L) - $background$ is the incoherent background. For 2D data the scattering intensity is calculated in the same way as 1D, q = \sqrt{q_x^2 + q_y^2} Validation ---------- As of the last revision, this code is believed to be correct.  However it needs further validation and should be used with caution at this time.  The history of this code goes back to a 1998 implementation. It was recently noted that in that implementation, while both the polymer concentration and salt concentration were converted from experimental units of mol/L to more dimensionally useful units of 1/|Ang^3|, only the converted version of the polymer concentration was actually being used in the calculation while the unconverted salt concentration (still in apparent units of mol/L) was being used.  This was carried through to Sasmodels as used for SasView 4.1 (though the line of code converting the salt concentration to the new units was removed somewhere along the line). Simple dimensional analysis of the calculation shows that the converted salt concentration should be used, which the original code suggests was the intention, so this has now been corrected (for SasView 4.2). Once better validation has been performed this note will be removed. 
References * **Author:** NIST IGOR/DANSE **Date:** pre 2010 * **Last Modified by:** Paul Kienzle **Date:** July 24, 2016 * **Last Reviewed by:** Paul Butler and Richard Heenan **Date:** October 07, 2016 * **Last Modified by:** Paul Butler **Date:** September 25, 2018 * **Last Reviewed by:** Paul Butler **Date:** September 25, 2018 """ ["contrast_factor",       "barns",   10.0,  [-inf, inf], "", "Contrast factor of the polymer"], ["bjerrum_length",        "Ang",      7.1,  [0, inf],    "", "Bjerrum length"], ["virial_param",          "Ang^3/mol", 12.0,  [-inf, inf], "", "Virial parameter"], ["virial_param",          "Ang^3", 12.0,  [-inf, inf], "", "Virial parameter"], ["monomer_length",        "Ang",     10.0,  [0, inf],    "", "Monomer length"], ["salt_concentration",    "mol/L",    0.0,  [-inf, inf], "", "Concentration of monovalent salt"], def Iq(q, contrast_factor=10.0, bjerrum_length=7.1, virial_param=12.0, monomer_length=10.0, salt_concentration=0.0, ionization_degree=0.05, polymer_concentration=0.7): contrast_factor, bjerrum_length, virial_param, monomer_length, salt_concentration, ionization_degree, polymer_concentration): """ :param q:                     Input q-value :param contrast_factor:       Contrast factor of the polymer :param bjerrum_length:        Bjerrum length :param virial_param:          Virial parameter :param monomer_length:        Monomer length :param salt_concentration:    Concentration of monovalent salt :param ionization_degree:     Degree of ionization :param polymer_concentration: Polymer molar concentration :return:                      1-D intensity :params: see parameter table :return: 1-D form factor for polyelectrolytes in low salt parameter names, units, default values, and behavior (volume, sld etc) are defined in the parameter table.  
The concentrations are converted from experimental mol/L to dimensionaly useful 1/A3 in first two lines """ concentration = polymer_concentration * 6.022136e-4 k_square = 4.0 * pi * bjerrum_length * (2*salt_concentration + ionization_degree * concentration) r0_square = 1.0/ionization_degree/sqrt(concentration) * \ concentration_pol = polymer_concentration * 6.022136e-4 concentration_salt = salt_concentration * 6.022136e-4 k_square = 4.0 * pi * bjerrum_length * (2*concentration_salt + ionization_degree * concentration_pol) r0_square = 1.0/ionization_degree/sqrt(concentration_pol) * \ (monomer_length/sqrt((48.0*pi*bjerrum_length))) term2 = 1.0 + r0_square**2 * (q**2 + k_square) * \ (q**2 - (12.0 * virial_param * concentration/(monomer_length**2))) (q**2 - (12.0 * virial_param * concentration_pol/(monomer_length**2))) return term1/term2 # Accuracy tests based on content in test/utest_other_models.py # Note that these should some day be validated beyond this self validation # (circular reasoning). -- i.e. the "good value," at least for those with # non zero salt concentrations, were obtained by running the current # model in SasView and copying the appropriate result here. 
#    PDB -- Sep 26, 2018 [{'contrast_factor':       10.0, 'bjerrum_length':         7.1, }, 0.001, 0.0948379], # Additional tests with larger range of parameters [{'contrast_factor':       10.0, 'bjerrum_length':       100.0, 'virial_param':           3.0, 'monomer_length':         1.0, 'salt_concentration':    10.0, 'ionization_degree':      2.0, 'polymer_concentration': 10.0, 'monomer_length':         5.0, 'salt_concentration':     1.0, 'ionization_degree':      0.1, 'polymer_concentration':  1.0, 'background':             0.0, }, 0.1, -3.75693800588], }, 0.1, 0.253469484], [{'contrast_factor':       10.0, 'bjerrum_length':       100.0, 'virial_param':           3.0, 'monomer_length':         1.0, 'salt_concentration':    10.0, 'ionization_degree':      2.0, 'polymer_concentration': 10.0, 'background':           100.0 }, 5.0, 100.029142149], 'monomer_length':         5.0, 'salt_concentration':     1.0, 'ionization_degree':      0.1, 'polymer_concentration':  1.0, 'background':             1.0, }, 0.05, 1.738358122], [{'contrast_factor':     100.0, 'bjerrum_length':       10.0, 'virial_param':        180.0, 'monomer_length':        1.0, 'virial_param':         12.0, 'monomer_length':       10.0, 'salt_concentration':    0.1, 'ionization_degree':     0.5, 'polymer_concentration': 0.1, 'background':             0.0, }, 200., 1.80664667511e-06], 'background':           0.01, }, 0.5, 0.012881893], ]
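Each entry in the tests list above follows the sasmodels convention [{parameters}, q, expected_intensity]. A hedged sketch of how such an entry could be evaluated against a scalar Iq callable — the helper name and the handling of background outside Iq are illustrative assumptions, not the sasmodels test runner:

```python
# Illustrative sketch, not the sasmodels test harness: evaluate one
# [{params}, q, expected] entry against an Iq callable.  In sasmodels the
# background term is added outside Iq, which this sketch mimics.
def check_test_entry(iq, entry, rel_tol=1e-5):
    """Assert that iq(q, **params) + background matches the expected value."""
    pars, q, expected = entry
    pars = dict(pars)                         # copy so we can pop safely
    background = pars.pop('background', 0.0)  # applied outside Iq
    actual = iq(q, **pars) + background
    assert abs(actual - expected) <= rel_tol * abs(expected), (actual, expected)

# illustrative usage with a toy Iq:
check_test_entry(lambda q, a=1.0: a * q,
                 [{'a': 2.0, 'background': 1.0}, 0.5, 2.0])
```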
• ## sasmodels/models/fcc_paracrystal.py

 r2d81cfe #note - calculation requires double precision r""" .. warning:: This model and this model description are under review following concerns raised by SasView users. If you need to use this model, please email help@sasview.org for the latest situation. *The SasView Developers. September 2018.* Definition ---------- Calculates the scattering from a **face-centered cubic lattice** with paracrystalline distortion. Thermal vibrations are considered to be Paracrystalline distortion is assumed to be isotropic and characterized by a Gaussian distribution. Definition ---------- The scattering intensity $I(q)$ is calculated as is the paracrystalline structure factor for a face-centered cubic structure. Equation (1) of the 1990 reference is used to calculate $Z(q)$, using equations (23)-(25) from the 1987 paper for $Z1$, $Z2$, and $Z3$. Equation (1) of the 1990 reference\ [#CIT1990]_ is used to calculate $Z(q)$, using equations (23)-(25) from the 1987 paper\ [#CIT1987]_ for $Z1$, $Z2$, and $Z3$. The lattice correction (the occupied volume of the lattice) for a ---------- Hideki Matsuoka et al. *Physical Review B*, 36 (1987) 1754-1765 (Original Paper) .. [#CIT1987] Hideki Matsuoka et al. *Physical Review B*, 36 (1987) 1754-1765 (Original Paper) .. [#CIT1990] Hideki Matsuoka et al. *Physical Review B*, 41 (1990) 3854-3856 (Corrections to FCC and BCC lattice structure calculation) Hideki Matsuoka et al. *Physical Review B*, 41 (1990) 3854-3856 (Corrections to FCC and BCC lattice structure calculation) Authorship and Verification --------------------------- * **Author:** NIST IGOR/DANSE **Date:** pre 2010 * **Last Modified by:** Paul Butler **Date:** September 29, 2016 * **Last Reviewed by:** Richard Heenan **Date:** March 21, 2016 """
• ## sasmodels/models/sc_paracrystal.py

 r2d81cfe r""" .. warning:: This model and this model description are under review following concerns raised by SasView users. If you need to use this model, please email help@sasview.org for the latest situation. *The SasView Developers. September 2018.* Definition ---------- Calculates the scattering from a **simple cubic lattice** with paracrystalline distortion. Thermal vibrations are considered to be Paracrystalline distortion is assumed to be isotropic and characterized by a Gaussian distribution. Definition ---------- The scattering intensity $I(q)$ is calculated as $Z(q)$ is the paracrystalline structure factor for a simple cubic structure. Equation (16) of the 1987 reference is used to calculate $Z(q)$, using equations (13)-(15) from the 1987 paper for Z1, Z2, and Z3. Equation (16) of the 1987 reference\ [#CIT1987]_ is used to calculate $Z(q)$, using equations (13)-(15) from the 1987 paper\ [#CIT1987]_ for $Z1$, $Z2$, and $Z3$. The lattice correction (the occupied volume of the lattice) for a simple cubic Reference --------- Hideki Matsuoka et al. *Physical Review B,* 36 (1987) 1754-1765 (Original Paper) Hideki Matsuoka et al. *Physical Review B,* 41 (1990) 3854-3856 (Corrections to FCC and BCC lattice structure calculation) .. [#CIT1987] Hideki Matsuoka et al. *Physical Review B*, 36 (1987) 1754-1765 (Original Paper) .. [#CIT1990] Hideki Matsuoka et al. *Physical Review B*, 41 (1990) 3854-3856 (Corrections to FCC and BCC lattice structure calculation) Authorship and Verification --------------------------- * **Author:** NIST IGOR/DANSE **Date:** pre 2010 * **Last Modified by:** Paul Butler **Date:** September 29, 2016 * **Last Reviewed by:** Richard Heenan **Date:** March 21, 2016 """
• ## sasmodels/models/spinodal.py

 r475ff58 where $x=q/q_0$, $q_0$ is the peak position, $I_{max}$ is the intensity at $q_0$ (parameterised as the $scale$ parameter), and $B$ is a flat background. The spinodal wavelength is given by $2\pi/q_0$. background. The spinodal wavelength, $\Lambda$, is given by $2\pi/q_0$. The definition of $I_{max}$ in the literature varies. Hashimoto *et al* (1991) define it as .. math:: I_{max} = \Lambda^3\Delta\rho^2 whereas Meier & Strobl (1987) give .. math:: I_{max} = V_z\Delta\rho^2 where $V_z$ is the volume per monomer unit. The exponent $\gamma$ is equal to $d+1$ for off-critical concentration H. Furukawa. Dynamics-scaling theory for phase-separating unmixing mixtures: Growth rates of droplets and scaling properties of autocorrelation functions. Physica A 123,497 (1984). Growth rates of droplets and scaling properties of autocorrelation functions. Physica A 123, 497 (1984). H. Meier & G. Strobl. Small-Angle X-ray Scattering Study of Spinodal Decomposition in Polystyrene/Poly(styrene-co-bromostyrene) Blends. Macromolecules 20, 649-654 (1987). T. Hashimoto, M. Takenaka & H. Jinnai. Scattering Studies of Self-Assembling Processes of Polymer Blends in Spinodal Decomposition. J. Appl. Cryst. 24, 457-466 (1991). Revision History * **Author:**  Dirk Honecker **Date:** Oct 7, 2016 * **Revised:** Steve King    **Date:** Sep 7, 2018 * **Revised:** Steve King    **Date:** Oct 25, 2018 """
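The Furukawa lineshape described above peaks at $q_0$ with height $I_{max}$ plus background. A hedged sketch, assuming the model equation takes the Furukawa form $I(q) = I_{max}\,(1+\gamma/2)\,x^2/(\gamma/2 + x^{2+\gamma}) + B$ with $x = q/q_0$ (check the full model documentation for the exact expression; parameter names here are illustrative):

```python
# Hedged sketch of the Furukawa lineshape, assuming
#   I(q) = Imax (1 + g/2) x^2 / (g/2 + x^(2+g)) + B,   x = q/q0.
# One can verify that dI/dx = 0 exactly at x = 1, so I(q0) = Imax + B.
def spinodal_intensity(q, q0=0.1, imax=100.0, gamma=3.0, background=0.1):
    x = q / q0
    return imax * (1 + gamma / 2) * x**2 / (gamma / 2 + x**(2 + gamma)) \
        + background

# The curve peaks at q = q0 with I(q0) = Imax + B = 100.1 for these values.
print(spinodal_intensity(0.1))
```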
• ## sasmodels/sasview_model.py

 raa25fc7 #: set of defined models (standard and custom) MODELS = {}  # type: Dict[str, SasviewModelType] # TODO: remove unused MODEL_BY_PATH cache once sasview no longer references it #: custom model {path: model} mapping so we can check timestamps MODEL_BY_PATH = {}  # type: Dict[str, SasviewModelType] #: Track modules that we have loaded so we can determine whether the model #: has changed since we last reloaded. _CACHED_MODULE = {}  # type: Dict[str, "module"] def find_model(modelname): Load a custom model given the model path. """ model = MODEL_BY_PATH.get(path, None) if model is not None and model.timestamp == getmtime(path): #logger.info("Model already loaded %s", path) return model #logger.info("Loading model %s", path) # Load the kernel module.  This may already be cached by the loader, so # only requires checking the timestamps of the dependents. kernel_module = custom.load_custom_kernel_module(path) if hasattr(kernel_module, 'Model'): model = kernel_module.Model # Check if the module has changed since we last looked. reloaded = kernel_module != _CACHED_MODULE.get(path, None) _CACHED_MODULE[path] = kernel_module # Turn the module into a model.  We need to do this even if the # model has already been loaded so that we can determine the model # name and retrieve it from the MODELS cache. 
model = getattr(kernel_module, 'Model', None) if model is not None: # Old style models do not set the name in the class attributes, so # set it here; this name will be overridden when the object is created model_info = modelinfo.make_model_info(kernel_module) model = make_model_from_info(model_info) model.timestamp = getmtime(path) # If a model name already exists and we are loading a different model, _previous_name, model.name, model.filename) MODELS[model.name] = model MODEL_BY_PATH[path] = model return model # Only update the model if the module has changed if reloaded or model.name not in MODELS: MODELS[model.name] = model return MODELS[model.name] hidden.add('background') self._model_info.parameters.defaults['background'] = 0. # Update the parameter lists to exclude any hidden parameters self.magnetic_params = tuple(pname for pname in self.magnetic_params if pname not in hidden) self.orientation_params = tuple(pname for pname in self.orientation_params if pname not in hidden) self._persistency_dict = {} return value, [value], [1.0] @classmethod def runTests(cls): """ Run any tests built into the model and capture the test output. Returns a success flag and the output """ from .model_test import check_model return check_model(cls._model_info) def test_cylinder(): # type: () -> float Model = _make_standard_model('sphere') model = Model() model.setParam('M0:sld', 8) model.setParam('sld_M0', 8) q = np.linspace(-0.35, 0.35, 500) qx, qy = np.meshgrid(q, q)
• ## sasmodels/special.py

 rdf69efa The standard math function, tgamma(x), is unstable for $x < 1$ on some platforms. sas_gammaln(x): log gamma function sas_gammaln\ $(x) = \log \Gamma(|x|)$. The standard math function, lgamma(x), is incorrect for single precision on some platforms. sas_gammainc(a, x), sas_gammaincc(a, x): Incomplete gamma function sas_gammainc\ $(a, x) = \int_0^x t^{a-1}e^{-t}\,dt / \Gamma(a)$ and complementary incomplete gamma function sas_gammaincc\ $(a, x) = \int_x^\infty t^{a-1}e^{-t}\,dt / \Gamma(a)$ sas_erf(x), sas_erfc(x): from numpy import pi, nan, inf from scipy.special import gamma as sas_gamma from scipy.special import gammaln as sas_gammaln from scipy.special import gammainc as sas_gammainc from scipy.special import gammaincc as sas_gammaincc from scipy.special import erf as sas_erf from scipy.special import erfc as sas_erfc
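On the Python side these shims simply re-export scipy.special functions, which agree with the standard library for the plain gamma and error-function family. A quick sanity check of the definitions quoted above, using only the stdlib to avoid a scipy dependency in this sketch:

```python
import math

# Gamma(5) = 4! = 24, and gammaln is its logarithm (for positive x,
# log Gamma(|x|) and math.lgamma coincide).
assert abs(math.gamma(5.0) - 24.0) < 1e-10
assert abs(math.lgamma(5.0) - math.log(24.0)) < 1e-10
# erf and erfc are complementary: erf(x) + erfc(x) = 1.
assert abs(math.erf(0.5) + math.erfc(0.5) - 1.0) < 1e-12
```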
• ## setup.py

 r1f991d6 return version[1:-1] raise RuntimeError("Could not read version from %s/__init__.py"%package) install_requires = ['numpy', 'scipy'] if sys.platform=='win32' or sys.platform=='cygwin': install_requires.append('tinycc') setup( 'sasmodels': ['*.c', '*.cl'], }, install_requires=[ ], install_requires=install_requires, extras_require={ 'full': ['docutils', 'bumps', 'matplotlib'], 'server': ['bumps'], 'OpenCL': ["pyopencl"], 'Bumps': ["bumps"], 'TinyCC': ["tinycc"], }, build_requires=['setuptools'],
• ## doc/guide/pd/polydispersity.rst

 rd089a00 -------------------------------------------- For some models we can calculate the average intensity for a population of particles that possess size and/or orientational (ie, angular) distributions. In SasView we call the former *polydispersity* but use the parameter *PD* to parameterise both. In other words, the meaning of *PD* in a model depends on For some models we can calculate the average intensity for a population of particles that possess size and/or orientational (ie, angular) distributions. In SasView we call the former *polydispersity* but use the parameter *PD* to parameterise both. In other words, the meaning of *PD* in a model depends on the actual parameter it is being applied to. The resultant intensity is then normalized by the average particle volume such The resultant intensity is then normalized by the average particle volume such that P(q) = \text{scale} \langle F^* F \rangle / V + \text{background} where $F$ is the scattering amplitude and $\langle\cdot\rangle$ denotes an where $F$ is the scattering amplitude and $\langle\cdot\rangle$ denotes an average over the distribution $f(x; \bar x, \sigma)$, giving .. math:: P(q) = \frac{\text{scale}}{V} \int_\mathbb{R} P(q) = \frac{\text{scale}}{V} \int_\mathbb{R} f(x; \bar x, \sigma) F^2(q, x)\, dx + \text{background} Each distribution is characterized by a center value $\bar x$ or $x_\text{med}$, a width parameter $\sigma$ (note this is *not necessarily* the standard deviation, so read the description of the distribution carefully), the number of sigmas $N_\sigma$ to include from the tails of the distribution, and the number of points used to compute the average. The center of the distribution is set by the value of the model parameter. The distribution width applied to *volume* (ie, shape-describing) parameters is relative to the center value such that $\sigma = \mathrm{PD} \cdot \bar x$. However, the distribution width applied to *orientation* parameters is just $\sigma = \mathrm{PD}$. 
the standard deviation, so read the description carefully), the number of sigmas $N_\sigma$ to include from the tails of the distribution, and the number of points used to compute the average. The center of the distribution is set by the value of the model parameter. The meaning of a polydispersity parameter *PD* (not to be confused with molecular weight distributions in polymer science) in a model depends on the type of parameter it is being applied to. The distribution width applied to *volume* (ie, shape-describing) parameters is relative to the center value such that $\sigma = \mathrm{PD} \cdot \bar x$. However, the distribution width applied to *orientation* (ie, angle-describing) parameters is just $\sigma = \mathrm{PD}$. $N_\sigma$ determines how far into the tails to evaluate the distribution, Users should note that the averaging computation is very intensive. Applying polydispersion and/or orientational distributions to multiple parameters at the same time, or increasing the number of points in the distribution, will require patience! However, the calculations are generally more robust with polydispersion and/or orientational distributions to multiple parameters at the same time, or increasing the number of points in the distribution, will require patience! However, the calculations are generally more robust with more data points or more angles. *  *Schulz Distribution* *  *Array Distribution* *  *User-defined Distributions* These are all implemented as *number-average* distributions. Additional distributions are under consideration. **Beware: when the Polydispersity & Orientational Distribution panel in SasView is** **This may not be suitable. See Suggested Applications below.** .. note:: In 2009 IUPAC decided to introduce the new term 'dispersity' to replace the term 'polydispersity' (see Pure Appl. Chem., (2009), 81(2), 351-353 _ in order to make the terminology describing distributions of chemical properties unambiguous. 
However, these terms are unrelated to the proportional size distributions and orientational distributions used in .. note:: In 2009 IUPAC decided to introduce the new term 'dispersity' to replace the term 'polydispersity' (see Pure Appl. Chem., (2009), 81(2), 351-353 _ in order to make the terminology describing distributions of chemical properties unambiguous. However, these terms are unrelated to the proportional size distributions and orientational distributions used in SasView models. or angular orientations, consider using the Gaussian or Boltzmann distributions. If applying polydispersion to parameters describing angles, use the Uniform distribution. Beware of using distributions that are always positive (eg, the If applying polydispersion to parameters describing angles, use the Uniform distribution. Beware of using distributions that are always positive (eg, the Lognormal) because angles can be negative! The array distribution allows a user-defined distribution to be applied. The array distribution provides a very simple means of implementing a user-defined distribution, but without any fittable parameters. Greater flexibility is conferred by the user-defined distribution. .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ User-defined Distributions ^^^^^^^^^^^^^^^^^^^^^^^^^^ You can also define your own distribution by creating a python file defining a *Distribution* object with a *_weights* method.  The *_weights* method takes *center*, *sigma*, *lb* and *ub* as arguments, and can access *self.npts* and *self.nsigmas* from the distribution.  They are interpreted as follows: * *center* the value of the shape parameter (for size dispersity) or zero if it is an angular dispersity.  This parameter may be fitted. 
* *sigma* the width of the distribution, which is the polydispersity parameter times the center for size dispersity, or the polydispersity parameter alone for angular dispersity.  This parameter may be fitted. * *lb*, *ub* are the parameter limits (lower & upper bounds) given in the model definition file.  For example, a radius parameter has *lb* equal to zero.  A volume fraction parameter would have *lb* equal to zero and *ub* equal to one. * *self.nsigmas* the distance to go into the tails when evaluating the distribution.  For a two parameter distribution, this value could be co-opted to use for the second parameter, though it will not be available for fitting. * *self.npts* the number of points to use when evaluating the distribution. The user will adjust this to trade calculation time for accuracy, but the distribution code is free to return more or fewer, or use it for the third parameter in a three parameter distribution. As an example, the following code wraps the Laplace distribution from scipy stats:: import numpy as np from scipy.stats import laplace from sasmodels import weights class Dispersion(weights.Dispersion): r""" Laplace distribution .. math:: w(x) = \frac{1}{2\sigma}e^{-|x - \mu|/\sigma} """ type = "laplace" default = dict(npts=35, width=0, nsigmas=3)  # default values def _weights(self, center, sigma, lb, ub): x = self._linspace(center, sigma, lb, ub) wx = laplace.pdf(x, center, sigma) return x, wx You can plot the weights for a given value and width using the following:: from numpy import inf from matplotlib import pyplot as plt from sasmodels import weights # reload the user-defined weights weights.load_weights() x, wx = weights.get_weights('laplace', n=35, width=0.1, nsigmas=3, value=50, limits=[0, inf], relative=True) # plot the weights plt.interactive(True) plt.plot(x, wx, 'x') The *self.nsigmas* and *self.npts* parameters are normally used to control the accuracy of the distribution integral. 
The *self._linspace* function uses them to define the *x* values (along with the *center*, *sigma*, *lb*, and *ub* which are passed as parameters).  If you repurpose npts or nsigmas you will need to generate your own *x*.  Be sure to honour the limits *lb* and *ub*, for example to disallow a negative radius or constrain the volume fraction to lie between zero and one. To activate a user-defined distribution, put it in a file such as *distname.py* in the *SAS_WEIGHTS_PATH* folder.  This is defined with an environment variable, defaulting to:: SAS_WEIGHTS_PATH=~/.sasview/weights The weights path is loaded on startup.  To update the distribution definition in a running application you will need to enter the following python commands:: import sasmodels.weights sasmodels.weights.load_weights('path/to/distname.py') .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ Note about DLS polydispersity ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Several measures of polydispersity abound in Dynamic Light Scattering (DLS) and it should not be assumed that any of the following can be simply equated with Several measures of polydispersity abound in Dynamic Light Scattering (DLS) and it should not be assumed that any of the following can be simply equated with the polydispersity *PD* parameter used in SasView. 
The dimensionless **Polydispersity Index (PI)** is a measure of the width of the distribution of autocorrelation function decay rates (*not* the distribution of particle sizes itself, though the two are inversely related) and is defined by The dimensionless **Polydispersity Index (PI)** is a measure of the width of the distribution of autocorrelation function decay rates (*not* the distribution of particle sizes itself, though the two are inversely related) and is defined by ISO 22412:2017 as PI = \mu_{2} / \bar \Gamma^2 where $\mu_\text{2}$ is the second cumulant, and $\bar \Gamma^2$ is the where $\mu_\text{2}$ is the second cumulant, and $\bar \Gamma^2$ is the intensity-weighted average value, of the distribution of decay rates. PI = \sigma^2 / 2\bar \Gamma^2 where $\sigma$ is the standard deviation, allowing a **Relative Polydispersity (RP)** where $\sigma$ is the standard deviation, allowing a **Relative Polydispersity (RP)** to be defined as RP = \sigma / \bar \Gamma = \sqrt{2 \cdot PI} PI values smaller than 0.05 indicate a highly monodisperse system. Values PI values smaller than 0.05 indicate a highly monodisperse system. Values greater than 0.7 indicate significant polydispersity. The **size polydispersity P-parameter** is defined as the relative standard deviation (coefficient of variation) The **size polydispersity P-parameter** is defined as the relative standard deviation (coefficient of variation) .. math:: where $\nu$ is the variance of the distribution and $\bar R$ is the mean value of $R$. Here, the product $P \bar R$ is *equal* to the standard value of $R$. Here, the product $P \bar R$ is *equal* to the standard deviation of the Lognormal distribution.
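The relation $RP = \sqrt{2 \cdot PI}$ above can be made concrete with a short worked example (illustrative numbers, not measured data):

```python
import math

# Worked example of the relation quoted above: RP = sigma / Gamma_bar
# with PI = sigma^2 / (2 Gamma_bar^2), so RP = sqrt(2 * PI).
def relative_polydispersity(pi):
    """Relative polydispersity from the DLS polydispersity index."""
    return math.sqrt(2.0 * pi)

# The 'highly monodisperse' threshold PI = 0.05 quoted above corresponds
# to a relative width of about 32% in the decay-rate distribution.
print(round(relative_polydispersity(0.05), 3))
```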
• ## sasmodels/weights.py

 r3d58247 )) SAS_WEIGHTS_PATH = "~/.sasview/weights" def load_weights(pattern=None): # type: (str) -> None """ Load dispersion distributions matching the given glob pattern """ import logging import os import os.path import glob import traceback from .custom import load_custom_kernel_module if pattern is None: path = os.environ.get("SAS_WEIGHTS_PATH", SAS_WEIGHTS_PATH) pattern = os.path.join(path, "*.py") for filename in sorted(glob.glob(os.path.expanduser(pattern))): try: #print("loading weights from", filename) module = load_custom_kernel_module(filename) MODELS[module.Dispersion.type] = module.Dispersion except Exception as exc: logging.error(traceback.format_exc(exc)) def get_weights(disperser, n, width, nsigmas, value, limits, relative):