Changeset 12f77e9 in sasmodels


Timestamp: Oct 30, 2018 10:56:38 AM (5 years ago)
Author: Paul Kienzle <pkienzle@…>
Branches: master
Children: 4e96703
Parents: 1657e21 (diff), c6084f1 (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself. Use the (diff) links above to see all the changes relative to each parent.
Message: Merge branch 'master' into ticket-608-user-defined-weights

Files: 6 added, 20 edited

  • doc/guide/magnetism/magnetism.rst

    rbefe905 rdf87acf  
    8989 
    9090===========   ================================================================ 
    91  M0:sld       $D_M M_0$ 
    92  mtheta:sld   $\theta_M$ 
    93  mphi:sld     $\phi_M$ 
    94  up:angle     $\theta_\mathrm{up}$ 
    95  up:frac_i    $u_i$ = (spin up)/(spin up + spin down) *before* the sample 
    96  up:frac_f    $u_f$ = (spin up)/(spin up + spin down) *after* the sample 
     91 sld_M0       $D_M M_0$ 
     92 sld_mtheta   $\theta_M$ 
     93 sld_mphi     $\phi_M$ 
     94 up_frac_i    $u_i$ = (spin up)/(spin up + spin down) *before* the sample 
     95 up_frac_f    $u_f$ = (spin up)/(spin up + spin down) *after* the sample 
     96 up_angle     $\theta_\mathrm{up}$ 
    9797===========   ================================================================ 
    9898 
    9999.. note:: 
    100     The values of the 'up:frac_i' and 'up:frac_f' must be in the range 0 to 1. 
     100    The values of the 'up_frac_i' and 'up_frac_f' must be in the range 0 to 1. 
    101101 
    102102*Document History* 
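For reference, the renamed magnetic parameters in the table above can be set through the SasView-style wrapper exercised by the sasview_model tests later in this changeset. A minimal sketch, assuming sasmodels is installed and using 'sphere' only as an example model:

    from sasmodels.sasview_model import _make_standard_model

    Model = _make_standard_model('sphere')
    model = Model()
    model.setParam('sld_M0', 8)       # was 'M0:sld' before this change
    model.setParam('up_frac_i', 0.5)  # was 'up:frac_i'
    print(model.getParam('sld_M0'))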
  • doc/guide/plugin.rst

    r2015f02 r57c609b  
    428428        def random(): 
    429429        ... 
    430          
    431 This function provides a model-specific random parameter set which shows model  
    432 features in the USANS to SANS range.  For example, core-shell sphere sets the  
    433 outer radius of the sphere logarithmically in `[20, 20,000]`, which sets the Q  
    434 value for the transition from flat to falling.  It then uses a beta distribution  
    435 to set the percentage of the shape which is shell, giving a preference for very  
    436 thin or very thick shells (but never 0% or 100%).  Using `-sets=10` in sascomp  
    437 should show a reasonable variety of curves over the default sascomp q range.   
    438 The parameter set is returned as a dictionary of `{parameter: value, ...}`.   
    439 Any model parameters not included in the dictionary will default according to  
     430 
     431This function provides a model-specific random parameter set which shows model 
     432features in the USANS to SANS range.  For example, core-shell sphere sets the 
     433outer radius of the sphere logarithmically in `[20, 20,000]`, which sets the Q 
     434value for the transition from flat to falling.  It then uses a beta distribution 
     435to set the percentage of the shape which is shell, giving a preference for very 
     436thin or very thick shells (but never 0% or 100%).  Using `-sets=10` in sascomp 
     437should show a reasonable variety of curves over the default sascomp q range. 
     438The parameter set is returned as a dictionary of `{parameter: value, ...}`. 
     439Any model parameters not included in the dictionary will default according to 
    440440the code in the `_randomize_one()` function from sasmodels/compare.py. 
    441441 
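As a hedged illustration of the random() hook described above, here is a sketch for a core-shell-sphere-style model that follows the description (logarithmic outer radius in [20, 20000], beta-distributed shell fraction); the actual core_shell_sphere.random() may differ in detail:

    import numpy as np

    def random():
        """Return a model-specific random parameter set as {parameter: value, ...}."""
        outer_radius = 10**np.random.uniform(np.log10(20), np.log10(20000))
        shell_fraction = np.random.beta(0.5, 0.5)   # favors very thin or very thick shells
        thickness = shell_fraction * outer_radius
        pars = dict(
            radius=outer_radius - thickness,   # core radius
            thickness=thickness,               # shell thickness
        )
        return pars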
     
    701701    erf, erfc, tgamma, lgamma:  **do not use** 
    702702        Special functions that should be part of the standard, but are missing 
    703         or inaccurate on some platforms. Use sas_erf, sas_erfc and sas_gamma 
    704         instead (see below). Note: lgamma(x) has not yet been tested. 
     703        or inaccurate on some platforms. Use sas_erf, sas_erfc, sas_gamma 
     704        and sas_gammaln instead (see below). 
    705705 
    706706Some non-standard constants and functions are also provided: 
     
    769769        Gamma function sas_gamma\ $(x) = \Gamma(x)$. 
    770770 
    771         The standard math function, tgamma(x) is unstable for $x < 1$ 
     771        The standard math function, tgamma(x), is unstable for $x < 1$ 
    772772        on some platforms. 
    773773 
    774774        :code:`source = ["lib/sas_gamma.c", ...]` 
    775775        (`sas_gamma.c <https://github.com/SasView/sasmodels/tree/master/sasmodels/models/lib/sas_gamma.c>`_) 
     776 
     777    sas_gammaln(x): 
     778        log gamma function sas_gammaln\ $(x) = \log \Gamma(|x|)$. 
     779 
     780        The standard math function, lgamma(x), is incorrect for single 
     781        precision on some platforms. 
     782 
     783        :code:`source = ["lib/sas_gammainc.c", ...]` 
     784        (`sas_gammainc.c <https://github.com/SasView/sasmodels/tree/master/sasmodels/models/lib/sas_gammainc.c>`_) 
     785 
     786    sas_gammainc(a, x), sas_gammaincc(a, x): 
     787        Incomplete gamma function 
     788        sas_gammainc\ $(a, x) = \int_0^x t^{a-1}e^{-t}\,dt / \Gamma(a)$ 
     789        and complementary incomplete gamma function 
     790        sas_gammaincc\ $(a, x) = \int_x^\infty t^{a-1}e^{-t}\,dt / \Gamma(a)$ 
     791 
     792        :code:`source = ["lib/sas_gammainc.c", ...]` 
     793        (`sas_gammainc.c <https://github.com/SasView/sasmodels/tree/master/sasmodels/models/lib/sas_gammainc.c>`_) 
    776794 
    777795    sas_erf(x), sas_erfc(x): 
     
    811829        If $n$ = 0 or 1, it uses sas_J0($x$) or sas_J1($x$), respectively. 
    812830 
     831        Warning: JN(n,x) can be very inaccurate (0.1%) for x not in [0.1, 100]. 
     832 
    813833        The standard math function jn(n, x) is not available on all platforms. 
    814834 
     
    819839        Sine integral Si\ $(x) = \int_0^x \tfrac{\sin t}{t}\,dt$. 
    820840 
     841        Warning: Si(x) can be very inaccurate (0.1%) for x in [0.1, 100]. 
     842 
    821843        This function uses Taylor series for small and large arguments: 
    822844 
    823         For large arguments, 
     845        For large arguments use the following Taylor series, 
    824846 
    825847        .. math:: 
     
    829851             - \frac{\sin(x)}{x}\left(\frac{1}{x} - \frac{3!}{x^3} + \frac{5!}{x^5} - \frac{7!}{x^7}\right) 
    830852 
    831         For small arguments, 
      853        For small arguments, 
    832854 
    833855        .. math:: 
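To show how a plugin model would pull these helpers in, here is a hypothetical fragment (not a tested model) that uses the gamma-family functions from lib/sas_gammainc.c via the model's source list and the c_code attribute; the model name and parameter are made up for illustration:

    name = "incomplete_gamma_example"   # hypothetical model name
    title = "Example using sas_gammainc"
    parameters = [
        # name, units, default, limits, type, description
        ["a", "", 1.0, [0, float("inf")], "", "shape parameter"],
    ]
    source = ["lib/sas_gammainc.c"]     # provides sas_gammaln, sas_gammainc, sas_gammaincc
    c_code = """
    double Iq(double q, double a)
    {
        // regularized lower incomplete gamma function P(a, q)
        return sas_gammainc(a, q);
    }
    """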
  • explore/precision.py

    r2a7e20e rfba9ca0  
    9595            neg:    [-100,100] 
    9696 
     97        For arbitrary range use "start:stop:steps:scale" where scale is 
     98        one of log, lin, or linear. 
     99 
    97100        *diff* is "relative", "absolute" or "none" 
    98101 
     
    102105        linear = not xrange.startswith("log") 
    103106        if xrange == "zoom": 
    104             lin_min, lin_max, lin_steps = 1000, 1010, 2000 
     107            start, stop, steps = 1000, 1010, 2000 
    105108        elif xrange == "neg": 
    106             lin_min, lin_max, lin_steps = -100.1, 100.1, 2000 
     109            start, stop, steps = -100.1, 100.1, 2000 
    107110        elif xrange == "linear": 
    108             lin_min, lin_max, lin_steps = 1, 1000, 2000 
    109             lin_min, lin_max, lin_steps = 0.001, 2, 2000 
     111            start, stop, steps = 1, 1000, 2000 
     112            start, stop, steps = 0.001, 2, 2000 
    110113        elif xrange == "log": 
    111             log_min, log_max, log_steps = -3, 5, 400 
     114            start, stop, steps = -3, 5, 400 
    112115        elif xrange == "logq": 
    113             log_min, log_max, log_steps = -4, 1, 400 
     116            start, stop, steps = -4, 1, 400 
     117        elif ':' in xrange: 
     118            parts = xrange.split(':') 
     119            linear = parts[3] != "log" if len(parts) == 4 else True 
     120            steps = int(parts[2]) if len(parts) > 2 else 400 
     121            start = float(parts[0]) 
     122            stop = float(parts[1]) 
     123 
    114124        else: 
    115125            raise ValueError("unknown range "+xrange) 
     
    121131            # value to x in the given precision. 
    122132            if linear: 
    123                 lin_min = max(lin_min, self.limits[0]) 
    124                 lin_max = min(lin_max, self.limits[1]) 
    125                 qrf = np.linspace(lin_min, lin_max, lin_steps, dtype='single') 
    126                 #qrf = np.linspace(lin_min, lin_max, lin_steps, dtype='double') 
     133                start = max(start, self.limits[0]) 
     134                stop = min(stop, self.limits[1]) 
     135                qrf = np.linspace(start, stop, steps, dtype='single') 
     136                #qrf = np.linspace(start, stop, steps, dtype='double') 
    127137                qr = [mp.mpf(float(v)) for v in qrf] 
    128                 #qr = mp.linspace(lin_min, lin_max, lin_steps) 
     138                #qr = mp.linspace(start, stop, steps) 
    129139            else: 
    130                 log_min = np.log10(max(10**log_min, self.limits[0])) 
    131                 log_max = np.log10(min(10**log_max, self.limits[1])) 
    132                 qrf = np.logspace(log_min, log_max, log_steps, dtype='single') 
    133                 #qrf = np.logspace(log_min, log_max, log_steps, dtype='double') 
     140                start = np.log10(max(10**start, self.limits[0])) 
     141                stop = np.log10(min(10**stop, self.limits[1])) 
     142                qrf = np.logspace(start, stop, steps, dtype='single') 
     143                #qrf = np.logspace(start, stop, steps, dtype='double') 
    134144                qr = [mp.mpf(float(v)) for v in qrf] 
    135                 #qr = [10**v for v in mp.linspace(log_min, log_max, log_steps)] 
     145                #qr = [10**v for v in mp.linspace(start, stop, steps)] 
    136146 
    137147        target = self.call_mpmath(qr, bits=500) 
     
    176186    """ 
    177187    if diff == "relative": 
    178         err = np.array([abs((t-a)/t) for t, a in zip(target, actual)], 'd') 
     188        err = np.array([(abs((t-a)/t) if t != 0 else a) for t, a in zip(target, actual)], 'd') 
    179189        #err = np.clip(err, 0, 1) 
    180190        pylab.loglog(x, err, '-', label=label) 
     
    197207    return model_info 
    198208 
     209# Hack to allow second parameter A in two parameter functions 
     210A = 1 
     211def parse_extra_pars(): 
     212    global A 
     213 
     214    A_str = str(A) 
     215    pop = [] 
     216    for k, v in enumerate(sys.argv[1:]): 
     217        if v.startswith("A="): 
     218            A_str = v[2:] 
     219            pop.append(k+1) 
     220    if pop: 
     221        sys.argv = [v for k, v in enumerate(sys.argv) if k not in pop] 
     222        A = float(A_str) 
     223 
     224parse_extra_pars() 
     225 
    199226 
    200227# =============== FUNCTION DEFINITIONS ================ 
     
    297324    ocl_function=make_ocl("return sas_gamma(q);", "sas_gamma", ["lib/sas_gamma.c"]), 
    298325    limits=(-3.1, 10), 
     326) 
     327add_function( 
     328    name="gammaln(x)", 
     329    mp_function=mp.loggamma, 
     330    np_function=scipy.special.gammaln, 
     331    ocl_function=make_ocl("return sas_gammaln(q);", "sas_gammaln", ["lib/sas_gammainc.c"]), 
     332    #ocl_function=make_ocl("return lgamma(q);", "sas_gammaln"), 
     333) 
     334add_function( 
     335    name="gammainc(x)", 
     336    mp_function=lambda x, a=A: mp.gammainc(a, a=0, b=x)/mp.gamma(a), 
     337    np_function=lambda x, a=A: scipy.special.gammainc(a, x), 
     338    ocl_function=make_ocl("return sas_gammainc(%.15g,q);"%A, "sas_gammainc", ["lib/sas_gammainc.c"]), 
     339) 
     340add_function( 
     341    name="gammaincc(x)", 
     342    mp_function=lambda x, a=A: mp.gammainc(a, a=x, b=mp.inf)/mp.gamma(a), 
     343    np_function=lambda x, a=A: scipy.special.gammaincc(a, x), 
     344    ocl_function=make_ocl("return sas_gammaincc(%.15g,q);"%A, "sas_gammaincc", ["lib/sas_gammainc.c"]), 
    299345) 
    300346add_function( 
     
    463509lanczos_gamma = """\ 
    464510    const double coeff[] = { 
    465             76.18009172947146,     -86.50532032941677, 
    466             24.01409824083091,     -1.231739572450155, 
     511            76.18009172947146, -86.50532032941677, 
     512            24.01409824083091, -1.231739572450155, 
    467513            0.1208650973866179e-2,-0.5395239384953e-5 
    468514            }; 
     
    475521""" 
    476522add_function( 
    477     name="log gamma(x)", 
     523    name="loggamma(x)", 
    478524    mp_function=mp.loggamma, 
    479525    np_function=scipy.special.gammaln, 
     
    599645 
    600646ALL_FUNCTIONS = set(FUNCTIONS.keys()) 
    601 ALL_FUNCTIONS.discard("loggamma")  # OCL version not ready yet 
     647ALL_FUNCTIONS.discard("loggamma")  # use cephes-based gammaln instead 
    602648ALL_FUNCTIONS.discard("3j1/x:taylor") 
    603649ALL_FUNCTIONS.discard("3j1/x:trig") 
     
    615661    -r indicates that the relative error should be plotted (default), 
    616662    -x<range> indicates the steps in x, where <range> is one of the following 
    617       log indicates log stepping in [10^-3, 10^5] (default) 
    618       logq indicates log stepping in [10^-4, 10^1] 
    619       linear indicates linear stepping in [1, 1000] 
    620       zoom indicates linear stepping in [1000, 1010] 
    621       neg indicates linear stepping in [-100.1, 100.1] 
    622 and name is "all" or one of: 
     663        log indicates log stepping in [10^-3, 10^5] (default) 
     664        logq indicates log stepping in [10^-4, 10^1] 
     665        linear indicates linear stepping in [1, 1000] 
     666        zoom indicates linear stepping in [1000, 1010] 
     667        neg indicates linear stepping in [-100.1, 100.1] 
     668        start:stop:n[:stepping] indicates an n-step plot in [start, stop] 
     669            or [10^start, 10^stop] if stepping is "log" (default n=400) 
     670Some functions (notably gammainc/gammaincc) have an additional parameter A 
     671which can be set from the command line as A=value.  Default is A=1. 
     672 
     673Name is one of: 
    623674    """+names) 
    624675    sys.exit(1) 
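The new arbitrary-range option can be exercised on its own; the following standalone sketch mirrors the "start:stop:steps[:scale]" parsing added above (the function name is mine):

    def parse_xrange(xrange, default_steps=400):
        """Parse "start:stop[:steps[:scale]]" into (start, stop, steps, linear)."""
        parts = xrange.split(':')
        linear = parts[3] != "log" if len(parts) == 4 else True
        steps = int(parts[2]) if len(parts) > 2 else default_steps
        start, stop = float(parts[0]), float(parts[1])
        return start, stop, steps, linear

    print(parse_xrange("-3:5:400:log"))   # (-3.0, 5.0, 400, False)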
  • sasmodels/__init__.py

    re65c3ba ra1ec908  
    1414defining new models. 
    1515""" 
    16 __version__ = "0.97" 
     16__version__ = "0.98" 
    1717 
    1818def data_files(): 
  • sasmodels/compare.py

    r01dba26 r12f77e9  
    369369 
    370370    # Limit magnetic SLDs to a smaller range, from zero to iron=5/A^2 
    371     if par.name.startswith('M0:'): 
     371    if par.name.endswith('_M0'): 
    372372        return np.random.uniform(0, 5) 
    373373 
     
    539539    magnetic_pars = [] 
    540540    for p in parameters.user_parameters(pars, is2d): 
    541         if any(p.id.startswith(x) for x in ('M0:', 'mtheta:', 'mphi:')): 
     541        if any(p.id.endswith(x) for x in ('_M0', '_mtheta', '_mphi')): 
    542542            continue 
    543543        if p.id.startswith('up:'): 
     
    551551            pdtype=pars.get(p.id+"_pd_type", 'gaussian'), 
    552552            relative_pd=p.relative_pd, 
    553             M0=pars.get('M0:'+p.id, 0.), 
    554             mphi=pars.get('mphi:'+p.id, 0.), 
    555             mtheta=pars.get('mtheta:'+p.id, 0.), 
     553            M0=pars.get(p.id+'_M0', 0.), 
     554            mphi=pars.get(p.id+'_mphi', 0.), 
     555            mtheta=pars.get(p.id+'_mtheta', 0.), 
    556556        ) 
    557557        lines.append(_format_par(p.name, **fields)) 
     
    619619    if suppress: 
    620620        for p in pars: 
    621             if p.startswith("M0:"): 
     621            if p.endswith("_M0"): 
    622622                pars[p] = 0 
    623623    else: 
     
    625625        first_mag = None 
    626626        for p in pars: 
    627             if p.startswith("M0:"): 
     627            if p.endswith("_M0"): 
    628628                any_mag |= (pars[p] != 0) 
    629629                if first_mag is None: 
  • sasmodels/convert.py

    ra69d8cd r610ef23  
    165165    if version == (3, 1, 2): 
    166166        oldpars = _hand_convert_3_1_2_to_4_1(name, oldpars) 
     167    if version < (4, 2, 0): 
     168        oldpars = _rename_magnetic_pars(oldpars) 
    167169    return oldpars 
     170 
     171def _rename_magnetic_pars(pars): 
     172    """ 
     173    Change from M0:par to par_M0, etc. 
     174    """ 
     175    keys = list(pars.items()) 
     176    for k in keys: 
     177        if k.startswith('M0:'): 
     178            pars[k[3:]+'_M0'] = pars.pop(k) 
     179        elif k.startswith('mtheta:'): 
     180            pars[k[7:]+'_mtheta'] = pars.pop(k) 
     181        elif k.startswith('mphi:'): 
     182            pars[k[5:]+'_mphi'] = pars.pop(k) 
     183        elif k.startswith('up:'): 
     184            pars['up_'+k[3:]] = pars.pop(k) 
     185    return pars 
    168186 
    169187def _hand_convert_3_1_2_to_4_1(name, oldpars): 
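A standalone illustration of the renaming performed by _rename_magnetic_pars above; the 4.1-style parameter dict below is made up, and the sketch iterates over a snapshot of the keys:

    def rename_magnetic_pars(pars):
        """Change 'M0:par' style names to 'par_M0' style names in place."""
        for k in list(pars.keys()):
            if k.startswith('M0:'):
                pars[k[3:] + '_M0'] = pars.pop(k)
            elif k.startswith('mtheta:'):
                pars[k[7:] + '_mtheta'] = pars.pop(k)
            elif k.startswith('mphi:'):
                pars[k[5:] + '_mphi'] = pars.pop(k)
            elif k.startswith('up:'):
                pars['up_' + k[3:]] = pars.pop(k)
        return pars

    old = {'M0:sld': 8, 'mtheta:sld': 0, 'up:frac_i': 0.5, 'radius': 60}
    print(rename_magnetic_pars(old))
    # {'radius': 60, 'sld_M0': 8, 'sld_mtheta': 0, 'up_frac_i': 0.5}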
  • sasmodels/custom/__init__.py

    r0f48f1e rd321747  
    1212import sys 
    1313import os 
    14 from os.path import basename, splitext 
     14from os.path import basename, splitext, join as joinpath, exists, dirname 
    1515 
    1616try: 
     
    1818    from importlib.util import spec_from_file_location, module_from_spec  # type: ignore 
    1919    def load_module_from_path(fullname, path): 
     20        # type: (str, str) -> "module" 
    2021        """load module from *path* as *fullname*""" 
    2122        spec = spec_from_file_location(fullname, os.path.expanduser(path)) 
     
    2728    import imp 
    2829    def load_module_from_path(fullname, path): 
     30        # type: (str, str) -> "module" 
    2931        """load module from *path* as *fullname*""" 
    3032        # Clear out old definitions, if any 
     
    3739        return module 
    3840 
     41_MODULE_CACHE = {} # type: Dict[str, Tuple("module", int)] 
     42_MODULE_DEPENDS = {} # type: Dict[str, List[str]] 
     43_MODULE_DEPENDS_STACK = [] # type: List[str] 
    3944def load_custom_kernel_module(path): 
     45    # type: str -> "module" 
    4046    """load SAS kernel from *path* as *sasmodels.custom.modelname*""" 
    4147    # Pull off the last .ext if it exists; there may be others 
    4248    name = basename(splitext(path)[0]) 
    43     # Placing the model in the 'sasmodels.custom' name space. 
    44     kernel_module = load_module_from_path('sasmodels.custom.'+name, 
    45                                           os.path.expanduser(path)) 
    46     return kernel_module 
     49    path = os.path.expanduser(path) 
     50 
     51    # Reload module if necessary. 
     52    if need_reload(path): 
     53        # Assume the module file is the only dependency 
     54        _MODULE_DEPENDS[path] = set([path]) 
     55 
     56        # Load the module while pushing it onto the dependency stack.  If 
     57        # this triggers any submodules, then they will add their dependencies 
     58        # to this module as the "working_on" parent.  Pop the stack when the 
     59        # module is loaded. 
     60        _MODULE_DEPENDS_STACK.append(path) 
     61        module = load_module_from_path('sasmodels.custom.'+name, path) 
     62        _MODULE_DEPENDS_STACK.pop() 
     63 
     64        # Include external C code in the dependencies.  We are looking 
     65        # for module.source and assuming that it is a list of C source files 
     66        # relative to the module itself.  Any files that do not exist, 
     67        # such as those in the standard libraries, will be ignored. 
     68        # TODO: look in builtin module path for standard c sources 
     69        # TODO: share code with generate.model_sources 
     70        c_sources = getattr(module, 'source', None) 
     71        if isinstance(c_sources, (list, tuple)): 
     72            _MODULE_DEPENDS[path].update(_find_sources(path, c_sources)) 
     73 
     74        # Cache the module, and tag it with the newest timestamp 
     75        timestamp = max(os.path.getmtime(f) for f in _MODULE_DEPENDS[path]) 
     76        _MODULE_CACHE[path] = module, timestamp 
     77 
     78        #print("loading", os.path.basename(path), _MODULE_CACHE[path][1], 
     79        #    [os.path.basename(p) for p in _MODULE_DEPENDS[path]]) 
     80 
     81    # Add path and all its dependence to the parent module, if there is one. 
     82    if _MODULE_DEPENDS_STACK: 
     83        working_on = _MODULE_DEPENDS_STACK[-1] 
     84        _MODULE_DEPENDS[working_on].update(_MODULE_DEPENDS[path]) 
     85 
     86    return _MODULE_CACHE[path][0] 
     87 
     88def need_reload(path): 
     89    # type: str -> bool 
     90    """ 
     91    Return True if any path dependencies have a timestamp newer than the time 
     92    when the path was most recently loaded. 
     93    """ 
     94    # TODO: fails if a dependency has a modification time in the future 
     95    # If the newest dependency has a time stamp in the future, then this 
     96    # will be recorded as the cached time.  When a second dependency 
     97    # is updated to the current time stamp, it will still be considered 
     98    # older than the current build and the reload will not be triggered. 
     99    # Could instead treat all future times as 0 here and in the code above 
     100    # which records the newest timestamp.  This will force a reload when 
     101    # the future time is reached, but other than that should perform 
     102    # correctly.  Probably not worth the extra code... 
     103    _, cache_time = _MODULE_CACHE.get(path, (None, -1)) 
     104    depends = _MODULE_DEPENDS.get(path, [path]) 
     105    #print("reload", any(cache_time < os.path.getmtime(p) for p in depends)) 
     106    #for f in depends: print(">>>  ", f, os.path.getmtime(f)) 
     107    return any(cache_time < os.path.getmtime(p) for p in depends) 
     108 
     109def _find_sources(path, source_list): 
     110    # type: (str, List[str]) -> List[str] 
     111    """ 
     112    Return a list of the sources relative to base file; ignore any that 
     113    are not found. 
     114    """ 
     115    root = dirname(path) 
     116    found = [] 
     117    for source_name in source_list: 
     118        source_path = joinpath(root, source_name) 
     119        if exists(source_path): 
     120            found.append(source_path) 
     121    return found 
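Hypothetical usage of the dependency-aware loader above (requires sasmodels; the plugin path is made up):

    from sasmodels.custom import load_custom_kernel_module

    module = load_custom_kernel_module("~/.sasview/plugin_models/my_model.py")
    # Editing my_model.py, or any existing C file listed in its `source`
    # attribute, bumps the newest dependency timestamp, so the next call
    # reloads the module instead of returning the cached copy.
    module = load_custom_kernel_module("~/.sasview/plugin_models/my_model.py")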
  • sasmodels/kernelpy.py

    r108e70e r12eec1e  
    3737        self.info = model_info 
    3838        self.dtype = np.dtype('d') 
    39         logger.info("load python model " + self.info.name) 
     39        logger.info("make python model " + self.info.name) 
    4040 
    4141    def make_kernel(self, q_vectors): 
  • sasmodels/model_test.py

    r012cd34 r12eec1e  
    4747import sys 
    4848import unittest 
     49import traceback 
    4950 
    5051try: 
     
    7475# pylint: enable=unused-import 
    7576 
    76  
    7777def make_suite(loaders, models): 
    7878    # type: (List[str], List[str]) -> unittest.TestSuite 
     
    8686    *models* is the list of models to test, or *["all"]* to test all models. 
    8787    """ 
    88     ModelTestCase = _hide_model_case_from_nose() 
    8988    suite = unittest.TestSuite() 
    9089 
     
    9594        skip = [] 
    9695    for model_name in models: 
    97         if model_name in skip: 
    98             continue 
    99         model_info = load_model_info(model_name) 
    100  
    101         #print('------') 
    102         #print('found tests in', model_name) 
    103         #print('------') 
    104  
    105         # if ispy then use the dll loader to call pykernel 
    106         # don't try to call cl kernel since it will not be 
    107         # available in some environments. 
    108         is_py = callable(model_info.Iq) 
    109  
    110         # Some OpenCL drivers seem to be flaky, and are not producing the 
    111         # expected result.  Since we don't have known test values yet for 
    112         # all of our models, we are instead going to compare the results 
    113         # for the 'smoke test' (that is, evaluation at q=0.1 for the default 
    114         # parameters just to see that the model runs to completion) between 
    115         # the OpenCL and the DLL.  To do this, we define a 'stash' which is 
    116         # shared between OpenCL and DLL tests.  This is just a list.  If the 
    117         # list is empty (which it will be when DLL runs, if the DLL runs 
    118         # first), then the results are appended to the list.  If the list 
    119         # is not empty (which it will be when OpenCL runs second), the results 
    120         # are compared to the results stored in the first element of the list. 
    121         # This is a horrible stateful hack which only makes sense because the 
    122         # test suite is thrown away after being run once. 
    123         stash = [] 
    124  
    125         if is_py:  # kernel implemented in python 
    126             test_name = "%s-python"%model_name 
    127             test_method_name = "test_%s_python" % model_info.id 
     96        if model_name not in skip: 
     97            model_info = load_model_info(model_name) 
     98            _add_model_to_suite(loaders, suite, model_info) 
     99 
     100    return suite 
     101 
     102def _add_model_to_suite(loaders, suite, model_info): 
     103    ModelTestCase = _hide_model_case_from_nose() 
     104 
     105    #print('------') 
     106    #print('found tests in', model_name) 
     107    #print('------') 
     108 
     109    # if ispy then use the dll loader to call pykernel 
     110    # don't try to call cl kernel since it will not be 
     111    # available in some environments. 
     112    is_py = callable(model_info.Iq) 
     113 
     114    # Some OpenCL drivers seem to be flaky, and are not producing the 
     115    # expected result.  Since we don't have known test values yet for 
     116    # all of our models, we are instead going to compare the results 
     117    # for the 'smoke test' (that is, evaluation at q=0.1 for the default 
     118    # parameters just to see that the model runs to completion) between 
     119    # the OpenCL and the DLL.  To do this, we define a 'stash' which is 
     120    # shared between OpenCL and DLL tests.  This is just a list.  If the 
     121    # list is empty (which it will be when DLL runs, if the DLL runs 
     122    # first), then the results are appended to the list.  If the list 
     123    # is not empty (which it will be when OpenCL runs second), the results 
     124    # are compared to the results stored in the first element of the list. 
     125    # This is a horrible stateful hack which only makes sense because the 
     126    # test suite is thrown away after being run once. 
     127    stash = [] 
     128 
     129    if is_py:  # kernel implemented in python 
     130        test_name = "%s-python"%model_info.name 
     131        test_method_name = "test_%s_python" % model_info.id 
     132        test = ModelTestCase(test_name, model_info, 
     133                                test_method_name, 
     134                                platform="dll",  # so that 
     135                                dtype="double", 
     136                                stash=stash) 
     137        suite.addTest(test) 
     138    else:   # kernel implemented in C 
     139 
     140        # test using dll if desired 
     141        if 'dll' in loaders or not use_opencl(): 
     142            test_name = "%s-dll"%model_info.name 
     143            test_method_name = "test_%s_dll" % model_info.id 
    128144            test = ModelTestCase(test_name, model_info, 
    129                                  test_method_name, 
    130                                  platform="dll",  # so that 
    131                                  dtype="double", 
    132                                  stash=stash) 
     145                                    test_method_name, 
     146                                    platform="dll", 
     147                                    dtype="double", 
     148                                    stash=stash) 
    133149            suite.addTest(test) 
    134         else:   # kernel implemented in C 
    135  
    136             # test using dll if desired 
    137             if 'dll' in loaders or not use_opencl(): 
    138                 test_name = "%s-dll"%model_name 
    139                 test_method_name = "test_%s_dll" % model_info.id 
    140                 test = ModelTestCase(test_name, model_info, 
    141                                      test_method_name, 
    142                                      platform="dll", 
    143                                      dtype="double", 
    144                                      stash=stash) 
    145                 suite.addTest(test) 
    146  
    147             # test using opencl if desired and available 
    148             if 'opencl' in loaders and use_opencl(): 
    149                 test_name = "%s-opencl"%model_name 
    150                 test_method_name = "test_%s_opencl" % model_info.id 
    151                 # Using dtype=None so that the models that are only 
    152                 # correct for double precision are not tested using 
    153                 # single precision.  The choice is determined by the 
    154                 # presence of *single=False* in the model file. 
    155                 test = ModelTestCase(test_name, model_info, 
    156                                      test_method_name, 
    157                                      platform="ocl", dtype=None, 
    158                                      stash=stash) 
    159                 #print("defining", test_name) 
    160                 suite.addTest(test) 
    161  
    162     return suite 
     150 
     151        # test using opencl if desired and available 
     152        if 'opencl' in loaders and use_opencl(): 
     153            test_name = "%s-opencl"%model_info.name 
     154            test_method_name = "test_%s_opencl" % model_info.id 
     155            # Using dtype=None so that the models that are only 
     156            # correct for double precision are not tested using 
     157            # single precision.  The choice is determined by the 
     158            # presence of *single=False* in the model file. 
     159            test = ModelTestCase(test_name, model_info, 
     160                                    test_method_name, 
     161                                    platform="ocl", dtype=None, 
     162                                    stash=stash) 
     163            #print("defining", test_name) 
     164            suite.addTest(test) 
     165 
    163166 
    164167def _hide_model_case_from_nose(): 
     
    348351    return abs(target-actual)/shift < 1.5*10**-digits 
    349352 
    350 def run_one(model): 
    351     # type: (str) -> str 
    352     """ 
    353     Run the tests for a single model, printing the results to stdout. 
    354  
    355     *model* can by a python file, which is handy for checking user defined 
    356     plugin models. 
     353# CRUFT: old interface; should be deprecated and removed 
     354def run_one(model_name): 
     355    # msg = "use check_model(model_info) rather than run_one(model_name)" 
     356    # warnings.warn(msg, category=DeprecationWarning, stacklevel=2) 
     357    try: 
     358        model_info = load_model_info(model_name) 
     359    except Exception: 
     360        output = traceback.format_exc() 
     361        return output 
     362 
     363    success, output = check_model(model_info) 
     364    return output 
     365 
     366def check_model(model_info): 
     367    # type: (ModelInfo) -> str 
     368    """ 
     369    Run the tests for a single model, capturing the output. 
     370 
     371    Returns success status and the output string. 
    357372    """ 
    358373    # Note that running main() directly did not work from within the 
     
    369384    # Build a test suite containing just the model 
    370385    loaders = ['opencl'] if use_opencl() else ['dll'] 
    371     models = [model] 
    372     try: 
    373         suite = make_suite(loaders, models) 
    374     except Exception: 
    375         import traceback 
    376         stream.writeln(traceback.format_exc()) 
    377         return 
     386    suite = unittest.TestSuite() 
     387    _add_model_to_suite(loaders, suite, model_info) 
    378388 
    379389    # Warn if there are no user defined tests. 
     
    390400    for test in suite: 
    391401        if not test.info.tests: 
    392             stream.writeln("Note: %s has no user defined tests."%model) 
     402            stream.writeln("Note: %s has no user defined tests."%model_info.name) 
    393403        break 
    394404    else: 
     
    406416    output = stream.getvalue() 
    407417    stream.close() 
    408     return output 
     418    return result.wasSuccessful(), output 
    409419 
    410420 
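Hedged example of the new check_model() entry point added above (requires sasmodels; 'cylinder' is only an example model):

    from sasmodels.core import load_model_info
    from sasmodels.model_test import check_model

    model_info = load_model_info("cylinder")
    success, output = check_model(model_info)
    print("PASS" if success else "FAIL")
    print(output)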
  • sasmodels/modelinfo.py

    r7b9e4dd rbd547d0  
    466466        self.is_asymmetric = any(p.name == 'psi' for p in self.kernel_parameters) 
    467467        self.magnetism_index = [k for k, p in enumerate(self.call_parameters) 
    468                                 if p.id.startswith('M0:')] 
     468                                if p.id.endswith('_M0')] 
    469469 
    470470        self.pd_1d = set(p.name for p in self.call_parameters 
     
    586586        if self.nmagnetic > 0: 
    587587            full_list.extend([ 
    588                 Parameter('up:frac_i', '', 0., [0., 1.], 
     588                Parameter('up_frac_i', '', 0., [0., 1.], 
    589589                          'magnetic', 'fraction of spin up incident'), 
    590                 Parameter('up:frac_f', '', 0., [0., 1.], 
     590                Parameter('up_frac_f', '', 0., [0., 1.], 
    591591                          'magnetic', 'fraction of spin up final'), 
    592                 Parameter('up:angle', 'degrees', 0., [0., 360.], 
     592                Parameter('up_angle', 'degrees', 0., [0., 360.], 
    593593                          'magnetic', 'spin up angle'), 
    594594            ]) 
     
    596596            for p in slds: 
    597597                full_list.extend([ 
    598                     Parameter('M0:'+p.id, '1e-6/Ang^2', 0., [-np.inf, np.inf], 
     598                    Parameter(p.id+'_M0', '1e-6/Ang^2', 0., [-np.inf, np.inf], 
    599599                              'magnetic', 'magnetic amplitude for '+p.description), 
    600                     Parameter('mtheta:'+p.id, 'degrees', 0., [-90., 90.], 
     600                    Parameter(p.id+'_mtheta', 'degrees', 0., [-90., 90.], 
    601601                              'magnetic', 'magnetic latitude for '+p.description), 
    602                     Parameter('mphi:'+p.id, 'degrees', 0., [-180., 180.], 
     602                    Parameter(p.id+'_mphi', 'degrees', 0., [-180., 180.], 
    603603                              'magnetic', 'magnetic longitude for '+p.description), 
    604604                ]) 
     
    640640 
    641641        Parameters marked as sld will automatically have a set of associated 
    642         magnetic parameters (m0:p, mtheta:p, mphi:p), as well as polarization 
    643         information (up:theta, up:frac_i, up:frac_f). 
     642        magnetic parameters (p_M0, p_mtheta, p_mphi), as well as polarization 
     643        information (up_theta, up_frac_i, up_frac_f). 
    644644        """ 
    645645        # control parameters go first 
     
    668668            result.append(expanded_pars[name]) 
    669669            if is2d: 
    670                 for tag in 'M0:', 'mtheta:', 'mphi:': 
    671                     if tag+name in expanded_pars: 
    672                         result.append(expanded_pars[tag+name]) 
     670                for tag in '_M0', '_mtheta', '_mphi': 
     671                    if name+tag in expanded_pars: 
     672                        result.append(expanded_pars[name+tag]) 
    673673 
    674674        # Gather the user parameters in order 
     
    703703                append_group(p.id) 
    704704 
    705         if is2d and 'up:angle' in expanded_pars: 
     705        if is2d and 'up_angle' in expanded_pars: 
    706706            result.extend([ 
    707                 expanded_pars['up:frac_i'], 
    708                 expanded_pars['up:frac_f'], 
    709                 expanded_pars['up:angle'], 
     707                expanded_pars['up_frac_i'], 
     708                expanded_pars['up_frac_f'], 
     709                expanded_pars['up_angle'], 
    710710            ]) 
    711711 
     
    793793    info.structure_factor = getattr(kernel_module, 'structure_factor', False) 
    794794    info.profile_axes = getattr(kernel_module, 'profile_axes', ['x', 'y']) 
     795    # Note: custom.load_custom_kernel_module assumes the C sources are defined 
     796    # by this attribute. 
    795797    info.source = getattr(kernel_module, 'source', []) 
    796798    info.c_code = getattr(kernel_module, 'c_code', None) 
     
    10141016                         for k in range(control+1, p.length+1) 
    10151017                         if p.length > 1) 
     1018            for p in self.parameters.kernel_parameters: 
     1019                if p.length > 1 and p.type == "sld": 
     1020                    for k in range(control+1, p.length+1): 
     1021                        base = p.id+str(k) 
     1022                        hidden.update((base+"_M0", base+"_mtheta", base+"_mphi")) 
    10161023        return hidden 
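A quick way to confirm the generated magnetic parameter names, assuming sasmodels is importable ('sphere' is only an example; the exact ordering may differ):

    from sasmodels.core import load_model_info

    info = load_model_info("sphere")
    magnetic = [p.id for p in info.parameters.call_parameters
                if p.type == "magnetic"]
    print(magnetic)
    # e.g. ['up_frac_i', 'up_frac_f', 'up_angle',
    #       'sld_M0', 'sld_mtheta', 'sld_mphi',
    #       'sld_solvent_M0', 'sld_solvent_mtheta', 'sld_solvent_mphi']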
  • sasmodels/models/bcc_paracrystal.py

    r2d81cfe rda7b26b  
    11r""" 
     2.. warning:: This model and this model description are under review following  
     3             concerns raised by SasView users. If you need to use this model,  
     4             please email help@sasview.org for the latest situation. *The  
     5             SasView Developers. September 2018.* 
     6 
    27Definition 
    38---------- 
     
    1318 
    1419    I(q) = \frac{\text{scale}}{V_p} V_\text{lattice} P(q) Z(q) 
    15  
    1620 
    1721where *scale* is the volume fraction of spheres, $V_p$ is the volume of the 
     
    97101 
    98102Authorship and Verification 
    99 ---------------------------- 
     103--------------------------- 
    100104 
    101105* **Author:** NIST IGOR/DANSE **Date:** pre 2010 
  • sasmodels/models/be_polyelectrolyte.py

    ref07e95 rca77fc1  
    11r""" 
     2.. note:: Please read the Validation section below. 
     3 
    24Definition 
    35---------- 
     
    1113 
    1214    I(q) = K\frac{q^2+k^2}{4\pi L_b\alpha ^2} 
    13     \frac{1}{1+r_{0}^2(q^2+k^2)(q^2-12hC_a/b^2)} + background 
     15    \frac{1}{1+r_{0}^4(q^2+k^2)(q^2-12hC_a/b^2)} + background 
    1416 
    1517    k^2 = 4\pi L_b(2C_s + \alpha C_a) 
    1618 
    17     r_{0}^2 = \frac{1}{\alpha \sqrt{C_a} \left( b/\sqrt{48\pi L_b}\right)} 
     19    r_{0}^2 = \frac{b}{\alpha \sqrt{C_a 48\pi L_b}} 
    1820 
    1921where 
    2022 
    2123$K$ is the contrast factor for the polymer which is defined differently than in 
    22 other models and is given in barns where $1 barn = 10^{-24} cm^2$.  $K$ is 
     24other models and is given in barns where 1 $barn = 10^{-24}$ $cm^2$.  $K$ is 
    2325defined as: 
    2426 
     
    2931    a = b_p - (v_p/v_s) b_s 
    3032 
    31 where $b_p$ and $b_s$ are sum of the scattering lengths of the atoms 
    32 constituting the monomer of the polymer and the sum of the scattering lengths 
    33 of the atoms constituting the solvent molecules respectively, and $v_p$ and 
    34 $v_s$ are the partial molar volume of the polymer and the solvent respectively 
    35  
    36 $L_b$ is the Bjerrum length(|Ang|) - **Note:** This parameter needs to be 
    37 kept constant for a given solvent and temperature! 
    38  
    39 $h$ is the virial parameter (|Ang^3|/mol) - **Note:** See [#Borue]_ for the 
    40 correct interpretation of this parameter.  It incorporates second and third 
    41 virial coefficients and can be Negative. 
    42  
    43 $b$ is the monomer length(|Ang|), $C_s$ is the concentration of monovalent 
    44 salt(mol/L), $\alpha$ is the ionization degree (ionization degree : ratio of 
    45 charged monomers  to total number of monomers), $C_a$ is the polymer molar 
    46 concentration(mol/L), and $background$ is the incoherent background. 
     33where: 
     34 
     35- $b_p$ and $b_s$ are **sum of the scattering lengths of the atoms** 
     36  constituting the polymer monomer and the solvent molecules, respectively. 
     37 
     38- $v_p$ and $v_s$ are the partial molar volume of the polymer and the  
     39  solvent, respectively. 
     40 
     41- $L_b$ is the Bjerrum length (|Ang|) - **Note:** This parameter needs to be 
     42  kept constant for a given solvent and temperature! 
     43 
     44- $h$ is the virial parameter (|Ang^3|) - **Note:** See [#Borue]_ for the 
     45  correct interpretation of this parameter.  It incorporates second and third 
     46  virial coefficients and can be *negative*. 
     47 
     48- $b$ is the monomer length (|Ang|). 
     49 
     50- $C_s$ is the concentration of monovalent salt (1/|Ang^3| - internally converted from mol/L). 
     51 
     52- $\alpha$ is the degree of ionization (the ratio of charged monomers to the total  
     53  number of monomers) 
     54 
     55- $C_a$ is the polymer molar concentration (1/|Ang^3| - internally converted from mol/L) 
     56 
     57- $background$ is the incoherent background. 
    4758 
    4859For 2D data the scattering intensity is calculated in the same way as 1D, 
     
    5263 
    5364    q = \sqrt{q_x^2 + q_y^2} 
     65 
     66Validation 
     67---------- 
     68 
     69As of the last revision, this code is believed to be correct.  However it 
     70needs further validation and should be used with caution at this time.  The 
     71history of this code goes back to a 1998 implementation. It was recently noted 
     72that in that implementation, while both the polymer concentration and salt 
     73concentration were converted from experimental units of mol/L to more 
     74dimensionally useful units of 1/|Ang^3|, only the converted version of the 
     75polymer concentration was actually being used in the calculation while the 
     76unconverted salt concentration (still in apparent units of mol/L) was being  
     77used.  This was carried through to Sasmodels as used for SasView 4.1 (though  
     78the line of code converting the salt concentration to the new units was removed  
     79somewhere along the line). Simple dimensional analysis of the calculation shows  
     80that the converted salt concentration should be used, which the original code  
     81suggests was the intention, so this has now been corrected (for SasView 4.2).  
     82Once better validation has been performed this note will be removed. 
    5483 
    5584References 
     
    6695 
    6796* **Author:** NIST IGOR/DANSE **Date:** pre 2010 
    68 * **Last Modified by:** Paul Kienzle **Date:** July 24, 2016 
    69 * **Last Reviewed by:** Paul Butler and Richard Heenan **Date:** October 07, 2016 
     97* **Last Modified by:** Paul Butler **Date:** September 25, 2018 
     98* **Last Reviewed by:** Paul Butler **Date:** September 25, 2018 
    7099""" 
    71100 
     
    92121    ["contrast_factor",       "barns",   10.0,  [-inf, inf], "", "Contrast factor of the polymer"], 
    93122    ["bjerrum_length",        "Ang",      7.1,  [0, inf],    "", "Bjerrum length"], 
    94     ["virial_param",          "Ang^3/mol", 12.0,  [-inf, inf], "", "Virial parameter"], 
     123    ["virial_param",          "Ang^3", 12.0,  [-inf, inf], "", "Virial parameter"], 
    95124    ["monomer_length",        "Ang",     10.0,  [0, inf],    "", "Monomer length"], 
    96125    ["salt_concentration",    "mol/L",    0.0,  [-inf, inf], "", "Concentration of monovalent salt"], 
     
    102131 
    103132def Iq(q, 
    104        contrast_factor=10.0, 
    105        bjerrum_length=7.1, 
    106        virial_param=12.0, 
    107        monomer_length=10.0, 
    108        salt_concentration=0.0, 
    109        ionization_degree=0.05, 
    110        polymer_concentration=0.7): 
     133       contrast_factor, 
     134       bjerrum_length, 
     135       virial_param, 
     136       monomer_length, 
     137       salt_concentration, 
     138       ionization_degree, 
     139       polymer_concentration): 
    111140    """ 
    112     :param q:                     Input q-value 
    113     :param contrast_factor:       Contrast factor of the polymer 
    114     :param bjerrum_length:        Bjerrum length 
    115     :param virial_param:          Virial parameter 
    116     :param monomer_length:        Monomer length 
    117     :param salt_concentration:    Concentration of monovalent salt 
    118     :param ionization_degree:     Degree of ionization 
    119     :param polymer_concentration: Polymer molar concentration 
    120     :return:                      1-D intensity 
     141    :params: see parameter table 
     142    :return: 1-D form factor for polyelectrolytes in low salt 
     143     
     144    parameter names, units, default values, and behavior (volume, sld etc) are 
     145    defined in the parameter table.  The concentrations are converted from 
     146    experimental mol/L to dimensionaly useful 1/A3 in first two lines 
    121147    """ 
    122148 
    123     concentration = polymer_concentration * 6.022136e-4 
    124  
    125     k_square = 4.0 * pi * bjerrum_length * (2*salt_concentration + 
    126                                             ionization_degree * concentration) 
    127  
    128     r0_square = 1.0/ionization_degree/sqrt(concentration) * \ 
     149    concentration_pol = polymer_concentration * 6.022136e-4 
     150    concentration_salt = salt_concentration * 6.022136e-4 
     151 
     152    k_square = 4.0 * pi * bjerrum_length * (2*concentration_salt + 
     153                                            ionization_degree * concentration_pol) 
     154 
     155    r0_square = 1.0/ionization_degree/sqrt(concentration_pol) * \ 
    129156                (monomer_length/sqrt((48.0*pi*bjerrum_length))) 
    130157 
     
    133160 
    134161    term2 = 1.0 + r0_square**2 * (q**2 + k_square) * \ 
    135         (q**2 - (12.0 * virial_param * concentration/(monomer_length**2))) 
     162        (q**2 - (12.0 * virial_param * concentration_pol/(monomer_length**2))) 
    136163 
    137164    return term1/term2 
     
    174201 
    175202    # Accuracy tests based on content in test/utest_other_models.py 
     203    # Note that these should some day be validated beyond this self validation 
     204    # (circular reasoning). -- i.e. the "good value," at least for those with 
     205    # non zero salt concentrations, were obtained by running the current 
     206    # model in SasView and copying the appropriate result here. 
     207    #    PDB -- Sep 26, 2018 
    176208    [{'contrast_factor':       10.0, 
    177209      'bjerrum_length':         7.1, 
     
    184216     }, 0.001, 0.0948379], 
    185217 
    186     # Additional tests with larger range of parameters 
    187218    [{'contrast_factor':       10.0, 
    188219      'bjerrum_length':       100.0, 
    189220      'virial_param':           3.0, 
    190       'monomer_length':         1.0, 
    191       'salt_concentration':    10.0, 
    192       'ionization_degree':      2.0, 
    193       'polymer_concentration': 10.0, 
     221      'monomer_length':         5.0, 
     222      'salt_concentration':     1.0, 
     223      'ionization_degree':      0.1, 
     224      'polymer_concentration':  1.0, 
    194225      'background':             0.0, 
    195      }, 0.1, -3.75693800588], 
     226     }, 0.1, 0.253469484], 
    196227 
    197228    [{'contrast_factor':       10.0, 
    198229      'bjerrum_length':       100.0, 
    199230      'virial_param':           3.0, 
    200       'monomer_length':         1.0, 
    201       'salt_concentration':    10.0, 
    202       'ionization_degree':      2.0, 
    203       'polymer_concentration': 10.0, 
    204       'background':           100.0 
    205      }, 5.0, 100.029142149], 
     231      'monomer_length':         5.0, 
     232      'salt_concentration':     1.0, 
     233      'ionization_degree':      0.1, 
     234      'polymer_concentration':  1.0, 
     235      'background':             1.0, 
     236     }, 0.05, 1.738358122], 
    206237 
    207238    [{'contrast_factor':     100.0, 
    208239      'bjerrum_length':       10.0, 
    209       'virial_param':        180.0, 
    210       'monomer_length':        1.0, 
     240      'virial_param':         12.0, 
     241      'monomer_length':       10.0, 
    211242      'salt_concentration':    0.1, 
    212243      'ionization_degree':     0.5, 
    213244      'polymer_concentration': 0.1, 
    214       'background':             0.0, 
    215      }, 200., 1.80664667511e-06], 
     245      'background':           0.01, 
     246     }, 0.5, 0.012881893], 
    216247    ] 
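A quick sanity check on the mol/L to 1/|Ang^3| conversion factor used in the corrected code above (pure arithmetic, no sasmodels needed):

    # N_A molecules per mole and 1 L = 1e27 Ang^3, so
    # C[1/Ang^3] = C[mol/L] * N_A * 1e-27.
    N_A = 6.022136e23          # value implied by the 6.022136e-4 factor in Iq()
    print(N_A * 1e-27)         # 6.022136e-04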
  • sasmodels/models/fcc_paracrystal.py

    r2d81cfe rda7b26b  
    33#note - calculation requires double precision 
    44r""" 
     5.. warning:: This model and this model description are under review following  
     6             concerns raised by SasView users. If you need to use this model,  
     7             please email help@sasview.org for the latest situation. *The  
     8             SasView Developers. September 2018.* 
     9 
     10Definition 
     11---------- 
     12 
    513Calculates the scattering from a **face-centered cubic lattice** with 
    614paracrystalline distortion. Thermal vibrations are considered to be 
     
    816Paracrystalline distortion is assumed to be isotropic and characterized by 
    917a Gaussian distribution. 
    10  
    11 Definition 
    12 ---------- 
    1318 
    1419The scattering intensity $I(q)$ is calculated as 
     
    2328is the paracrystalline structure factor for a face-centered cubic structure. 
    2429 
    25 Equation (1) of the 1990 reference is used to calculate $Z(q)$, using 
    26 equations (23)-(25) from the 1987 paper for $Z1$, $Z2$, and $Z3$. 
     30Equation (1) of the 1990 reference\ [#CIT1990]_ is used to calculate $Z(q)$, 
     31using equations (23)-(25) from the 1987 paper\ [#CIT1987]_ for $Z1$, $Z2$, and 
     32$Z3$. 
    2733 
    2834The lattice correction (the occupied volume of the lattice) for a 
     
    8894---------- 
    8995 
    90 Hideki Matsuoka et. al. *Physical Review B*, 36 (1987) 1754-1765 
    91 (Original Paper) 
     96.. [#CIT1987] Hideki Matsuoka et. al. *Physical Review B*, 36 (1987) 1754-1765 
     97   (Original Paper) 
     98.. [#CIT1990] Hideki Matsuoka et. al. *Physical Review B*, 41 (1990) 3854 -3856 
     99   (Corrections to FCC and BCC lattice structure calculation) 
    92100 
    93 Hideki Matsuoka et. al. *Physical Review B*, 41 (1990) 3854 -3856 
    94 (Corrections to FCC and BCC lattice structure calculation) 
     101Authorship and Verification 
     102--------------------------- 
     103 
     104* **Author:** NIST IGOR/DANSE **Date:** pre 2010 
     105* **Last Modified by:** Paul Butler **Date:** September 29, 2016 
     106* **Last Reviewed by:** Richard Heenan **Date:** March 21, 2016 
    95107""" 
    96108 
  • sasmodels/models/sc_paracrystal.py

    r2d81cfe rda7b26b  
    11r""" 
     2.. warning:: This model and this model description are under review following  
     3             concerns raised by SasView users. If you need to use this model,  
     4             please email help@sasview.org for the latest situation. *The  
     5             SasView Developers. September 2018.* 
     6              
     7Definition 
     8---------- 
     9 
    210Calculates the scattering from a **simple cubic lattice** with 
    311paracrystalline distortion. Thermal vibrations are considered to be 
     
    513Paracrystalline distortion is assumed to be isotropic and characterized 
    614by a Gaussian distribution. 
    7  
    8 Definition 
    9 ---------- 
    1015 
    1116The scattering intensity $I(q)$ is calculated as 
     
    2025$Z(q)$ is the paracrystalline structure factor for a simple cubic structure. 
    2126 
    22 Equation (16) of the 1987 reference is used to calculate $Z(q)$, using 
    23 equations (13)-(15) from the 1987 paper for Z1, Z2, and Z3. 
     27Equation (16) of the 1987 reference\ [#CIT1987]_ is used to calculate $Z(q)$, 
     28using equations (13)-(15) from the 1987 paper\ [#CIT1987]_ for $Z1$, $Z2$, and 
     29$Z3$. 
    2430 
    2531The lattice correction (the occupied volume of the lattice) for a simple cubic 
     
    9197Reference 
    9298--------- 
    93 Hideki Matsuoka et. al. *Physical Review B,* 36 (1987) 1754-1765 
    94 (Original Paper) 
    9599 
    96 Hideki Matsuoka et. al. *Physical Review B,* 41 (1990) 3854 -3856 
    97 (Corrections to FCC and BCC lattice structure calculation) 
     100.. [#CIT1987] Hideki Matsuoka et. al. *Physical Review B*, 36 (1987) 1754-1765 
     101   (Original Paper) 
     102.. [#CIT1990] Hideki Matsuoka et. al. *Physical Review B*, 41 (1990) 3854 -3856 
     103   (Corrections to FCC and BCC lattice structure calculation) 
     104 
     105Authorship and Verification 
     106--------------------------- 
     107 
     108* **Author:** NIST IGOR/DANSE **Date:** pre 2010 
     109* **Last Modified by:** Paul Butler **Date:** September 29, 2016 
     110* **Last Reviewed by:** Richard Heenan **Date:** March 21, 2016 
    98111""" 
    99112 
  • sasmodels/models/spinodal.py

    r475ff58 r93fe8a1  
    1212where $x=q/q_0$, $q_0$ is the peak position, $I_{max}$ is the intensity  
    1313at $q_0$ (parameterised as the $scale$ parameter), and $B$ is a flat  
    14 background. The spinodal wavelength is given by $2\pi/q_0$.  
     14background. The spinodal wavelength, $\Lambda$, is given by $2\pi/q_0$.  
     15 
     16The definition of $I_{max}$ in the literature varies. Hashimoto *et al* (1991)  
     17define it as  
     18 
     19.. math:: 
     20    I_{max} = \Lambda^3\Delta\rho^2 
     21     
     22whereas Meier & Strobl (1987) give  
     23 
     24.. math:: 
     25    I_{max} = V_z\Delta\rho^2 
     26     
     27where $V_z$ is the volume per monomer unit. 
    1528 
    1629The exponent $\gamma$ is equal to $d+1$ for off-critical concentration  
     
    2841 
    2942H. Furukawa. Dynamics-scaling theory for phase-separating unmixing mixtures: 
    30 Growth rates of droplets and scaling properties of autocorrelation functions. 
    31 Physica A 123,497 (1984). 
     43Growth rates of droplets and scaling properties of autocorrelation functions.  
     44Physica A 123, 497 (1984). 
     45 
     46H. Meier & G. Strobl. Small-Angle X-ray Scattering Study of Spinodal  
     47Decomposition in Polystyrene/Poly(styrene-co-bromostyrene) Blends.  
     48Macromolecules 20, 649-654 (1987). 
     49 
     50T. Hashimoto, M. Takenaka & H. Jinnai. Scattering Studies of Self-Assembling  
     51Processes of Polymer Blends in Spinodal Decomposition.  
     52J. Appl. Cryst. 24, 457-466 (1991). 
    3253 
    3354Revision History 
     
    3556 
    3657* **Author:**  Dirk Honecker **Date:** Oct 7, 2016 
    37 * **Revised:** Steve King    **Date:** Sep 7, 2018 
     58* **Revised:** Steve King    **Date:** Oct 25, 2018 
    3859""" 
    3960 
  • sasmodels/sasview_model.py

    raa25fc7 r12f77e9  
    6767#: set of defined models (standard and custom) 
    6868MODELS = {}  # type: Dict[str, SasviewModelType] 
     69# TODO: remove unused MODEL_BY_PATH cache once sasview no longer references it 
    6970#: custom model {path: model} mapping so we can check timestamps 
    7071MODEL_BY_PATH = {}  # type: Dict[str, SasviewModelType] 
     72#: Track modules that we have loaded so we can determine whether the model 
     73#: has changed since we last reloaded. 
     74_CACHED_MODULE = {}  # type: Dict[str, "module"] 
    7175 
    7276def find_model(modelname): 
     
    111115    Load a custom model given the model path. 
    112116    """ 
    113     model = MODEL_BY_PATH.get(path, None) 
    114     if model is not None and model.timestamp == getmtime(path): 
    115         #logger.info("Model already loaded %s", path) 
    116         return model 
    117  
    118117    #logger.info("Loading model %s", path) 
     118 
     119    # Load the kernel module.  This may already be cached by the loader, so 
      120    # it only requires checking the timestamps of the dependents. 
    119121    kernel_module = custom.load_custom_kernel_module(path) 
    120     if hasattr(kernel_module, 'Model'): 
    121         model = kernel_module.Model 
     122 
     123    # Check if the module has changed since we last looked. 
     124    reloaded = kernel_module != _CACHED_MODULE.get(path, None) 
     125    _CACHED_MODULE[path] = kernel_module 
     126 
      127    # Turn the module into a model.  We need to do this even if the 
     128    # model has already been loaded so that we can determine the model 
     129    # name and retrieve it from the MODELS cache. 
     130    model = getattr(kernel_module, 'Model', None) 
     131    if model is not None: 
    122132        # Old style models do not set the name in the class attributes, so 
    123133        # set it here; this name will be overridden when the object is created 
     
    132142        model_info = modelinfo.make_model_info(kernel_module) 
    133143        model = make_model_from_info(model_info) 
    134     model.timestamp = getmtime(path) 
    135144 
    136145    # If a model name already exists and we are loading a different model, 
     
    148157                    _previous_name, model.name, model.filename) 
    149158 
    150     MODELS[model.name] = model 
    151     MODEL_BY_PATH[path] = model 
    152     return model 
      159    # Only update the model in MODELS if the module has changed or it is 
      160    # not yet cached 
     160    if reloaded or model.name not in MODELS: 
     161        MODELS[model.name] = model 
     162 
     163    return MODELS[model.name] 
    153164 
    154165 
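A minimal usage sketch of the intended caching behaviour (assuming the enclosing
function is *load_custom_model* and that the kernel-module loader returns its
cached module when the file on disk is unchanged; the plugin path below is
hypothetical)::

    from sasmodels.sasview_model import load_custom_model

    Model1 = load_custom_model('plugins/my_model.py')  # first load: module built and cached
    Model2 = load_custom_model('plugins/my_model.py')  # unchanged file: reloaded stays False
    assert Model1 is Model2  # the class already stored in MODELS is handed back

Editing the file between calls should flip *reloaded* to True, so a fresh class
replaces the stale entry in MODELS.
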
     
    377388            hidden.add('background') 
    378389            self._model_info.parameters.defaults['background'] = 0. 
     390 
     391        # Update the parameter lists to exclude any hidden parameters 
     392        self.magnetic_params = tuple(pname for pname in self.magnetic_params 
     393                                     if pname not in hidden) 
     394        self.orientation_params = tuple(pname for pname in self.orientation_params 
     395                                        if pname not in hidden) 
    379396 
    380397        self._persistency_dict = {} 
     
    791808            return value, [value], [1.0] 
    792809 
     810    @classmethod 
     811    def runTests(cls): 
     812        """ 
      813        Run any tests built into the model and capture the test output. 
      814 
      815        Returns a success flag and the captured output. 
     816        """ 
     817        from .model_test import check_model 
     818        return check_model(cls._model_info) 
     819 
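A short usage sketch for the new classmethod (assuming *check_model* returns a
*(passed, output)* pair, as the docstring suggests; *_make_standard_model* is
the helper already used by the tests in this module)::

    from sasmodels.sasview_model import _make_standard_model

    Model = _make_standard_model('cylinder')
    passed, output = Model.runTests()
    if not passed:
        print(output)
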
    793820def test_cylinder(): 
    794821    # type: () -> float 
     
    878905    Model = _make_standard_model('sphere') 
    879906    model = Model() 
    880     model.setParam('M0:sld', 8) 
     907    model.setParam('sld_M0', 8) 
    881908    q = np.linspace(-0.35, 0.35, 500) 
    882909    qx, qy = np.meshgrid(q, q) 
  • sasmodels/special.py

    rdf69efa rfba9ca0  
     113113        The standard math function, tgamma(x), is unstable for $x < 1$ 
    114114        on some platforms. 
     115 
     116    sas_gammaln(x): 
     117        log gamma function sas_gammaln\ $(x) = \log \Gamma(|x|)$. 
     118 
     119        The standard math function, lgamma(x), is incorrect for single 
     120        precision on some platforms. 
     121 
     122    sas_gammainc(a, x), sas_gammaincc(a, x): 
     123        Incomplete gamma function 
     124        sas_gammainc\ $(a, x) = \int_0^x t^{a-1}e^{-t}\,dt / \Gamma(a)$ 
     125        and complementary incomplete gamma function 
     126        sas_gammaincc\ $(a, x) = \int_x^\infty t^{a-1}e^{-t}\,dt / \Gamma(a)$ 
    115127 
    116128    sas_erf(x), sas_erfc(x): 
     
    207219from numpy import pi, nan, inf 
    208220from scipy.special import gamma as sas_gamma 
     221from scipy.special import gammaln as sas_gammaln 
     222from scipy.special import gammainc as sas_gammainc 
     223from scipy.special import gammaincc as sas_gammaincc 
    209224from scipy.special import erf as sas_erf 
    210225from scipy.special import erfc as sas_erfc 
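Because the pure-python module maps these special functions straight onto
*scipy.special* (as in the imports above), their defining relationships can be
sanity-checked numerically; a small illustrative sketch::

    import numpy as np
    from scipy.special import gamma, gammaln, gammainc, gammaincc

    a, x = 2.5, 1.7
    # the regularized lower and upper incomplete gamma functions sum to one
    assert np.isclose(gammainc(a, x) + gammaincc(a, x), 1.0)
    # gammaln is the log of the absolute value of the gamma function
    assert np.isclose(gammaln(a), np.log(abs(gamma(a))))
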
  • setup.py

    r1f991d6 r783e76f  
    2929                return version[1:-1] 
    3030    raise RuntimeError("Could not read version from %s/__init__.py"%package) 
     31 
     32install_requires = ['numpy', 'scipy'] 
     33 
     34if sys.platform=='win32' or sys.platform=='cygwin': 
     35    install_requires.append('tinycc') 
    3136 
    3237setup( 
     
    6166        'sasmodels': ['*.c', '*.cl'], 
    6267    }, 
    63     install_requires=[ 
    64     ], 
     68    install_requires=install_requires, 
    6569    extras_require={ 
     70        'full': ['docutils', 'bumps', 'matplotlib'], 
     71        'server': ['bumps'], 
    6672        'OpenCL': ["pyopencl"], 
    67         'Bumps': ["bumps"], 
    68         'TinyCC': ["tinycc"], 
    6973    }, 
    7074    build_requires=['setuptools'], 
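With the dependency groups defined above, optional features are pulled in
through pip extras; a sketch of the expected commands when installing from a
source checkout (the base install adds numpy and scipy, plus tinycc on
Windows)::

    pip install .
    pip install ".[full]"      # adds docutils, bumps and matplotlib
    pip install ".[OpenCL]"    # adds pyopencl for GPU kernels
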
  • doc/guide/pd/polydispersity.rst

    rd089a00 ra5cb9bc  
    1111-------------------------------------------- 
    1212 
    13 For some models we can calculate the average intensity for a population of  
    14 particles that possess size and/or orientational (ie, angular) distributions.  
    15 In SasView we call the former *polydispersity* but use the parameter *PD* to  
    16 parameterise both. In other words, the meaning of *PD* in a model depends on  
     13For some models we can calculate the average intensity for a population of 
     14particles that possess size and/or orientational (ie, angular) distributions. 
     15In SasView we call the former *polydispersity* but use the parameter *PD* to 
     16parameterise both. In other words, the meaning of *PD* in a model depends on 
     1717the actual parameter it is being applied to. 
    1818 
    19 The resultant intensity is then normalized by the average particle volume such  
     19The resultant intensity is then normalized by the average particle volume such 
    2020that 
    2121 
     
    2424  P(q) = \text{scale} \langle F^* F \rangle / V + \text{background} 
    2525 
    26 where $F$ is the scattering amplitude and $\langle\cdot\rangle$ denotes an  
     26where $F$ is the scattering amplitude and $\langle\cdot\rangle$ denotes an 
    2727average over the distribution $f(x; \bar x, \sigma)$, giving 
    2828 
    2929.. math:: 
    3030 
    31   P(q) = \frac{\text{scale}}{V} \int_\mathbb{R}  
     31  P(q) = \frac{\text{scale}}{V} \int_\mathbb{R} 
    3232  f(x; \bar x, \sigma) F^2(q, x)\, dx + \text{background} 
    3333 
    3434Each distribution is characterized by a center value $\bar x$ or 
    3535$x_\text{med}$, a width parameter $\sigma$ (note this is *not necessarily* 
    36 the standard deviation, so read the description of the distribution carefully),  
    37 the number of sigmas $N_\sigma$ to include from the tails of the distribution,  
    38 and the number of points used to compute the average. The center of the  
    39 distribution is set by the value of the model parameter. 
    40  
    41 The distribution width applied to *volume* (ie, shape-describing) parameters  
    42 is relative to the center value such that $\sigma = \mathrm{PD} \cdot \bar x$.  
    43 However, the distribution width applied to *orientation* parameters is just  
    44 $\sigma = \mathrm{PD}$. 
     36the standard deviation, so read the description carefully), the number of 
     37sigmas $N_\sigma$ to include from the tails of the distribution, and the 
     38number of points used to compute the average. The center of the distribution 
      39is set by the value of the model parameter. The meaning of a polydispersity 
      40parameter *PD* (not to be confused with a molecular weight distribution 
      41in polymer science) in a model depends on the type of parameter it is 
      42being applied to. 
     43 
     44The distribution width applied to *volume* (ie, shape-describing) parameters 
     45is relative to the center value such that $\sigma = \mathrm{PD} \cdot \bar x$. 
     46However, the distribution width applied to *orientation* (ie, angle-describing) 
     47parameters is just $\sigma = \mathrm{PD}$. 
    4548 
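For example, a sketch using the *get_weights* helper shown later in this
document (the numbers are arbitrary, and the *relative* flag is assumed to
distinguish size from orientation parameters): a radius of 50 with PD = 0.1 is
sampled with $\sigma = 0.1 \times 50 = 5$, whereas an angle with PD = 0.1 is
sampled with $\sigma = 0.1$::

    from numpy import inf
    from sasmodels import weights

    # size parameter: relative width, sigma = PD * center = 0.1 * 50 = 5
    x_r, w_r = weights.get_weights('gaussian', n=35, width=0.1, nsigmas=3,
                                   value=50, limits=[0, inf], relative=True)

    # orientation parameter: absolute width, sigma = PD = 0.1
    x_th, w_th = weights.get_weights('gaussian', n=35, width=0.1, nsigmas=3,
                                     value=0, limits=[-inf, inf], relative=False)
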
    4649$N_\sigma$ determines how far into the tails to evaluate the distribution, 
     
    5255 
    5356Users should note that the averaging computation is very intensive. Applying 
    54 polydispersion and/or orientational distributions to multiple parameters at  
    55 the same time, or increasing the number of points in the distribution, will  
    56 require patience! However, the calculations are generally more robust with  
      57polydispersity and/or orientational distributions to multiple parameters at 
     58the same time, or increasing the number of points in the distribution, will 
     59require patience! However, the calculations are generally more robust with 
    5760more data points or more angles. 
    5861 
     
    6669*  *Schulz Distribution* 
    6770*  *Array Distribution* 
     71*  *User-defined Distributions* 
    6872 
    6973These are all implemented as *number-average* distributions. 
    7074 
    71 Additional distributions are under consideration. 
    7275 
    7376**Beware: when the Polydispersity & Orientational Distribution panel in SasView is** 
     
    7578**This may not be suitable. See Suggested Applications below.** 
    7679 
    77 .. note:: In 2009 IUPAC decided to introduce the new term 'dispersity' to replace  
    78            the term 'polydispersity' (see `Pure Appl. Chem., (2009), 81(2),  
    79            351-353 <http://media.iupac.org/publications/pac/2009/pdf/8102x0351.pdf>`_  
    80            in order to make the terminology describing distributions of chemical  
    81            properties unambiguous. However, these terms are unrelated to the  
    82            proportional size distributions and orientational distributions used in  
     80.. note:: In 2009 IUPAC decided to introduce the new term 'dispersity' to replace 
     81           the term 'polydispersity' (see `Pure Appl. Chem., (2009), 81(2), 
     82           351-353 <http://media.iupac.org/publications/pac/2009/pdf/8102x0351.pdf>`_ 
     83           in order to make the terminology describing distributions of chemical 
     84           properties unambiguous. However, these terms are unrelated to the 
     85           proportional size distributions and orientational distributions used in 
    8386           SasView models. 
    8487 
     
    9295or angular orientations, consider using the Gaussian or Boltzmann distributions. 
    9396 
    94 If applying polydispersion to parameters describing angles, use the Uniform  
    95 distribution. Beware of using distributions that are always positive (eg, the  
      97If applying polydispersity to parameters describing angles, use the Uniform 
     98distribution. Beware of using distributions that are always positive (eg, the 
    9699Lognormal) because angles can be negative! 
    97100 
    98 The array distribution allows a user-defined distribution to be applied. 
     101The array distribution provides a very simple means of implementing a user- 
     102defined distribution, but without any fittable parameters. Greater flexibility 
     103is conferred by the user-defined distribution. 
    99104 
    100105.. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ 
     
    334339.. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ 
    335340 
     341User-defined Distributions 
     342^^^^^^^^^^^^^^^^^^^^^^^^^^ 
     343 
      344You can also define your own distribution by creating a Python file defining a 
      345*Dispersion* object with a *_weights* method.  The *_weights* method takes 
     346*center*, *sigma*, *lb* and *ub* as arguments, and can access *self.npts* 
     347and *self.nsigmas* from the distribution.  They are interpreted as follows: 
     348 
     349* *center* the value of the shape parameter (for size dispersity) or zero 
     350  if it is an angular dispersity.  This parameter may be fitted. 
     351 
     352* *sigma* the width of the distribution, which is the polydispersity parameter 
     353  times the center for size dispersity, or the polydispersity parameter alone 
     354  for angular dispersity.  This parameter may be fitted. 
     355 
     356* *lb*, *ub* are the parameter limits (lower & upper bounds) given in the model 
     357  definition file.  For example, a radius parameter has *lb* equal to zero.  A 
     358  volume fraction parameter would have *lb* equal to zero and *ub* equal to one. 
     359 
     360* *self.nsigmas* the distance to go into the tails when evaluating the 
      361  distribution.  For a two-parameter distribution, this value could be 
      362  co-opted for use as the second parameter, though it will not be available 
     363  for fitting. 
     364 
     365* *self.npts* the number of points to use when evaluating the distribution. 
     366  The user will adjust this to trade calculation time for accuracy, but the 
     367  distribution code is free to return more or fewer, or use it for the third 
      368  parameter in a three-parameter distribution. 
     369 
      370As an example, the following code wraps the Laplace distribution from *scipy.stats*:: 
     371 
     372    import numpy as np 
     373    from scipy.stats import laplace 
     374 
     375    from sasmodels import weights 
     376 
     377    class Dispersion(weights.Dispersion): 
     378        r""" 
     379        Laplace distribution 
     380 
     381        .. math:: 
     382 
      383            w(x) = \frac{1}{2\sigma}e^{-|x - \mu|/\sigma} 
     384        """ 
     385        type = "laplace" 
     386        default = dict(npts=35, width=0, nsigmas=3)  # default values 
     387        def _weights(self, center, sigma, lb, ub): 
     388            x = self._linspace(center, sigma, lb, ub) 
     389            wx = laplace.pdf(x, center, sigma) 
     390            return x, wx 
     391 
     392You can plot the weights for a given value and width using the following:: 
     393 
     394    from numpy import inf 
     395    from matplotlib import pyplot as plt 
     396    from sasmodels import weights 
     397 
     398    # reload the user-defined weights 
     399    weights.load_weights() 
     400    x, wx = weights.get_weights('laplace', n=35, width=0.1, nsigmas=3, value=50, 
     401                                limits=[0, inf], relative=True) 
     402 
     403    # plot the weights 
     404    plt.interactive(True) 
     405    plt.plot(x, wx, 'x') 
     406 
     407The *self.nsigmas* and *self.npts* parameters are normally used to control 
     408the accuracy of the distribution integral. The *self._linspace* function 
     409uses them to define the *x* values (along with the *center*, *sigma*, 
     410*lb*, and *ub* which are passed as parameters).  If you repurpose npts or 
     411nsigmas you will need to generate your own *x*.  Be sure to honour the 
     412limits *lb* and *ub*, for example to disallow a negative radius or constrain 
     413the volume fraction to lie between zero and one. 
     414 
     415To activate a user-defined distribution, put it in a file such as *distname.py* 
     416in the *SAS_WEIGHTS_PATH* folder.  This is defined with an environment 
     417variable, defaulting to:: 
     418 
     419    SAS_WEIGHTS_PATH=~/.sasview/weights 
     420 
     421The weights path is loaded on startup.  To update the distribution definition 
     422in a running application you will need to enter the following python commands:: 
     423 
     424    import sasmodels.weights 
     425    sasmodels.weights.load_weights('path/to/distname.py') 
     426 
     427.. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ 
     428 
    336429Note about DLS polydispersity 
    337430^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    338431 
    339 Several measures of polydispersity abound in Dynamic Light Scattering (DLS) and  
    340 it should not be assumed that any of the following can be simply equated with  
     432Several measures of polydispersity abound in Dynamic Light Scattering (DLS) and 
     433it should not be assumed that any of the following can be simply equated with 
    341434the polydispersity *PD* parameter used in SasView. 
    342435 
    343 The dimensionless **Polydispersity Index (PI)** is a measure of the width of the  
    344 distribution of autocorrelation function decay rates (*not* the distribution of  
    345 particle sizes itself, though the two are inversely related) and is defined by  
     436The dimensionless **Polydispersity Index (PI)** is a measure of the width of the 
     437distribution of autocorrelation function decay rates (*not* the distribution of 
     438particle sizes itself, though the two are inversely related) and is defined by 
    346439ISO 22412:2017 as 
    347440 
     
    350443    PI = \mu_{2} / \bar \Gamma^2 
    351444 
    352 where $\mu_\text{2}$ is the second cumulant, and $\bar \Gamma^2$ is the  
      445where $\mu_\text{2}$ is the second cumulant, and $\bar \Gamma$ is the 
    353446intensity-weighted average value, of the distribution of decay rates. 
    354447 
     
    359452    PI = \sigma^2 / 2\bar \Gamma^2 
    360453 
    361 where $\sigma$ is the standard deviation, allowing a **Relative Polydispersity (RP)**  
     454where $\sigma$ is the standard deviation, allowing a **Relative Polydispersity (RP)** 
    362455to be defined as 
    363456 
     
    366459    RP = \sigma / \bar \Gamma = \sqrt{2 \cdot PI} 
    367460 
    368 PI values smaller than 0.05 indicate a highly monodisperse system. Values  
     461PI values smaller than 0.05 indicate a highly monodisperse system. Values 
    369462greater than 0.7 indicate significant polydispersity. 
    370463 
    371 The **size polydispersity P-parameter** is defined as the relative standard  
    372 deviation coefficient of variation   
      464The **size polydispersity P-parameter** is defined as the relative standard 
      465deviation (coefficient of variation) 
    373466 
    374467.. math:: 
     
    377470 
    378471where $\nu$ is the variance of the distribution and $\bar R$ is the mean 
    379 value of $R$. Here, the product $P \bar R$ is *equal* to the standard  
     472value of $R$. Here, the product $P \bar R$ is *equal* to the standard 
    380473deviation of the Lognormal distribution. 
    381474 
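As a quick worked example of the *RP* relation above (illustrative numbers
only), a reported PI of 0.05 corresponds to

.. math::

    RP = \sqrt{2 \times 0.05} \approx 0.32

that is, roughly a 32% relative width in the decay-rate distribution, which,
as noted above, should not be read directly as a SasView *PD* value.
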
  • sasmodels/weights.py

    r3d58247 rf41027b  
    231231)) 
    232232 
     233SAS_WEIGHTS_PATH = "~/.sasview/weights" 
     234def load_weights(pattern=None): 
     235    # type: (str) -> None 
     236    """ 
     237    Load dispersion distributions matching the given glob pattern 
     238    """ 
     239    import logging 
     240    import os 
     241    import os.path 
     242    import glob 
     243    import traceback 
     244    from .custom import load_custom_kernel_module 
     245    if pattern is None: 
     246        path = os.environ.get("SAS_WEIGHTS_PATH", SAS_WEIGHTS_PATH) 
     247        pattern = os.path.join(path, "*.py") 
     248    for filename in sorted(glob.glob(os.path.expanduser(pattern))): 
     249        try: 
     250            #print("loading weights from", filename) 
     251            module = load_custom_kernel_module(filename) 
     252            MODELS[module.Dispersion.type] = module.Dispersion 
     253        except Exception as exc: 
      254            logging.error(traceback.format_exc()) 
    233255 
    234256def get_weights(disperser, n, width, nsigmas, value, limits, relative): 