Changeset 6ab64c9 in sasmodels

Timestamp:
Oct 12, 2017 4:53:26 PM
Branches:
master, core_shell_microgels, magnetic_model, ticket-1257-vesicle-product, ticket_1156, ticket_1265_superball, ticket_822_more_unit_tests
Children:
0d19f42
Parents:
706f466 (diff), 09141ff (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' into ticket-776-orientation

Files:
2 deleted
37 edited

• doc/guide/magnetism/magnetism.rst

 r59485a4: In the magnetism parameter table, the ``Up_theta`` entry changed from $\theta_{up}$ to $\theta_\mathrm{up}$. Surrounding entries for context::

     M0_sld    = $D_M M_0$
     Up_theta  = $\theta_\mathrm{up}$
     M_theta   = $\theta_M$
     M_phi     = $\phi_M$
• doc/guide/pd/polydispersity.rst

 rf8a2baaa: Rewording of the Gaussian distribution description. The distribution

     \exp\left(-\frac{(x - \bar x)^2}{2\sigma^2}\right)

 where $\bar x$ is the mean of the distribution and *Norm* is a normalization factor which is determined during the numerical calculation; the polydispersity is given by $\sigma$. The median value for the distribution will be the value given for the respective size parameter, for example, *radius=60*. Many commercial Dynamic Light Scattering (DLS) instruments produce a size polydispersity parameter, sometimes even given the symbol $p$. This parameter is now defined as the coefficient of variation (previously "relative standard deviation") of the size distribution and is NOT the same as the polydispersity.
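The numerically normalized Gaussian described above can be sketched in a few lines. This is an illustrative helper, not the sasmodels API; the function name, grid width (``nsigma``) and point count (``npts``) are assumptions chosen for the sketch.

```python
import numpy as np

def gaussian_pd_weights(center, sigma, nsigma=3.0, npts=35):
    """Tabulate Gaussian polydispersity weights on a grid of values.

    The distribution exp(-(x - xbar)^2 / (2 sigma^2)) is evaluated at
    npts points within +/- nsigma standard deviations of the center;
    the normalization factor is determined numerically from the sum,
    so the returned weights add up to 1.
    """
    x = center + np.linspace(-nsigma*sigma, nsigma*sigma, npts)
    w = np.exp(-0.5*((x - center)/sigma)**2)
    return x, w/np.sum(w)

# median of the distribution is the given parameter value, e.g. radius=60
values, weights = gaussian_pd_weights(center=60.0, sigma=10.0)
```

With an odd number of points the central grid value sits exactly at the nominal parameter value, matching the statement that the median equals the given size parameter.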
• doc/guide/resolution.rst

 r30b60d2: Folding a resolution contribution into a model calculation/simulation (which by definition will be exact) makes it more representative of what has been measured experimentally, a process called *smearing*; both smearing and desmearing rely on functions to describe the resolution effect. Two fixes: "sasmodels" is capitalized at the start of a sentence ("Sasmodels does the latter", "Sasmodels provides three smearing algorithms", among them *Slit Smearing*), and in the discrete-$q$ description the limit changed: for discrete $q$ values, at the $q$ values of the data points and at the $q$ values extended up to $q_N = q_i + \Delta q_u$ (was $\Delta q_v$), the smeared intensity can be approximately calculated.
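The discrete approximation mentioned above — smeared intensity as a normalized weighted sum of theory values at nearby $q$ points — can be sketched directly. This is a minimal illustration assuming simple Gaussian pinhole weights; it is not the sasmodels smearing code, and ``smear_1d`` and its arguments are invented for the sketch.

```python
import numpy as np

def smear_1d(q, Iq, q_width):
    """Approximate resolution smearing of a 1-D theory curve.

    For each measurement point q_i the smeared intensity is a
    normalized weighted sum of the theory values, using Gaussian
    weights of width q_width centered on q_i.
    """
    q = np.asarray(q, dtype=float)
    Iq = np.asarray(Iq, dtype=float)
    smeared = np.empty_like(Iq)
    for i, qi in enumerate(q):
        w = np.exp(-0.5*((q - qi)/q_width)**2)
        smeared[i] = np.sum(w*Iq)/np.sum(w)   # normalized weighted sum
    return smeared

# a flat curve is unchanged by smearing, since the weights normalize to 1
q_grid = np.linspace(0.01, 0.1, 10)
smeared = smear_1d(q_grid, np.ones_like(q_grid), q_width=0.005)
```

In practice the weight matrix is precomputed once per data set; the loop form above just makes the normalized-sum structure explicit.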
• doc/guide/scripting.rst

 r2e66ef5: Additions to the scripting guide. After::

     $ bumps example/model.py --preview

 a note that bumps and sasmodels are included as part of the SasView distribution, and that on Windows bumps can be called from the cmd prompt as follows::

     SasViewCom bumps.cli example/model.py --preview

 In the "Using sasmodels directly" section, after ``plt.loglog(q, Iq)`` and ``plt.show()``, a note that on Windows this can be called from the cmd prompt using sasview as::

     SasViewCom example/cylinder_eval.py
• example/oriented_usans.py

 r1cd24b4: The example notes "# Spherical particle data, not ellipsoids" and loads the data by name instead of by a hard-coded path::

     sans, usans = load_data('../../sasview/sasview/test/1d_data/latex_smeared.xml')

 becomes::

     sans, usans = load_data('latex_smeared.xml', index='all')
     usans.qmin, usans.qmax = np.min(usans.x), np.max(usans.x)
     usans.mask = (usans.x < 0.0)

• sasmodels/bumps_model.py

 r3330bb4: ``Experiment.__init__`` gains an optional *name* argument, defaulting to the data filename::

     _cache = None  # type: Dict[str, np.ndarray]

     def __init__(self, data, model, cutoff=1e-5, name=None):
         # type: (Data, Model, float) -> None
         # remember inputs so we can inspect from outside
         self.name = data.filename if name is None else name
         self.model = model
         self.cutoff = cutoff

 and the plot method works around oriented USANS 2-D patterns::

     data, theory, resid = self._data, self.theory(), self.residuals()
     # TODO: hack to display oriented usans 2-D pattern
     Iq_calc = self.Iq_calc if isinstance(self.Iq_calc, tuple) else None
     plot_theory(data, theory, resid, view, Iq_calc=Iq_calc)
• sasmodels/compare.py

 r765eb0e: In the parameter summary line, the magnetic angles are now printed theta-first::

     if M0 != 0.:
         line += "  M0:%.3f  mphi:%.1f  mtheta:%.1f" % (M0, mphi, mtheta)

 becomes::

     if M0 != 0.:
         line += "  M0:%.3f  mtheta:%.1f  mphi:%.1f" % (M0, mtheta, mphi)
     return line

• doc/genapi.py

 r2e66ef5: The module table in the API generator gains an entry for batch comparison::

     #('alignment', 'GPU data alignment [unused]'),
     ('bumps_model', 'Bumps interface'),
     ('compare_many', 'Batch compare models on different compute engines'),
     ('compare', 'Compare models on different compute engines'),
     ('conversion_table', 'Model conversion table'),
     ('convert', 'Sasview to sasmodel converter'),
     ('core', 'Model access'),
     ('sasview_model', 'Sasview interface'),
     ('sesans', 'SESANS calculation routines'),
     ('special', 'Special functions library'),
     ('weights', 'Distribution functions'),
     ]

• sasmodels/details.py

 rccd5f01: Several related changes to the kernel call details.

 Imports pick up ``radians`` and extra typing/model-info names::

     import numpy as np  # type: ignore
     from numpy import pi, cos, sin, radians
     try:
         from typing import List, Tuple, Sequence
     except ImportError:
         pass
     else:
         from .modelinfo import ModelInfo
         from .modelinfo import ParameterTable

 In the docstring on spherical coordinates, "the weight associated with a range of phi values" becomes "the weight associated with a range of latitude values": the total circumference decreases as latitude varies from pi r^2 at the equator to 0 at the pole, so the weight needs to be scaled by this circumference, and the scale factor must be updated each time the theta value changes. *theta_par* indicates which of the parameter values is theta.

 When building values and weights, scale and background are skipped and the theta weights are corrected::

     nvalues = kernel.info.parameters.nvalues
     scalars = [(v[0] if len(v) else np.NaN) for v, w in pairs]
     # skipping scale and background when building values and weights
     values, weights = zip(*pairs[2:npars+2]) if npars else ((), ())
     weights = correct_theta_weights(kernel.info.parameters, values, weights)
     length = np.array([len(w) for w in weights])
     offset = np.cumsum(np.hstack((0, length)))
     ...
     return call_details, data, is_magnetic

 The new function (the clamp below uses ``np.maximum`` so that $|\cos\theta|$ is kept at least 1e-6, as the docstring requires)::

     def correct_theta_weights(parameters, values, weights):
         # type: (ParameterTable, Sequence[np.ndarray], Sequence[np.ndarray]) -> Sequence[np.ndarray]
         """
         If there is a theta parameter, update the weights of that parameter
         so that the cosine weighting required for polar integration is
         preserved.  Avoid evaluation strictly at the pole, which would
         otherwise send the weight to zero.

         Note: values and weights do not include scale and background.
         """
         # TODO: document code, explaining why scale and background are skipped.
         # Given that we don't have scale and background in the list, we
         # should be calling the variables something other than values and
         # weights.  Apparently parameters.theta_offset similarly skips scale
         # and background, so the indexing works out.
         if parameters.theta_offset >= 0:
             index = parameters.theta_offset
             theta = values[index]
             # clamp |cos(theta)| away from zero so the pole keeps a weight
             theta_weight = np.maximum(abs(cos(radians(theta))), 1e-6)
             # copy the weights list so we can update it
             weights = list(weights)
             weights[index] = theta_weight*np.asarray(weights[index])
             weights = tuple(weights)
         return weights

 In ``convert_magnetism`` (polar to rectangular coordinates), the degree-to-radian conversion now uses ``radians``::

     scale = mag[:, 0]
     if np.any(scale):
         theta, phi = radians(mag[:, 1]), radians(mag[:, 2])
         cos_theta = cos(theta)
         mag[:, 0] = scale*cos_theta*cos(phi)  # mx

 The ``dispersion_mesh`` docstring is expanded::

     """
     Create a mesh grid of dispersion parameters and weights.

     pars is a list of pairs (values, weights), where the values are the
     individual parameter values at which to evaluate the polydispersity
     distribution and weights are the weights associated with each value.

     Only the volume parameters should be included in this list.  Orientation
     parameters do not affect the calculation of effective radius or volume
     ratio.

     Returns [p1, p2, ...], w where pj is a vector of values for parameter j
     ...
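The cosine weighting described in the docstring above can be sketched standalone. This is an illustrative helper, not the sasmodels function; the name and argument layout are assumptions, and the clamp keeps $|\cos\theta|$ at a small floor so points at the pole do not get exactly zero weight.

```python
import numpy as np

def apply_theta_weighting(theta_values, theta_weights, floor=1e-6):
    """Scale polydispersity weights for a theta parameter by |cos(theta)|.

    Polar integration needs a cos(theta) factor so that rings of constant
    latitude are weighted by their (shrinking) circumference; the factor
    is clamped from below so evaluation at the pole keeps a tiny but
    nonzero weight.
    """
    theta = np.radians(np.asarray(theta_values, dtype=float))
    factor = np.maximum(np.abs(np.cos(theta)), floor)
    return factor*np.asarray(theta_weights, dtype=float)

# equator keeps full weight, 60 degrees is halved, the pole is clamped
w = apply_theta_weighting([0.0, 60.0, 90.0], [1.0, 1.0, 1.0])
```

Applying the factor to the weights once, when they are built, is what lets the compute kernels drop their per-point spherical correction.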

 rbb4b509: Whitespace fix in the 2-D projection macro (``cn  =`` → ``cn =``)::

     SINCOS(phi*M_PI_180, sn, cn); \
     q = sqrt(qx*qx + qy*qy); \
     cn = (q==0. ? 1.0 : (cn*qx + sn*qy)/q * sin(theta*M_PI_180));  \
     sn = sqrt(1 - cn*cn); \
     } while (0)
• sasmodels/kernel_iq.c

 rbde38b5: The spherical correction is removed from the C kernel; the cosine weighting is now folded into the weights when they are built (see details.py in this changeset), so ``theta_par`` is kept only for interface compatibility::

     int32_t num_weights;        // total length of the weights vector
     int32_t num_active;         // number of non-trivial pd loops
     int32_t theta_par;          // id of spherical correction variable (not used)
     } ProblemDetails;

 The code that tracked theta through the polydispersity loops is deleted::

     #if MAX_PD>0
       const int theta_par = details->theta_par;
       const int fast_theta = (theta_par == p0);
       const int slow_theta = (theta_par >= 0 && !fast_theta);
       double spherical_correction = 1.0;
     #else
       // Note: if not polydisperse the weights cancel and we don't need the
       // spherical correction.
       const double spherical_correction = 1.0;
     #endif
     ...
     if (slow_theta) { // Theta is not in inner loop
       spherical_correction = fmax(fabs(cos(M_PI_180*local_values.vector[theta_par])), 1.e-6);
     }
     while (i0 < n0) {
       local_values.vector[p0] = v0[i0];
       double weight0 = w0[i0] * weight1;
       if (fast_theta) { // Theta is in inner loop
         spherical_correction = fmax(fabs(cos(M_PI_180*local_values.vector[p0])), 1.e-6);
       }
     ...

 and the accumulation uses the raw weight (the correction was set at a minimum of 1e-6, otherwise there would be problems looking at models with theta=90)::

     // Note: weight==0 must always be excluded
     if (weight0 > cutoff) {
       const double weight = weight0 * spherical_correction;
       pd_norm += weight * CALL_VOLUME(local_values.table);
       ...
       result[q_index] += weight * scattering;
     }

 becomes::

     // Note: weight==0 must always be excluded
     if (weight0 > cutoff) {
       pd_norm += weight0 * CALL_VOLUME(local_values.table);
       ...
       result[q_index] += weight0 * scattering;
     }
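The comment above notes that without polydispersity "the weights cancel and we don't need the spherical correction." The reason is the normalization used by the kernels, $I = \sum_k w_k F^2_k \,/\, \sum_k w_k V_k$ (the ``pd_norm`` accumulator): any factor applied uniformly to every weight divides out. A tiny numeric demonstration, with illustrative values:

```python
# weights, form-factor-squared values, and volumes for three points
weights = [0.2, 0.5, 0.3]
F2 = [1.0, 4.0, 9.0]
V = [2.0, 2.0, 2.0]

def normalized(ws):
    """I = sum(w * F^2) / sum(w * V), as in the kernel's pd_norm."""
    num = sum(w*f for w, f in zip(ws, F2))
    den = sum(w*v for w, v in zip(ws, V))
    return num/den

correction = 1e-6  # the same factor applied to every weight
plain = normalized(weights)
scaled = normalized([correction*w for w in weights])
```

With a single evaluation point (no polydispersity) the correction is the same on numerator and denominator, so dropping it changes nothing.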
• sasmodels/kernel_iq.cl

 rbde38b5: The same change applied to the OpenCL kernel: ``theta_par`` in ``ProblemDetails`` is marked "// id of spherical correction variable (not used)", the ``fast_theta``/``slow_theta`` tracking and ``spherical_correction`` computation are removed from the polydispersity loops, and the accumulation uses the raw weight::

     // Note: weight==0 must always be excluded
     if (weight0 > cutoff) {
       const double weight = weight0 * spherical_correction;
       pd_norm += weight * CALL_VOLUME(local_values.table);
       #if defined(MAGNETIC) && NUM_MAGNETIC > 0
       ...
       const double scattering = CALL_IQ(q, q_index, local_values.table);
       #endif // !MAGNETIC
       this_result += weight * scattering;
     }

 becomes::

     // Note: weight==0 must always be excluded
     if (weight0 > cutoff) {
       pd_norm += weight0 * CALL_VOLUME(local_values.table);
       ...
       this_result += weight0 * scattering;
     }
• sasmodels/kernelpy.py

 r3b32f8d: The pure-python kernel's polydispersity loop, with its spherical correction handling::

     pd_norm = 0.0
     spherical_correction = 1.0
     partial_weight = np.NaN
     weight = np.NaN
     p0_par = call_details.pd_par[0]
     p0_is_theta = (p0_par == call_details.theta_par)
     p0_length = call_details.pd_length[0]
     p0_index = p0_length
     ...
     parameters[pd_par] = pd_value[pd_offset+pd_index]
     partial_weight = np.prod(pd_weight[pd_offset+pd_index][1:])
     if call_details.theta_par >= 0:
         cor = sin(pi/180 * parameters[call_details.theta_par])
         spherical_correction = max(abs(cor), 1e-6)
     p0_index = loop_index % p0_length
     weight = partial_weight * pd_weight[p0_offset + p0_index]
     parameters[p0_par] = pd_value[p0_offset + p0_index]
     if p0_is_theta:
         cor = cos(pi/180 * parameters[p0_par])
         spherical_correction = max(abs(cor), 1e-6)
     p0_index += 1
     if weight > cutoff:
         # update value and norm
         weight *= spherical_correction
         total += weight * Iq
         pd_norm += weight * form_volume()
• sasmodels/model_test.py

 r65314f7: Model selection for testing now accepts any model kind, not just 'all'::

     suite = unittest.TestSuite()
     if models[0] == 'all':
         skip = models[1:]
         models = list_models()
     else:
         skip = []

 becomes::

     suite = unittest.TestSuite()
     if models[0] in core.KINDS:
         skip = models[1:]
         models = list_models(models[0])
     else:
         skip = []

• sasmodels/models/core_shell_bicelle_elliptical.c

 rdedcf34: The kernel (RKH Dec 2016, based on elliptical_cylinder and core_shell_bicelle; tested against limiting cases of cylinder, elliptical_cylinder, stacked_discs, and core_shell_bicelle) is rewritten. The SLD arguments are renamed::

     double thick_face, double length,
     double rhoc, double rhoh, double rhor, double rhosolv)

 becomes::

     double thick_face, double length,
     double sld_core, double sld_face, double sld_rim, double sld_solvent)

 and the contrast terms are computed from explicit volumes::

     const double r_major = r_minor * x_core;
     const double r2A = 0.5*(square(r_major) + square(r_minor));
     const double r2B = 0.5*(square(r_major) - square(r_minor));
     const double vol1 = M_PI*r_minor*r_major*(2.0*halfheight);
     const double vol2 = M_PI*(r_minor+thick_rim)*(r_major+thick_rim)*2.0*(halfheight+thick_face);
     const double vol3 = M_PI*r_minor*r_major*2.0*(halfheight+thick_face);
     const double dr1 = vol1*(sld_core-sld_face);
     const double dr2 = vol2*(sld_rim-sld_solvent);
     const double dr3 = vol3*(sld_face-sld_rim);

 The 1-D integral is recast in terms of the projected components $q_{ab} = q\sin\theta$ and $q_c = q\cos\theta$::

     // initialize integral
     for (int i=0; i<76; i++) {
         // outer integral over cos_theta, limits 0..1
         const double cos_theta = (Gauss76Z[i] + 1.0)/2.0;
         const double sin_theta = sqrt(1.0 - cos_theta*cos_theta);
         const double qab = q*sin_theta;
         const double qc = q*cos_theta;
         const double si1 = sas_sinx_x(halfheight*qc);
         const double si2 = sas_sinx_x((halfheight+thick_face)*qc);
         double inner_sum = 0.0;
         for (int j=0; j<76; j++) {
             // 76 gauss points for the inner integral over the elliptical
             // cross-section (WAS 20 points, so this may be unnecessarily
             // slow, but playing safe)
             const double phi = (Gauss76Z[j] + 1.0)*M_PI_2;
             const double rr = sqrt(r2A - r2B*cos(phi));
             const double be1 = sas_2J1x_x(rr*qab);
             const double be2 = sas_2J1x_x((rr+thick_rim)*qab);
             const double fq = dr1*si1*be1 + dr2*si2*be2 + dr3*si2*be1;
             inner_sum += Gauss76Wt[j] * fq * fq;
         }
         // now calculate outer integral
         ...
     }

 The 2-D kernel (marked "THIS NEEDS TESTING") likewise takes the renamed SLDs and projects $q$ onto the particle axes::

     double q, xhat, yhat, zhat;
     ORIENT_ASYMMETRIC(qx, qy, theta, phi, psi, q, xhat, yhat, zhat);
     const double qa = q*xhat;
     const double qb = q*yhat;
     const double qc = q*zhat;
     const double dr1 = sld_core-sld_face;
     const double dr2 = sld_rim-sld_solvent;
     const double dr3 = sld_face-sld_rim;
     const double r_major = r_minor*x_core;
     const double halfheight = 0.5*length;
     // Compute effective radius in rotated coordinates
     const double qr_hat = sqrt(square(r_major*qa) + square(r_minor*qb));
     const double qrshell_hat = sqrt(square((r_major+thick_rim)*qa)
                                     + square((r_minor+thick_rim)*qb));
     const double be1 = sas_2J1x_x(qr_hat);
     const double be2 = sas_2J1x_x(qrshell_hat);
     const double si1 = sas_sinx_x(halfheight*qc);
     const double si2 = sas_sinx_x((halfheight + thick_face)*qc);
     const double fq = vol1*dr1*si1*be1 + vol2*dr2*si2*be2 + vol3*dr3*si2*be1;
     return 1.0e-4 * fq*fq;

• sasmodels/models/core_shell_ellipsoid.py

 r30b60d2: ``lib/gfn.c`` is dropped from the source list::

     source = ["lib/sas_3j1x_x.c", "lib/gfn.c", "lib/gauss76.c", "core_shell_ellipsoid.c"]

 becomes::

     source = ["lib/sas_3j1x_x.c", "lib/gauss76.c", "core_shell_ellipsoid.c"]
• sasmodels/models/core_shell_parallelepiped.c

 r92dfe0c: ``form_volume`` now takes the rim thicknesses and includes the rim slabs::

     double form_volume(double length_a, double length_b, double length_c,
                        double thick_rim_a, double thick_rim_b, double thick_rim_c)
     {
         //return length_a * length_b * length_c;
         return length_a * length_b * length_c
                + 2.0 * thick_rim_a * length_b * length_c
                + 2.0 * thick_rim_b * length_a * length_c
                + 2.0 * thick_rim_c * length_a * length_b;
     }

 In ``Iq`` (code converted from functions CSPPKernel and CSParallelepiped in libCylinder.c; the conversion was not completely understood and should be rechecked - Miguel Gonzalez), the unused volume pre-computation comment is consolidated to reference ``form_volume``::

     const double mu = 0.5 * q * length_b;
     // calculate volume before rescaling (in original code, but not used)
     //double vol = form_volume(length_a, length_b, length_c,
     //                         thick_rim_a, thick_rim_b, thick_rim_c);
     // Scale sides by B
     const double a_scaled = length_a / length_b;

 the form-factor comment now reads "sum of square of phase factors"::

     //  correct FF : sum of square of phase factors
     inner_total += Gauss76Wt[j] * form * form;

 and the 2-D kernel uses the projected $q$ components::

     double q, zhat, yhat, xhat;
     ORIENT_ASYMMETRIC(qx, qy, theta, phi, psi, q, xhat, yhat, zhat);
     const double qa = q*xhat;
     const double qb = q*yhat;
     const double qc = q*zhat;

     // cspkernel in csparallelepiped recoded here
     double tc = length_a + 2.0*thick_rim_c;
     // handle arg=0 separately, as sin(t)/t -> 1 as t->0
     double siA = sas_sinx_x(0.5*length_a*qa);
     double siB = sas_sinx_x(0.5*length_b*qb);
     double siC = sas_sinx_x(0.5*length_c*qc);
     double siAt = sas_sinx_x(0.5*ta*qa);
     double siBt = sas_sinx_x(0.5*tb*qb);
     double siCt = sas_sinx_x(0.5*tc*qc);
     // f uses Vin, V1, V2, and V3, and it seems to make more sense than
     // the value previously computed
     ... + drB*siA*(siBt-siB)*siC*V2 + drC*siA*siB*(siCt*siCt-siC)*V3);
     return 1.0e-4 * f * f;
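The rim-volume expression in ``form_volume`` above is just the core box plus six rim slabs, two per axis. A direct transcription, with the function name chosen here for illustration:

```python
def cs_parallelepiped_volume(a, b, c, ta, tb, tc):
    """Volume of a core parallelepiped plus its six rim slabs.

    Matches the form_volume expression above: the core a*b*c plus a pair
    of rim slabs of thickness ta, tb, tc on each pair of opposite faces.
    Corner and edge regions between rims are not included, consistent
    with the model's volume definition.
    """
    return a*b*c + 2.0*ta*b*c + 2.0*tb*a*c + 2.0*tc*a*b
```

With zero rim thicknesses this reduces to the plain parallelepiped volume, which is why the old ``return length_a * length_b * length_c`` line survives only as a comment.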

• sasmodels/models/parallelepiped.c

 rd605080: The 2-D kernel is rewritten in terms of the projected components $q_a$, $q_b$, $q_c$ (the 1-D code, with ``mu = 0.5 * q * length_b``, the sides scaled by B, and the outer Gauss integral over limits 0..1, is unchanged)::

     const double siA = sas_sinx_x(0.5*length_a*q*xhat);
     const double siB = sas_sinx_x(0.5*length_b*q*yhat);
     const double siC = sas_sinx_x(0.5*length_c*q*zhat);

 becomes::

     double q, xhat, yhat, zhat;
     ORIENT_ASYMMETRIC(qx, qy, theta, phi, psi, q, xhat, yhat, zhat);
     const double qa = q*xhat;
     const double qb = q*yhat;
     const double qc = q*zhat;

     const double siA = sas_sinx_x(0.5*length_a*qa);
     const double siB = sas_sinx_x(0.5*length_b*qb);
     const double siC = sas_sinx_x(0.5*length_c*qc);
     const double V = form_volume(length_a, length_b, length_c);
     const double drho = (sld - solvent_sld);
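The projected form above — three sinc factors, one per box axis — is easy to sketch outside the kernel. This is an illustrative sketch only: the function names are invented, and scale, contrast, and volume normalization are omitted.

```python
import numpy as np

def sas_sinx_x(x):
    """sin(x)/x with the limit sin(x)/x -> 1 handled at x = 0."""
    # np.sinc(t) = sin(pi t)/(pi t), so rescale the argument
    return np.sinc(x/np.pi)

def parallelepiped_amplitude(qa, qb, qc, length_a, length_b, length_c):
    """Normalized amplitude F/V of a parallelepiped in oriented form.

    Mirrors the rewritten kernel: q is projected onto the box axes to
    give qa, qb, qc, and the amplitude is the product of the three
    sinc factors.
    """
    siA = sas_sinx_x(0.5*length_a*qa)
    siB = sas_sinx_x(0.5*length_b*qb)
    siC = sas_sinx_x(0.5*length_c*qc)
    return siA*siB*siC

# at q = 0 all three sinc factors are 1
f0 = float(parallelepiped_amplitude(0.0, 0.0, 0.0, 1.0, 2.0, 3.0))
# along an axis, the amplitude vanishes at qa = 2*pi/length_a
fnull = float(parallelepiped_amplitude(3.141592653589793, 0.0, 0.0, 2.0, 2.0, 2.0))
```

Writing the kernel in terms of ``qa``, ``qb``, ``qc`` rather than ``q*xhat`` etc. is what makes the 1-D and 2-D code paths share the same inner expressions.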