Changeset 5c84add in sasview
- Timestamp:
- Oct 1, 2016 8:39:20 AM
- Branches:
- master, ESS_GUI, ESS_GUI_Docs, ESS_GUI_batch_fitting, ESS_GUI_bumps_abstraction, ESS_GUI_iss1116, ESS_GUI_iss879, ESS_GUI_iss959, ESS_GUI_opencl, ESS_GUI_ordering, ESS_GUI_sync_sascalc, costrafo411, magnetic_scatt, release-4.1.1, release-4.1.2, release-4.2.2, release_4.0.1, ticket-1009, ticket-1094-headless, ticket-1242-2d-resolution, ticket-1243, ticket-1249, ticket885, unittest-saveload
- Children:
- 624235e
- Parents:
- 601b93d (diff), 9bbc074 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- Files:
- 9 added
- 26 deleted
- 18 edited
.travis.yml
r937529e → r58918de

     system_site_packages: true
 before_install:
-    - 'if [ $TRAVIS_PYTHON_VERSION == "2.7" ]; then sudo apt-get update;sudo apt-get install python-numpy python-scipy python-matplotlib libhdf5-serial-dev python-h5py fglrx opencl-headers ; fi'
+    - 'if [ $TRAVIS_PYTHON_VERSION == "2.7" ]; then sudo apt-get update;sudo apt-get install python-numpy python-scipy python-matplotlib libhdf5-serial-dev python-h5py fglrx opencl-headers python-pyopencl; fi'

 install:
     - pip install -r build_tools/requirements.txt
-    - pip install pyopencl

 before_script:
…
     - ls -ltr
     - if [ ! -d "utils" ]; then mkdir utils; fi
-    - /bin/sh -xe sasview/build_tools/jenkins_linux_build.sh
+    - /bin/sh -xe sasview/build_tools/travis_build.sh
 #    - /bin/sh -xe sasview/build_tools/jenkins_linux_test.sh
     - export LC_ALL=en_US.UTF-8
docs/sphinx-docs/source/user/sasgui/perspectives/invariant/invariant_help.rst
r49148bb → r9bbc074

 .. by S King, ISIS, during SasView CodeCamp-III in Feb 2015.

-Invariant Calculation Perspective
-=================================
+Invariant Calculation
+=====================

 Description
…
 .. image:: image001.gif

-where *g = Q* for pinhole geometry (SAS) and *g = Qv* (the slit height) for
+where *g = q* for pinhole geometry (SAS) and *g = q*\ :sub:`v` (the slit height) for
 slit geometry (USAS).
…
-Using the perspective
----------------------
+Using invariant analysis
+------------------------

 1) Select *Invariant* from the *Analysis* menu on the SasView toolbar.
…
 3) Select a dataset and use the *Send To* button on the *Data Explorer* to load
-   the dataset into the *Invariant* perspective.
+   the dataset into the *Invariant* panel.

-4) Use the *Customised Input* boxes on the *Invariant* perspective to subtract
+4) Use the *Customised Input* boxes on the *Invariant* panel to subtract
    any background, specify the contrast (i.e. difference in SLDs - this must be
    specified for the eventual value of Q*\ to be on an absolute scale), or to
…
 8) If the value of Q*\ calculated with the extrapolated regions is invalid, a
-   red warning will appear at the top of the *Invariant* perspective panel.
+   red warning will appear at the top of the *Invariant* panel.

 The details of the calculation are available by clicking the *Details*
docs/sphinx-docs/source/user/user.rst
r5a71761 → r20a3c55

    Working with SasView <working>
+
+   Computations with GPU <gpu_computations>
sasview/local_config.py
rd85c194 → r9bbc074

 _corner_image = os.path.join(icon_path, "angles_flat.png")
 _welcome_image = os.path.join(icon_path, "SVwelcome.png")
-_copyright = "(c) 2009 - 2013, UTK, UMD, NIST, ORNL, ISIS, ESS and ILL"
+_copyright = "(c) 2009 - 2016, UTK, UMD, NIST, ORNL, ISIS, ESS and ILL"
sasview/setup_exe.py
r525aaa2 → r9bbc074

         self.version = local_config.__version__
         self.company_name = "SasView.org"
-        self.copyright = "copyright 2009 - 2013"
+        self.copyright = "copyright 2009 - 2016"
         self.name = "SasView"
src/sas/sascalc/dataloader/readers/ascii_reader.py
rb699768 r7d94915 33 33 ## File type 34 34 type_name = "ASCII" 35 35 36 36 ## Wildcards 37 37 type = ["ASCII files (*.txt)|*.txt", … … 41 41 ## List of allowed extensions 42 42 ext = ['.txt', '.TXT', '.dat', '.DAT', '.abs', '.ABS', 'csv', 'CSV'] 43 43 44 44 ## Flag to bypass extension check 45 45 allow_all = True 46 46 47 47 def read(self, path): 48 48 """ 49 49 Load data file 50 50 51 51 :param path: file path 52 53 52 :return: Data1D object, or None 54 53 55 54 :raise RuntimeError: when the file can't be opened 56 55 :raise ValueError: when the length of the data vectors are inconsistent … … 62 61 try: 63 62 # Read in binary mode since GRASP frequently has no-ascii 64 # characters that br akes the open operation63 # characters that breaks the open operation 65 64 input_f = open(path,'rb') 66 65 except: … … 68 67 buff = input_f.read() 69 68 lines = buff.splitlines() 70 71 x = numpy.zeros(0) 72 y = numpy.zeros(0) 73 dy = numpy.zeros(0) 74 dx = numpy.zeros(0) 75 76 #temp. space to sort data 77 tx = numpy.zeros(0) 78 ty = numpy.zeros(0) 69 70 # Arrays for data storage 71 tx = numpy.zeros(0) 72 ty = numpy.zeros(0) 79 73 tdy = numpy.zeros(0) 80 74 tdx = numpy.zeros(0) 81 82 output = Data1D(x, y, dy=dy, dx=dx) 83 self.filename = output.filename = basename 84 85 data_conv_q = None 86 data_conv_i = None 87 88 if has_converter == True and output.x_unit != '1/A': 89 data_conv_q = Converter('1/A') 90 # Test it 91 data_conv_q(1.0, output.x_unit) 92 93 if has_converter == True and output.y_unit != '1/cm': 94 data_conv_i = Converter('1/cm') 95 # Test it 96 data_conv_i(1.0, output.y_unit) 97 75 98 76 # The first good line of data will define whether 99 77 # we have 2-column or 3-column ascii 100 78 has_error_dx = None 101 79 has_error_dy = None 102 80 103 81 #Initialize counters for data lines and header lines. 
104 is_data = False # Has more than 5 lines82 is_data = False 105 83 # More than "5" lines of data is considered as actual 106 84 # data unless that is the only data 107 m um_data_lines = 585 min_data_pts = 5 108 86 # To count # of current data candidate lines 109 i = -187 candidate_lines = 0 110 88 # To count total # of previous data candidate lines 111 i1 = -1 112 # To count # of header lines 113 j = -1 114 # Helps to count # of header lines 115 j1 = -1 116 #minimum required number of columns of data; ( <= 4). 89 candidate_lines_previous = 0 90 #minimum required number of columns of data 117 91 lentoks = 2 118 92 for line in lines: 119 # Initial try for CSV (split on ,) 120 toks = line.split(',') 121 # Now try SCSV (split on ;) 122 if len(toks) < 2: 123 toks = line.split(';') 124 # Now go for whitespace 125 if len(toks) < 2: 126 toks = line.split() 93 toks = self.splitline(line) 94 # To remember the # of columns in the current line of data 95 new_lentoks = len(toks) 127 96 try: 97 if new_lentoks == 1 and not is_data: 98 ## If only one item in list, no longer data 99 raise ValueError 100 elif new_lentoks == 0: 101 ## If the line is blank, skip and continue on 102 ## In case of breaks within data sets. 103 continue 104 elif new_lentoks != lentoks and is_data: 105 ## If a footer is found, break the loop and save the data 106 break 107 elif new_lentoks != lentoks and not is_data: 108 ## If header lines are numerical 109 candidate_lines = 0 110 candidate_lines_previous = 0 111 128 112 #Make sure that all columns are numbers. 
129 113 for colnum in range(len(toks)): 114 # Any non-floating point values throw ValueError 130 115 float(toks[colnum]) 131 116 117 candidate_lines += 1 132 118 _x = float(toks[0]) 133 119 _y = float(toks[1]) 134 135 #Reset the header line counters 136 if j == j1: 137 j = 0 138 j1 = 0 139 140 if i > 1: 120 _dx = None 121 _dy = None 122 123 #If 5 or more lines, this is considering the set data 124 if candidate_lines >= min_data_pts: 141 125 is_data = True 142 143 if data_conv_q is not None: 144 _x = data_conv_q(_x, units=output.x_unit) 145 146 if data_conv_i is not None: 147 _y = data_conv_i(_y, units=output.y_unit) 148 149 # If we have an extra token, check 150 # whether it can be interpreted as a 151 # third column. 152 _dy = None 153 if len(toks) > 2: 154 try: 155 _dy = float(toks[2]) 156 157 if data_conv_i is not None: 158 _dy = data_conv_i(_dy, units=output.y_unit) 159 160 except: 161 # The third column is not a float, skip it. 162 pass 163 164 # If we haven't set the 3rd column 165 # flag, set it now. 166 if has_error_dy == None: 167 has_error_dy = False if _dy == None else True 168 169 #Check for dx 170 _dx = None 171 if len(toks) > 3: 172 try: 173 _dx = float(toks[3]) 174 175 if data_conv_i is not None: 176 _dx = data_conv_i(_dx, units=output.x_unit) 177 178 except: 179 # The 4th column is not a float, skip it. 180 pass 181 182 # If we haven't set the 3rd column 183 # flag, set it now. 184 if has_error_dx == None: 185 has_error_dx = False if _dx == None else True 186 187 #After talked with PB, we decided to take care of only 188 # 4 columns of data for now. 189 #number of columns in the current line 190 #To remember the # of columns in the current 191 #line of data 192 new_lentoks = len(toks) 193 194 #If the previous columns not equal to the current, 195 #mark the previous as non-data and reset the dependents. 
196 if lentoks != new_lentoks: 197 if is_data == True: 198 break 199 else: 200 i = -1 201 i1 = 0 202 j = -1 203 j1 = -1 204 205 #Delete the previously stored lines of data candidates 206 # if is not data. 207 if i < 0 and -1 < i1 < mum_data_lines and \ 208 is_data == False: 209 try: 210 x = numpy.zeros(0) 211 y = numpy.zeros(0) 212 except: 213 pass 214 215 x = numpy.append(x, _x) 216 y = numpy.append(y, _y) 217 218 if has_error_dy == True: 219 #Delete the previously stored lines of 220 # data candidates if is not data. 221 if i < 0 and -1 < i1 < mum_data_lines and \ 222 is_data == False: 223 try: 224 dy = numpy.zeros(0) 225 except: 226 pass 227 dy = numpy.append(dy, _dy) 228 229 if has_error_dx == True: 230 #Delete the previously stored lines of 231 # data candidates if is not data. 232 if i < 0 and -1 < i1 < mum_data_lines and \ 233 is_data == False: 234 try: 235 dx = numpy.zeros(0) 236 except: 237 pass 238 dx = numpy.append(dx, _dx) 239 240 #Same for temp. 241 #Delete the previously stored lines of data candidates 242 # if is not data. 
243 if i < 0 and -1 < i1 < mum_data_lines and\ 126 127 # If a 3rd row is present, consider it dy 128 if new_lentoks > 2: 129 _dy = float(toks[2]) 130 has_error_dy = False if _dy == None else True 131 132 # If a 4th row is present, consider it dx 133 if new_lentoks > 3: 134 _dx = float(toks[3]) 135 has_error_dx = False if _dx == None else True 136 137 # Delete the previously stored lines of data candidates if 138 # the list is not data 139 if candidate_lines == 1 and -1 < candidate_lines_previous < min_data_pts and \ 244 140 is_data == False: 245 141 try: 246 142 tx = numpy.zeros(0) 247 143 ty = numpy.zeros(0) 144 tdy = numpy.zeros(0) 145 tdx = numpy.zeros(0) 248 146 except: 249 147 pass 250 148 149 if has_error_dy == True: 150 tdy = numpy.append(tdy, _dy) 151 if has_error_dx == True: 152 tdx = numpy.append(tdx, _dx) 251 153 tx = numpy.append(tx, _x) 252 154 ty = numpy.append(ty, _y) 253 254 if has_error_dy == True: 255 #Delete the previously stored lines of 256 # data candidates if is not data. 257 if i < 0 and -1 < i1 < mum_data_lines and \ 258 is_data == False: 259 try: 260 tdy = numpy.zeros(0) 261 except: 262 pass 263 tdy = numpy.append(tdy, _dy) 264 if has_error_dx == True: 265 #Delete the previously stored lines of 266 # data candidates if is not data. 
267 if i < 0 and -1 < i1 < mum_data_lines and \ 268 is_data == False: 269 try: 270 tdx = numpy.zeros(0) 271 except: 272 pass 273 tdx = numpy.append(tdx, _dx) 274 275 #reset i1 and flag lentoks for the next 276 if lentoks < new_lentoks: 277 if is_data == False: 278 i1 = -1 155 279 156 #To remember the # of columns on the current line 280 157 # for the next line of data 281 lentoks = len(toks) 282 283 #Reset # of header lines and counts # 284 # of data candidate lines 285 if j == 0 and j1 == 0: 286 i1 = i + 1 287 i += 1 288 except: 158 lentoks = new_lentoks 159 candidate_lines_previous = candidate_lines 160 except ValueError: 289 161 # It is data and meet non - number, then stop reading 290 162 if is_data == True: 291 163 break 292 164 lentoks = 2 293 #Counting # of header lines 294 j += 1 295 if j == j1 + 1: 296 j1 = j 297 else: 298 j = -1 165 has_error_dx = None 166 has_error_dy = None 299 167 #Reset # of lines of data candidates 300 i = -1 301 302 # Couldn't parse this line, skip it 168 candidate_lines = 0 169 except: 303 170 pass 304 171 305 172 input_f.close() 173 if not is_data: 174 return None 306 175 # Sanity check 307 if has_error_dy == True and not len( y) == len(dy):176 if has_error_dy == True and not len(ty) == len(tdy): 308 177 msg = "ascii_reader: y and dy have different length" 309 178 raise RuntimeError, msg 310 if has_error_dx == True and not len( x) == len(dx):179 if has_error_dx == True and not len(tx) == len(tdx): 311 180 msg = "ascii_reader: y and dy have different length" 312 181 raise RuntimeError, msg 313 182 # If the data length is zero, consider this as 314 183 # though we were not able to read the file. 315 if len( x) == 0:184 if len(tx) == 0: 316 185 raise RuntimeError, "ascii_reader: could not load file" 317 186 318 187 #Let's re-order the data to make cal. 
319 188 # curve look better some cases 320 189 ind = numpy.lexsort((ty, tx)) 190 x = numpy.zeros(len(tx)) 191 y = numpy.zeros(len(ty)) 192 dy = numpy.zeros(len(tdy)) 193 dx = numpy.zeros(len(tdx)) 194 output = Data1D(x, y, dy=dy, dx=dx) 195 self.filename = output.filename = basename 196 321 197 for i in ind: 322 198 x[i] = tx[ind[i]] … … 338 214 output.dx = dx[x != 0] if has_error_dx == True\ 339 215 else numpy.zeros(len(output.x)) 340 341 if data_conv_q is not None: 342 output.xaxis("\\rm{Q}", output.x_unit) 343 else: 344 output.xaxis("\\rm{Q}", 'A^{-1}') 345 if data_conv_i is not None: 346 output.yaxis("\\rm{Intensity}", output.y_unit) 347 else: 348 output.yaxis("\\rm{Intensity}", "cm^{-1}") 349 216 217 output.xaxis("\\rm{Q}", 'A^{-1}') 218 output.yaxis("\\rm{Intensity}", "cm^{-1}") 219 350 220 # Store loading process information 351 221 output.meta_data['loader'] = self.type_name … … 353 223 raise RuntimeError, "%s is empty" % path 354 224 return output 355 225 356 226 else: 357 227 raise RuntimeError, "%s is not a file" % path 358 228 return None 229 230 def splitline(self, line): 231 """ 232 Splits a line into pieces based on common delimeters 233 :param line: A single line of text 234 :return: list of values 235 """ 236 # Initial try for CSV (split on ,) 237 toks = line.split(',') 238 # Now try SCSV (split on ;) 239 if len(toks) < 2: 240 toks = line.split(';') 241 # Now go for whitespace 242 if len(toks) < 2: 243 toks = line.split() 244 return toks -
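The ascii_reader refactor above extracts the delimiter handling into a new `splitline()` helper: try a comma split first (CSV), fall back to semicolons (SCSV), and finally to whitespace, accepting the first split that yields at least two tokens. The sketch below restates that fallback as a standalone function (not the SasView class method) so the behaviour can be seen in isolation; it also mirrors the diff's heuristic of requiring at least `min_data_pts = 5` consecutive numeric lines before treating a block as data.

```python
def splitline(line):
    """Split a line on the first delimiter that yields >= 2 tokens:
    comma (CSV), then semicolon (SCSV), then any whitespace."""
    toks = line.split(',')
    if len(toks) < 2:
        toks = line.split(';')
    if len(toks) < 2:
        toks = line.split()
    return toks
```

For example, `splitline("0.001;0.01")` falls through the comma split (one token) and succeeds on the semicolon split, returning two tokens.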
src/sas/sasgui/guiframe/media/graph_help.rst
re68c9bf → rf9b0c81

 plot window.

-*NOTE! If a residuals graph (when fitting data) is hidden, it will not show up
-after computation.*
+.. note::
+    *If a residuals graph (when fitting data) is hidden, it will not show up
+    after computation.*

 Dragging a plot
…
 After zooming in on a a region, the *left arrow* or *right arrow* buttons on
 the toolbar will switch between recent views.
+
+The axis range can also be specified manually. To do so go to the *Graph Menu*
+(see Invoking_the_graph_menu_ for further details), choose the *Set Graph Range*
+option and enter the limits in the pop box.

 *NOTE! If a wheel mouse is available scrolling the wheel will zoom in/out
…
-From the *Graph Menu* (see Invoking_the_graph_menu_) it is also possible to
-make some custom modifications to plots, including:
+It is possible to make custom modifications to plots including:

 * changing the plot window title
-* changing the axis legend locations
-* changing the axis legend label text
-* changing the axis legend label units
-* changing the axis legend label font & font colour
+* changing the default legend location and toggling it on/off
+* changing the axis label text
+* changing the axis label units
+* changing the axis label font & font colour
 * adding/removing a text string
 * adding a grid overlay
+
+The legend and text strings can be drag and dropped around the plot
+
+These options are accessed through the *Graph Menu* (see Invoking_the_graph_menu_)
+and selecting *Modify Graph Appearance* (for axis labels, grid overlay and
+legend position) or *Add Text* to add textual annotations, selecting font, color,
+style and size. *Remove Text* will remove the last annotation added. To change
+the legend. *Window Title* allows a custom title to be entered instead of Graph
+x.

 Changing scales
…
 selected data will be removed from the plot.

-*NOTE! This action cannot be undone.*
+.. note::
+    The Remove data set action cannot be undone.

 Show-Hide error bars
…
 In the *Dataset Menu* (see Invoking_the_dataset_menu_), select *Modify Plot
 Property* to change the size, color, or shape of the displayed marker for the
-chosen dataset, or to change the dataset label that appears on the plot.
+chosen dataset, or to change the dataset label that appears in the plot legend
+box.
…
 average.

-*NOTE! The displayed average only updates when input focus is moved back to
-that window; ie, when the mouse pointer is moved onto that plot.*
+.. note::
+    The displayed average only updates when input focus is moved back to
+    that window; ie, when the mouse pointer is moved onto that plot.

 Selecting *Box Sum* automatically brings up the 'Slicer Parameters' dialog in
…
-.. note:: This help document was last changed by Steve King, 01May2015
+.. note:: This help document was last modified by Paul Butler, 05 September, 2016
src/sas/sasgui/guiframe/utils.py
rd85c194 → ra0373d5

     return flag


+def check_int(item):
+    """
+    :param item: txtcrtl containing a value
+    """
+    flag = True
+    try:
+        mini = int(item.GetValue())
+        item.SetBackgroundColour(wx.WHITE)
+        item.Refresh()
+    except:
+        flag = False
+        item.SetBackgroundColour("pink")
+        item.Refresh()
+    return flag
+
+
 class PanelMenu(wx.Menu):
     """
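The new `check_int()` added above follows the same pattern as the existing `check_float()`: try to parse the control's text, repaint the control white on success and pink on failure, and return a validity flag. A toolkit-free sketch of that pattern, with a hypothetical `FakeCtrl` standing in for the wx text control so no GUI is needed:

```python
class FakeCtrl:
    """Hypothetical stand-in for a wx text control (not part of SasView)."""
    def __init__(self, value):
        self._value = value
        self.colour = None
    def GetValue(self):
        return self._value
    def SetBackgroundColour(self, colour):
        self.colour = colour
    def Refresh(self):
        pass  # a real wx control would repaint itself here

def check_int(item):
    """Return True if the control holds an integer; colour it to match."""
    try:
        int(item.GetValue())            # raises ValueError on non-integers
        item.SetBackgroundColour("white")
        item.Refresh()
        return True
    except ValueError:
        item.SetBackgroundColour("pink")
        item.Refresh()
        return False
```

Note that `int("4.2")` raises, so a float in an integer field (e.g. an Npts box) is flagged pink rather than silently truncated.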
src/sas/sasgui/perspectives/fitting/basepage.py
ree4b3cb r6c382da 17 17 from wx.lib.scrolledpanel import ScrolledPanel 18 18 19 import sasmodels.sasview_model 19 from sasmodels.weights import MODELS as POLYDISPERSITY_MODELS 20 20 21 from sas.sasgui.guiframe.panel_base import PanelBase 21 from sas.sasgui.guiframe.utils import format_number, check_float, IdList 22 from sas.sasgui.guiframe.utils import format_number, check_float, IdList, check_int 22 23 from sas.sasgui.guiframe.events import PanelOnFocusEvent 23 24 from sas.sasgui.guiframe.events import StatusEvent … … 626 627 self.disp_help_bt.Bind(wx.EVT_BUTTON, self.on_pd_help_clicked, 627 628 id=self.disp_help_bt.GetId()) 628 self.disp_help_bt.SetToolTipString("Help s for Polydispersion.")629 self.disp_help_bt.SetToolTipString("Help for polydispersion.") 629 630 630 631 self.Bind(wx.EVT_RADIOBUTTON, self._set_dipers_Param, … … 932 933 if len(self._disp_obj_dict) > 0: 933 934 for k, v in self._disp_obj_dict.iteritems(): 934 self.state._disp_obj_dict[k] = v 935 self.state._disp_obj_dict[k] = v.type 935 936 936 937 self.state.values = copy.deepcopy(self.values) … … 1009 1010 if len(self._disp_obj_dict) > 0: 1010 1011 for k, v in self._disp_obj_dict.iteritems(): 1011 self.state._disp_obj_dict[k] = v 1012 self.state._disp_obj_dict[k] = v.type 1012 1013 1013 1014 self.state.values = copy.deepcopy(self.values) … … 1123 1124 state.disp_cb_dict[item]) 1124 1125 # Create the dispersion objects 1125 from sas.models.dispersion_models import ArrayDispersion 1126 disp_model = ArrayDispersion() 1126 disp_model = POLYDISPERSITY_MODELS['array']() 1127 1127 if hasattr(state, "values") and \ 1128 1128 self.disp_cb_dict[item].GetValue() == True: … … 1379 1379 self.weights = copy.deepcopy(state.weights) 1380 1380 1381 for key, disp in state._disp_obj_dict.iteritems(): 1382 # From saved file, disp_model can not be sent in model obj. 1383 # it will be sent as a string here, then converted to model object. 
1384 if disp.__class__.__name__ == 'str': 1385 disp_model = None 1386 com_str = "from sasmodels.weights " 1387 com_str += "import %s as disp_func \ndisp_model = disp_func()" 1388 exec com_str % disp 1389 else: 1390 disp_model = disp 1381 for key, disp_type in state._disp_obj_dict.iteritems(): 1382 #disp_model = disp 1383 disp_model = POLYDISPERSITY_MODELS[disp_type]() 1391 1384 self._disp_obj_dict[key] = disp_model 1392 1385 param_name = key.split('.')[0] … … 2281 2274 continue 2282 2275 2283 name = str(item[1]) 2284 if name.endswith(".npts") or name.endswith(".nsigmas"): 2276 value_ctrl = item[2] 2277 if not value_ctrl.IsEnabled(): 2278 # ArrayDispersion disables PD, Min, Max, Npts, Nsigs 2285 2279 continue 2286 2280 2287 # Check that min, max and value are floats 2288 value_ctrl, min_ctrl, max_ctrl = item[2], item[5], item[6] 2289 min_str = min_ctrl.GetValue().strip() 2290 max_str = max_ctrl.GetValue().strip() 2281 name = item[1] 2291 2282 value_str = value_ctrl.GetValue().strip() 2292 validity = check_float(value_ctrl) 2293 if min_str != "": 2294 validity = validity and check_float(min_ctrl) 2295 if max_str != "": 2296 validity = validity and check_float(max_ctrl) 2297 if not validity: 2298 continue 2299 2300 # Check that min is less than max 2301 low = -numpy.inf if min_str == "" else float(min_str) 2302 high = numpy.inf if max_str == "" else float(max_str) 2303 if high < low: 2304 min_ctrl.SetBackgroundColour("pink") 2305 min_ctrl.Refresh() 2306 max_ctrl.SetBackgroundColour("pink") 2307 max_ctrl.Refresh() 2308 #msg = "Invalid fit range for %s: min must be smaller than max"%name 2309 #wx.PostEvent(self._manager.parent, StatusEvent(status=msg)) 2310 continue 2311 2312 # Force value between min and max 2313 value = float(value_str) 2314 if value < low: 2315 value = low 2316 value_ctrl.SetValue(format_number(value)) 2317 elif value > high: 2318 value = high 2319 value_ctrl.SetValue(format_number(value)) 2283 if name.endswith(".npts"): 2284 validity = 
check_int(value_ctrl) 2285 if not validity: 2286 continue 2287 value = int(value_str) 2288 2289 elif name.endswith(".nsigmas"): 2290 validity = check_float(value_ctrl) 2291 if not validity: 2292 continue 2293 value = float(value_str) 2294 2295 else: # value or polydispersity 2296 2297 # Check that min, max and value are floats 2298 min_ctrl, max_ctrl = item[5], item[6] 2299 min_str = min_ctrl.GetValue().strip() 2300 max_str = max_ctrl.GetValue().strip() 2301 validity = check_float(value_ctrl) 2302 if min_str != "": 2303 validity = validity and check_float(min_ctrl) 2304 if max_str != "": 2305 validity = validity and check_float(max_ctrl) 2306 if not validity: 2307 continue 2308 2309 # Check that min is less than max 2310 low = -numpy.inf if min_str == "" else float(min_str) 2311 high = numpy.inf if max_str == "" else float(max_str) 2312 if high < low: 2313 min_ctrl.SetBackgroundColour("pink") 2314 min_ctrl.Refresh() 2315 max_ctrl.SetBackgroundColour("pink") 2316 max_ctrl.Refresh() 2317 #msg = "Invalid fit range for %s: min must be smaller than max"%name 2318 #wx.PostEvent(self._manager.parent, StatusEvent(status=msg)) 2319 continue 2320 2321 # Force value between min and max 2322 value = float(value_str) 2323 if value < low: 2324 value = low 2325 value_ctrl.SetValue(format_number(value)) 2326 elif value > high: 2327 value = high 2328 value_ctrl.SetValue(format_number(value)) 2329 2330 if name not in self.model.details.keys(): 2331 self.model.details[name] = ["", None, None] 2332 old_low, old_high = self.model.details[name][1:3] 2333 if old_low != low or old_high != high: 2334 # The configuration has changed but it won't change the 2335 # computed curve so no need to set is_modified to True 2336 #is_modified = True 2337 self.model.details[name][1:3] = low, high 2320 2338 2321 2339 # Update value in model if it has changed … … 2323 2341 self.model.setParam(name, value) 2324 2342 is_modified = True 2325 2326 if name not in self.model.details.keys():2327 
self.model.details[name] = ["", None, None]2328 old_low, old_high = self.model.details[name][1:3]2329 if old_low != low or old_high != high:2330 # The configuration has changed but it won't change the2331 # computed curve so no need to set is_modified to True2332 #is_modified = True2333 self.model.details[name][1:3] = low, high2334 2343 2335 2344 return is_modified … … 2504 2513 self._disp_obj_dict[name1] = disp_model 2505 2514 self.model.set_dispersion(param_name, disp_model) 2506 self.state._disp_obj_dict[name1] = disp_model 2515 self.state._disp_obj_dict[name1] = disp_model.type 2507 2516 2508 2517 value1 = str(format_number(self.model.getParam(name1), True)) … … 2527 2536 item[0].Enable() 2528 2537 item[2].Enable() 2538 item[3].Show(True) 2539 item[4].Show(True) 2529 2540 item[5].Enable() 2530 2541 item[6].Enable() … … 2619 2630 self._disp_obj_dict[name] = disp 2620 2631 self.model.set_dispersion(name.split('.')[0], disp) 2621 self.state._disp_obj_dict[name] = disp 2632 self.state._disp_obj_dict[name] = disp.type 2622 2633 self.values[name] = values 2623 2634 self.weights[name] = weights … … 2687 2698 :param disp_function: dispersion distr. function 2688 2699 """ 2689 # List of the poly_model name in the combobox2690 list = ["RectangleDispersion", "ArrayDispersion",2691 "LogNormalDispersion", "GaussianDispersion",2692 "SchulzDispersion"]2693 2694 2700 # Find the selection 2695 try: 2696 selection = list.index(disp_func.__class__.__name__) 2697 return selection 2698 except: 2699 return 3 2701 if disp_func is not None: 2702 try: 2703 return POLYDISPERSITY_MODELS.values().index(disp_func.__class__) 2704 except ValueError: 2705 pass # Fall through to default class 2706 return POLYDISPERSITY_MODELS.keys().index('gaussian') 2700 2707 2701 2708 def on_reset_clicked(self, event): … … 3284 3291 pd = content[name][1] 3285 3292 if name.count('.') > 0: 3293 # If this is parameter.width, then pd may be a floating 3294 # point value or it may be an array distribution. 
3295 # Nothing to do for parameter.npts or parameter.nsigmas. 3286 3296 try: 3287 3297 float(pd) 3288 except: 3298 if name.endswith('.npts'): 3299 pd = int(pd) 3300 except Exception: 3289 3301 #continue 3290 3302 if not pd and pd != '': … … 3294 3306 # Only array func has pd == '' case. 3295 3307 item[2].Enable(False) 3308 else: 3309 item[2].Enable(True) 3296 3310 if item[2].__class__.__name__ == "ComboBox": 3297 3311 if content[name][1] in self.model.fun_list: … … 3320 3334 pd = value[0] 3321 3335 if name.count('.') > 0: 3336 # If this is parameter.width, then pd may be a floating 3337 # point value or it may be an array distribution. 3338 # Nothing to do for parameter.npts or parameter.nsigmas. 3322 3339 try: 3323 3340 pd = float(pd) 3341 if name.endswith('.npts'): 3342 pd = int(pd) 3324 3343 except: 3325 3344 #continue … … 3330 3349 # Only array func has pd == '' case. 3331 3350 item[2].Enable(False) 3351 else: 3352 item[2].Enable(True) 3332 3353 if item[2].__class__.__name__ == "ComboBox": 3333 3354 if value[0] in self.model.fun_list: … … 3349 3370 Helps get paste for poly function 3350 3371 3351 :param item: Gui param items 3352 :param value: the values for parameter ctrols 3353 """ 3354 is_array = False 3355 if len(value[1]) > 0: 3356 # Only for dispersion func.s 3357 try: 3358 item[7].SetValue(value[1]) 3359 selection = item[7].GetCurrentSelection() 3360 name = item[7].Name 3361 param_name = name.split('.')[0] 3362 dispersity = item[7].GetClientData(selection) 3363 disp_model = dispersity() 3364 # Only for array disp 3365 try: 3366 pd_vals = numpy.array(value[2]) 3367 pd_weights = numpy.array(value[3]) 3368 if len(pd_vals) > 0 and len(pd_vals) > 0: 3369 if len(pd_vals) == len(pd_weights): 3370 self._set_disp_array_cb(item=item) 3371 self._set_array_disp_model(name=name, 3372 disp=disp_model, 3373 values=pd_vals, 3374 weights=pd_weights) 3375 is_array = True 3376 except Exception: 3377 logging.error(traceback.format_exc()) 3378 if not is_array: 3379 
self._disp_obj_dict[name] = disp_model 3380 self.model.set_dispersion(name, 3381 disp_model) 3382 self.state._disp_obj_dict[name] = \ 3383 disp_model 3384 self.model.set_dispersion(param_name, disp_model) 3385 self.state.values = self.values 3386 self.state.weights = self.weights 3387 self.model._persistency_dict[param_name] = \ 3388 [self.state.values, 3389 self.state.weights] 3390 3391 except Exception: 3392 logging.error(traceback.format_exc()) 3393 print "Error in BasePage._paste_poly_help: %s" % \ 3394 sys.exc_info()[1] 3395 3396 def _set_disp_array_cb(self, item): 3372 *item* is the parameter name 3373 3374 *value* depends on which parameter is being processed, and whether it 3375 has array polydispersity. 3376 3377 For parameters without array polydispersity: 3378 3379 parameter => ['FLOAT', ''] 3380 parameter.width => ['FLOAT', 'DISTRIBUTION', ''] 3381 parameter.npts => ['FLOAT', ''] 3382 parameter.nsigmas => ['FLOAT', ''] 3383 3384 For parameters with array polydispersity: 3385 3386 parameter => ['FLOAT', ''] 3387 parameter.width => ['FILENAME', 'array', [x1, ...], [w1, ...]] 3388 parameter.npts => ['FLOAT', ''] 3389 parameter.nsigmas => ['FLOAT', ''] 3390 """ 3391 # Do nothing if not setting polydispersity 3392 if len(value[1]) == 0: 3393 return 3394 3395 try: 3396 name = item[7].Name 3397 param_name = name.split('.')[0] 3398 item[7].SetValue(value[1]) 3399 selection = item[7].GetCurrentSelection() 3400 dispersity = item[7].GetClientData(selection) 3401 disp_model = dispersity() 3402 3403 if value[1] == 'array': 3404 pd_vals = numpy.array(value[2]) 3405 pd_weights = numpy.array(value[3]) 3406 if len(pd_vals) == 0 or len(pd_vals) != len(pd_weights): 3407 msg = ("bad array distribution parameters for %s" 3408 % param_name) 3409 raise ValueError(msg) 3410 self._set_disp_cb(True, item=item) 3411 self._set_array_disp_model(name=name, 3412 disp=disp_model, 3413 values=pd_vals, 3414 weights=pd_weights) 3415 else: 3416 self._set_disp_cb(False, item=item) 3417 
self._disp_obj_dict[name] = disp_model 3418 self.model.set_dispersion(param_name, disp_model) 3419 self.state._disp_obj_dict[name] = disp_model.type 3420 # TODO: It's not an array, why update values and weights? 3421 self.model._persistency_dict[param_name] = \ 3422 [self.values, self.weights] 3423 self.state.values = self.values 3424 self.state.weights = self.weights 3425 3426 except Exception: 3427 logging.error(traceback.format_exc()) 3428 print "Error in BasePage._paste_poly_help: %s" % \ 3429 sys.exc_info()[1] 3430 3431 def _set_disp_cb(self, isarray, item): 3397 3432 """ 3398 3433 Set cb for array disp 3399 3434 """ 3400 item[0].SetValue(False) 3401 item[0].Enable(False) 3402 item[2].Enable(False) 3403 item[3].Show(False) 3404 item[4].Show(False) 3405 item[5].SetValue('') 3406 item[5].Enable(False) 3407 item[6].SetValue('') 3408 item[6].Enable(False) 3435 if isarray: 3436 item[0].SetValue(False) 3437 item[0].Enable(False) 3438 item[2].Enable(False) 3439 item[3].Show(False) 3440 item[4].Show(False) 3441 item[5].SetValue('') 3442 item[5].Enable(False) 3443 item[6].SetValue('') 3444 item[6].Enable(False) 3445 else: 3446 item[0].Enable() 3447 item[2].Enable() 3448 item[3].Show(True) 3449 item[4].Show(True) 3450 item[5].Enable() 3451 item[6].Enable() 3409 3452 3410 3453 def update_pinhole_smear(self): -
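A recurring change in the basepage.py diff is that saved state now stores `disp_model.type` (a string) instead of the dispersion object itself, and restores the object by looking the string up in the `POLYDISPERSITY_MODELS` registry from `sasmodels.weights` (replacing the old `exec`-based reconstruction). A simplified sketch of that round trip, with stand-in classes and registry rather than the real sasmodels ones:

```python
# Stand-in dispersion classes; the real ones live in sasmodels.weights.
class GaussianDispersion:
    type = "gaussian"

class RectangleDispersion:
    type = "rectangle"

# Stand-in for sasmodels.weights.MODELS (imported as POLYDISPERSITY_MODELS).
POLYDISPERSITY_MODELS = {
    "gaussian": GaussianDispersion,
    "rectangle": RectangleDispersion,
}

def save_state(disp_obj_dict):
    """Persist only each dispersion object's type string."""
    return {key: disp.type for key, disp in disp_obj_dict.items()}

def restore_state(saved):
    """Rebuild dispersion objects from the registry by type string."""
    return {key: POLYDISPERSITY_MODELS[disp_type]()
            for key, disp_type in saved.items()}
```

Keeping only strings in the saved state avoids pickling live objects and the fragile `exec "from sasmodels.weights import %s"` path the diff removes.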
TabularUnified src/sas/sasgui/perspectives/fitting/fitpage.py ¶
ree4b3cb r6c382da

 import math
 import time
+import traceback
 
 from sasmodels.weights import MODELS as POLYDISPERSITY_MODELS
…
 msg = "Error: This model state has missing or outdated "
 msg += "information.\n"
-msg += "%s" % (sys.exc_value)
+msg += traceback.format_exc()
 wx.PostEvent(self._manager.parent,
              StatusEvent(status=msg, info="error"))
-
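The fitpage change swaps the long-deprecated `sys.exc_value` for `traceback.format_exc()`, which returns the whole formatted traceback rather than just the exception value. A minimal illustration of the updated error-message pattern (the function wrapper is invented for the example):

```python
import traceback

def status_message():
    """Build an error message the way the updated handler does."""
    try:
        raise KeyError("model state")
    except Exception:
        # format_exc() captures the full stack trace as a string,
        # not just str(exception)
        msg = "Error: This model state has missing or outdated "
        msg += "information.\n"
        msg += traceback.format_exc()
        return msg
```

The resulting string includes the `Traceback (most recent call last)` header and the exception type, which makes the status event far more useful for debugging than the bare exception value.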
TabularUnified src/sas/sasgui/perspectives/fitting/media/fitting.rst ¶
rd85c194 r05829fb

 Information on the SasView Optimisers <optimizer.rst>
 
+Writing a Plugin <plugin.rst>
-
TabularUnified src/sas/sasgui/perspectives/fitting/media/fitting_help.rst ¶
rb64b87c r05829fb

 * By :ref:`Writing_a_Plugin`
 
-*NB: Because of the way these options are implemented, it is not possible for them*
-*to use the polydispersity algorithms in SasView. Only models in the model library*
-*can do this. At the time of writing (Release 3.1.0) work is in hand to make it*
-*easier to add new models to the model library.*
-
 .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
…
 the :ref:`Advanced` option.
 
+*NB: "Fit Parameters" has been split into two sections, those which can be
+polydisperse (shape and orientation parameters) and those which are not
+(scattering length densities, for example).*
+
 Sum|Multi(p1,p2)
 ^^^^^^^^^^^^^^^^
…
 *Advanced Custom Model Editor*.
 
-*NB: Unless you are confident about what you are doing, it is recommended that you*
-*only modify lines denoted with the ## <----- comments!*
+See :ref:`Writing_a_Plugin` for details on the plugin format.
+
+*NB: Sum/Product models are still using the SasView 3.x model format. Unless
+you are confident about what you are doing, it is recommended that you
+only modify lines denoted with the ## <----- comments!*
 
 When editing is complete, select *Run -> Compile* from the *Model Editor* menu bar. An
…
 
 *NB: Custom models shipped with SasView cannot be removed in this way.*
-
-.. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-
-.. _Writing_a_Plugin:
-
-Writing a Plugin
-----------------
-
-Advanced users can write their own model in Python and save it to the the SasView
-*plugin_models* folder
-
-*C:\\Users\\[username]\\.sasview\\plugin_models* - (on Windows)
-
-in .py format. The next time SasView is started it will compile the plugin and add
-it to the list of *Customized Models*.
-
-It is recommended that existing plugin models be used as templates.
 
 .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-
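The removed help text pointed users at the *plugin_models* folder; the format itself now lives in :ref:`Writing_a_Plugin`. As a hedged illustration only, a minimal plugin in the sasmodels style looks roughly like the sketch below (the model name, parameter, and defaults are invented for this example):

```python
# power_example.py -- an illustrative plugin model sketch,
# saved to the user's plugin_models folder
import numpy as np

name = "power_example"
title = "Illustrative power-law plugin"
description = "I(q) = scale * q^(-power) + background"

#             ["name", "units", default, [min, max], "type", "description"]
parameters = [["power", "", 4.0, [0.0, 6.0], "", "power-law exponent"]]

def Iq(q, power):
    # scale and background are applied by SasView itself
    return q ** -power

Iq.vectorized = True  # Iq accepts an array of q values
```

Existing plugin models shipped with SasView remain the best templates for the exact conventions expected by a given release.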
TabularUnified src/sas/sasgui/perspectives/fitting/pagestate.py ¶
r654e8e0 r6c382da

 from xml.dom.minidom import parseString
 from lxml import etree
+
+import sasmodels.weights
 
 import sas.sascalc.dataloader
…
             value = content[1]
         except Exception:
-            logging.error(traceback.format_exc())
+            msg = "Report string expected 'name: value' but got %r"%line
+            logging.error(msg)
         if name.count("State created"):
             repo_time = "" + value
…
                 title_name = HEADER % title
             except Exception:
-                logging.error(traceback.format_exc())
+                msg = "While parsing 'data: ...'\n"
+                logging.error(msg + traceback.format_exc())
         if name == "model name ":
             try:
…
                 q_range = CENTRE % q_name
             except Exception:
-                logging.error(traceback.format_exc())
+                msg = "While parsing 'Plotting Range: ...'\n"
+                logging.error(msg + traceback.format_exc())
         paramval = ""
         for lines in param_string.split(":"):
…
         # For self.values ={ disp_param_name: [vals,...],...}
         # and for self.weights ={ disp_param_name: [weights,...],...}
-        value_list = {}
         for item in LIST_OF_MODEL_ATTRIBUTES:
             element = newdoc.createElement(item[0])
…
 
         # Create doc for the dictionary of self._disp_obj_dic
-        for item in DISPERSION_LIST:
-            element = newdoc.createElement(item[0])
-            value_list = getattr(self, item[1])
-            for key, val in value_list.iteritems():
-                value = repr(val)
+        for tagname, varname, tagtype in DISPERSION_LIST:
+            element = newdoc.createElement(tagname)
+            value_list = getattr(self, varname)
+            for key, value in value_list.iteritems():
                 sub_element = newdoc.createElement(key)
                 sub_element.setAttribute('name', str(key))
…
         # Recover _disp_obj_dict from xml file
         self._disp_obj_dict = {}
-        for item in DISPERSION_LIST:
-            # Get node
-            node = get_content("ns:%s" % item[0], entry)
+        for tagname, varname, tagtype in DISPERSION_LIST:
+            node = get_content("ns:%s" % tagname, entry)
             for attr in node:
-                name = str(attr.get('name'))
-                val = attr.get('value')
-                value = val.split(" instance")[0]
-                disp_name = value.split("<")[1]
-                try:
-                    # Try to recover disp_model object from strings
-                    com = "from sas.models.dispersion_models "
-                    com += "import %s as disp"
-                    com_name = disp_name.split(".")[3]
-                    exec com % com_name
-                    disp_model = disp()
-                    attribute = getattr(self, item[1])
-                    attribute[name] = com_name
-                except Exception:
-                    logging.error(traceback.format_exc())
+                parameter = str(attr.get('name'))
+                value = attr.get('value')
+                if value.startswith("<"):
+                    try:
+                        # <path.to.NamedDistribution object/instance...>
+                        cls_name = value[1:].split()[0].split('.')[-1]
+                        cls = getattr(sasmodels.weights, cls_name)
+                        value = cls.type
+                    except Exception:
+                        logging.error("unable to load distribution %r for %s"
+                                      % (value, parameter))
+                        continue
+                _disp_obj_dict = getattr(self, varname)
+                _disp_obj_dict[parameter] = value
 
         # get self.values and self.weights dic. if exists
-        for item in LIST_OF_MODEL_ATTRIBUTES:
-            node = get_content("ns:%s" % item[0], entry)
+        for tagname, varname in LIST_OF_MODEL_ATTRIBUTES:
+            node = get_content("ns:%s" % tagname, entry)
             dic = {}
             value_list = []
             for par in node:
                 name = par.get('name')
-                values = par.text.split('\n')
+                values = par.text.split()
                 # Get lines only with numbers
                 for line in values:
…
                     except Exception:
                         # pass if line is empty (it happens)
-                        logging.error(traceback.format_exc())
+                        msg = ("Error reading %r from %s %s\n"
+                               % (line, tagname, name))
+                        logging.error(msg + traceback.format_exc())
             dic[name] = numpy.array(value_list)
-            setattr(self, item[1], dic)
+            setattr(self, varname, dic)
 
     def set_plot_state(self, figs, canvases):
…
 
         except:
-            logging.info("XML document does not contain fitting information.\n %s" % sys.exc_value)
+            logging.info("XML document does not contain fitting information.\n"
+                         + traceback.format_exc())
 
         return state
-
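The new pagestate code recovers a distribution's type name by parsing the repr string saved in older project files, instead of `exec`-ing a dynamically built import. The parsing step in isolation (the helper name is invented for illustration):

```python
def distribution_class_name(value):
    """Extract 'GaussianDispersion' from a saved repr string such as
    '<sasmodels.weights.GaussianDispersion object at 0x...>'.
    Plain type names pass through unchanged."""
    if not value.startswith("<"):
        return value
    # '<path.to.ClassName object/instance ...>' -> 'ClassName'
    return value[1:].split()[0].split('.')[-1]
```

The real code then looks the extracted name up with `getattr(sasmodels.weights, cls_name)` and stores the class's `type` string, logging and skipping the entry if the lookup fails.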
TabularUnified src/sas/sasgui/perspectives/pr/media/pr_help.rst ¶
rb64b87c r0391dae

 *P(r)* is set to be equal to an expansion of base functions of the type
 
-|bigphi|\_n(r) = 2.r.sin(|pi|\ .n.r/D_max)
+.. math::
+  \Phi_{n(r)} = 2 r sin(\frac{\pi n r}{D_{max}})
 
 The coefficient of each base function in the expansion is found by performing
 a least square fit with the following fit function
 
-|chi|\ :sup:`2` = |bigsigma|\ :sub:`i` [ I\ :sub:`meas`\ (Q\ :sub:`i`\ ) - I\ :sub:`th`\ (Q\ :sub:`i`\ ) ] :sup:`2` / (Error) :sup:`2` + Reg_term
+.. math::
 
-where I\ :sub:`meas`\ (Q) is the measured scattering intensity and
-I\ :sub:`th`\ (Q) is the prediction from the Fourier transform of the *P(r)*
-expansion.
+  \chi^2=\frac{\sum_i (I_{meas}(Q_i)-I_{th}(Q_i))^2}{error^2}+Reg\_term
+
 
-The *Reg_term* term is a regularization term set to the second derivative
-d\ :sup:`2`\ *P(r)* / dr\ :sup:`2` integrated over *r*. It is used to produce a
-smooth *P(r)* output.
+where $I_{meas}(Q_i)$ is the measured scattering intensity and $I_{th}(Q_i)$ is
+the prediction from the Fourier transform of the *P(r)* expansion.
+
+The $Reg\_term$ term is a regularization term set to the second derivative
+$d^2P(r)/d^2r$ integrated over $r$. It is used to produce a smooth *P(r)* output.
 
 .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
…
 system.
 
+P(r) inversion requires that the background be perfectly subtracted. This is
+often difficult to do well and thus many data sets will include a background.
+For those cases, the user should check the "estimate background" box and the
+module will do its best to estimate it.
+
+The P(r) module is constantly computing in the background what the optimum
+*number of terms* should be as well as the optimum *regularization constant*.
+These are constantly updated in the buttons next to the entry boxes on the GUI.
+These are almost always close and unless the user has a good reason to choose
+differently they should just click on the buttons to accept both. {D_max} must
+still be set by the user. However, besides looking at the output, the user can
+click the explore button which will bring up a graph of chi^2 vs Dmax over a
+range around the current Dmax. The user can change the range and the number of
+points to explore in that range. They can also choose to plot several other
+parameters as a function of Dmax including: I0, Rg, Oscillation parameter,
+background, positive fraction, and 1-sigma positive fraction.
+
 .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
…
 .. ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
 
-.. note:: This help document was last changed by Steve King, 01May2015
+.. note:: This help document was last modified by Paul Butler, 05 September, 2016
-
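The expansion basis introduced in the rst text above, Phi_n(r) = 2 r sin(pi n r / D_max), is straightforward to evaluate numerically. A short sketch (the grid size is illustrative):

```python
import numpy as np

def base_function(n, r, d_max):
    """Phi_n(r) = 2 r sin(pi n r / d_max), the n-th P(r) basis term."""
    return 2.0 * r * np.sin(np.pi * n * r / d_max)

d_max = 100.0
r = np.linspace(0.0, d_max, 5)   # [0, 25, 50, 75, 100]
phi1 = base_function(1, r, d_max)
```

Every basis term vanishes at r = 0 and r = D_max, so any P(r) built from this expansion automatically satisfies those boundary conditions.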
TabularUnified test/sasdataloader/test/utest_ascii.py ¶
rb699768 r7d94915

 f = self.loader.load("ascii_test_6.txt")
 # The length of the data is 5
-self.assertEqual(len(f.x), 4)
-self.assertEqual(f.x[0],0.013534)
-self.assertEqual(f.x[3],0.022254)
+self.assertEqual(f, None)
 
 if __name__ == '__main__':
-
TabularUnified Vagrantfile ¶
r96032b3 r601b93d

 # Every Vagrant development environment requires a box. You can search for
 # boxes at https://atlas.hashicorp.com/search.
-config.vm.box = "ubuntu1404"
-config.vm.box_url = "https://github.com/hnakamur/packer-templates/releases/download/v1.0.2/ubuntu-14-04-x64-virtualbox.box"
+config.vm.box = "ubuntu/trusty64"
 #config.vm.box = "fedora19"
 #config.vm.box_url = "https://dl.dropboxusercontent.com/u/86066173/fedora-19.box"