Search Results

Search found 5205 results on 209 pages for 'extra'.

Page 186/209 | < Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >

  • Reorganizing MySQL table to multiple rows by timestamp.

    - by Ben Burleson
    OK MySQL Wizards: I have a table of position data from multiple probes defined as follows: +----------+----------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +----------+----------+------+-----+---------+-------+ | time | datetime | NO | | NULL | | | probe_id | char(3) | NO | | NULL | | | position | float | NO | | NULL | | +----------+----------+------+-----+---------+-------+ A simple select outputs something like this: +---------------------+----------+----------+ | time | probe_id | position | +---------------------+----------+----------+ | 2010-05-05 14:16:42 | 00A | 0.0045 | | 2010-05-05 14:16:42 | 00B | 0.0005 | | 2010-05-05 14:16:42 | 00C | 0.002 | | 2010-05-05 14:16:42 | 01A | 0 | | 2010-05-05 14:16:42 | 01B | 0.001 | | 2010-05-05 14:16:42 | 01C | 0.0025 | | 2010-05-05 14:16:43 | 00A | 0.0045 | | 2010-05-05 14:16:43 | 00B | 0.0005 | | 2010-05-05 14:16:43 | 00C | 0.002 | | 2010-05-05 14:16:43 | 01A | 0 | | . | . | . | | . | . | . | | . | . | . | +---------------------+----------+----------+ However, I'd like to output something like this: +---------------------+--------+--------+-------+-----+-------+--------+ | time | 00A | 00B | 00C | 01A | 01B | 01C | +---------------------+--------+--------+-------+-----+-------+--------+ | 2010-05-05 14:16:42 | 0.0045 | 0.0005 | 0.002 | 0 | 0.001 | 0.0025 | | 2010-05-05 14:16:43 | 0.0045 | 0.0005 | 0.002 | 0 | 0.001 | 0.0025 | | 2010-05-05 14:16:44 | 0.0045 | 0.0005 | 0.002 | 0 | 0.001 | 0.0025 | | 2010-05-05 14:16:45 | 0.0045 | 0.0005 | 0.002 | 0 | 0.001 | 0.0025 | | 2010-05-05 14:16:46 | 0.0045 | 0.0005 | 0.002 | 0 | 0.001 | 0.0025 | | 2010-05-05 14:16:47 | 0.0045 | 0.0005 | 0.002 | 0 | 0.001 | 0.0025 | | . | . | . | . | . | . | . | | . | . | . | . | . | . | . | | . | . | . | . | . | . | . | +---------------------+--------+--------+-------+-----+-------+--------+ Ideally, the different probe position columns are dynamically generated based on data in the table. Is this possible, or am I pulling my hair out for nothing? I've tried GROUP BY time with GROUP_CONCAT that roughly gets the data out, but I can't separate that output into probe_id columns. mysql SELECT time, GROUP_CONCAT(probe_id), GROUP_CONCAT(position) FROM MG41 GROUP BY time LIMIT 10; +---------------------+-------------------------+------------------------------------+ | time | GROUP_CONCAT(probe_id) | GROUP_CONCAT(position) | +---------------------+-------------------------+------------------------------------+ | 2010-05-05 14:16:42 | 00A,00B,00C,01A,01B,01C | 0.0045,0.0005,0.002,0,0.001,0.0025 | | 2010-05-05 14:16:43 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | | 2010-05-05 14:16:44 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | | 2010-05-05 14:16:45 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | | 2010-05-05 14:16:46 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | | 2010-05-05 14:16:47 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | | 2010-05-05 14:16:48 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | | 2010-05-05 14:16:49 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | | 2010-05-05 14:16:50 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | | 2010-05-05 14:16:51 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 | +---------------------+-------------------------+------------------------------------+
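    A common way to get this pivot without GROUP_CONCAT is conditional aggregation: one MAX(CASE ...) per probe. This is only a sketch against the MG41 table from the question, and the probe_id list is hard-coded; a truly dynamic column set would need the query text to be built first (for example from SELECT DISTINCT probe_id) and executed as a prepared statement.

        -- Pivot probe readings into columns (probe ids hard-coded for illustration)
        SELECT
            time,
            MAX(CASE WHEN probe_id = '00A' THEN position END) AS `00A`,
            MAX(CASE WHEN probe_id = '00B' THEN position END) AS `00B`,
            MAX(CASE WHEN probe_id = '00C' THEN position END) AS `00C`,
            MAX(CASE WHEN probe_id = '01A' THEN position END) AS `01A`,
            MAX(CASE WHEN probe_id = '01B' THEN position END) AS `01B`,
            MAX(CASE WHEN probe_id = '01C' THEN position END) AS `01C`
        FROM MG41
        GROUP BY time;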

    Read the article

  • VB6 ADO Command to SQL Server

    - by Emtucifor
    I'm getting an inexplicable error with an ADO command in VB6 run against a SQL Server 2005 database. Here's some code to demonstrate the problem: Sub ADOCommand() Dim Conn As ADODB.Connection Dim Rs As ADODB.Recordset Dim Cmd As ADODB.Command Dim ErrorAlertID As Long Dim ErrorTime As Date Set Conn = New ADODB.Connection Conn.ConnectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Initial Catalog=database;Data Source=server" Conn.CursorLocation = adUseClient Conn.Open Set Rs = New ADODB.Recordset Rs.CursorType = adOpenStatic Rs.LockType = adLockReadOnly Set Cmd = New ADODB.Command With Cmd .Prepared = False .CommandText = "ErrorAlertCollect" .CommandType = adCmdStoredProc .NamedParameters = True .Parameters.Append .CreateParameter("@ErrorAlertID", adInteger, adParamOutput) .Parameters.Append .CreateParameter("@CreateTime", adDate, adParamOutput) Set .ActiveConnection = Conn Rs.Open Cmd ErrorAlertID = .Parameters("@ErrorAlertID").Value ErrorTime = .Parameters("@CreateTime").Value End With Debug.Print Rs.State ' Shows 0 - Closed Debug.Print Rs.RecordCount ' Of course this fails since the recordset is closed End Sub So this code was working not too long ago but now it's failing on the last line with the error: Run-time error '3704': Operation is not allowed when the object is closed Why is it closed? I just opened it and the SP returns rows. I ran a trace and this is what the ADO library is actually submitting to the server: declare @p1 int set @p1=1 declare @p2 datetime set @p2=''2010-04-22 15:31:07:770'' exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output select @p1, @p2 Running this as a separate batch from my query editor yields: Msg 102, Level 15, State 1, Line 4 Incorrect syntax near '2010'. Of course there's an error. Look at the double single quotes in there. What the heck could be causing that? I tried using adDBDate and adDBTime as data types for the date parameter, and they give the same results. When I make the parameters adParamInputOutput, then I get this: declare @p1 int set @p1=default declare @p2 datetime set @p2=default exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output select @p1, @p2 Running that as a separate batch yields: Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'default'. Msg 156, Level 15, State 1, Line 4 Incorrect syntax near the keyword 'default'. What the heck? SQL Server doesn't support this kind of syntax. You can only use the DEFAULT keyword in the actual SP execution statement. I should note that removing the extra single quotes from the above statement makes the SP run fine. ... Oh my. I just figured it out. I guess it's worth posting anyway.

    Read the article

  • Magento set Store Id - customer login - but still logged out

    - by user3564050
    I've got an overridden AccountController in which i set the current store to an other as currently running (example: Customer is in website default and store default, going to login page, click login, my loginPostAction sets the store to id "2" (on website 2) and then executes the parent code loginPostAction. The store is set, of course, but after the login and the redirect to home, the customer is not logged in anymore... Customer-sendlogindata-myaccountcontroller sets store-original account controller logs in without errors (cause $session customer is set)-redirect to home-customer is not logged in anymore... i set the store with Mage::app()-setCurrentStore($id); . And in index.php i've got an extra where the store is set to the right id (2) too and this works... but the customer is not logged in anymore.. is that an issue with the session cause different websites ? I don't want to globally share customer.. each website has his own customers, but every customer has to be able to login on default store. AccountController.php overridden: public $Website_Ids = array( array("code" => "gerstore", "id" => "3", "website" => "ger"), array("code" => "ukstore", "id" => "2", "website" => "uk"), array("code" => "esstore", "id" => "4", "website" => "es"), array("code" => "frstore", "id" => "5", "website" => "fr") ); public function loginPostAction() { $login = $this->getRequest()->get('login'); if(isset($login['username'])) { $found = null; foreach($this->Website_Ids as $WebsiteId) { $customer = Mage::getModel('customer/customer'); $customer->setWebsiteId($WebsiteId['id']); $customer->loadByEmail($login['username']); if(count($customer->getData()) > 0) { $found = $WebsiteId; } } if($found != null && Mage::app()->getStore()->getId() != $found['id']) { /* found, so set currentstore to id */ Mage::app()->setCurrentStore($found['id']); $_SESSION['current_store_b2b'] = $found; } /* not found, doesn't matter cause mage login exception handling */ } parent::loginPostAction(); } Index.php : session_start(); $session = $_SESSION['current_store_b2b']; if($session != null || $session != "") { Mage::app()->setCurrentStore($session['id']); Mage::run($session['code'], 'store'); } else { /* Store or website code */ $mageRunCode = isset($_SERVER['MAGE_RUN_CODE']) ? $_SERVER['MAGE_RUN_CODE'] : ''; /* Run store or run website */ $mageRunType = isset($_SERVER['MAGE_RUN_TYPE']) ? $_SERVER['MAGE_RUN_TYPE'] : 'store'; Mage::run($mageRunCode, $mageRunType); } Whats the matter ? Thanks.

    Read the article

  • Converting OpenGL co-ordinates to lower UIView (and UIImagePickerController)

    - by John Qualis
    Hi, I am new to OpenGL over iPhone. I am developing an iPhone app similar to a barcode reader but with an extra OpenGL layer. The bottommost layer is UIImagePickerController, then I use UIView on top and draw a rectangle at certain co-ordinates on the iphone screen. So far everything is OK. Then I am trying to draw an OpenGL 3-D model in that rectangle. I am able to load a 3-D model in the iPhone based on this code here - http://iphonedevelopment.blogspot.com/2008/12/start-of-wavefront-obj-file-loader.html I am not able to transform the co-ordinates of the rectangle into OpenGL co-ordinates. Appreciate any help. Do I need to use a matrix to translate the currentPosition of the 3-D model so it is drawn within myRect? The code is given below.. Appreciate any help/pointers in this regards. John -(void)setupView:(GLView*)view { const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0; GLfloat size; glEnable(GL_DEPTH_TEST); glMatrixMode(GL_PROJECTION); size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0); CGRect rect = view.bounds; glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar); glViewport(0, 0, rect.size.width, rect.size.height); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClearColor(0.0f, 0.0f, 0.0f, 0.0f); NSString *path = [[NSBundle mainBundle] pathForResource:@"plane" ofType:@"obj"]; OpenGLWaveFrontObject *theObject = [[OpenGLWaveFrontObject alloc] initWithPath:path]; Vertex3D position; position.z = -8.0; position.y = 3.0; position.x = 2.0; theObject.currentPosition = position; self.plane = theObject; [theObject release]; } (void)drawView:(GLView*)view; { static GLfloat rotation = 0.0; glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); glColor4f(0.0, 0.5, 1.0, 1.0); // the coordinates of the rectangle are // myRect.x, myRect.y, myRect.width, myRect.height // Do I need to use a matrix to translate the currentPosition of the // 3-D model so it is drawn within myRect? //glOrthof(-160.0f, 160.0f, -240.0f, 240.0f, -1.0f, 1.0f); [plane drawSelf]; }
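    One way to line the model up with the UIKit rectangle is to convert the rectangle's center from screen pixels into world units at the model's depth, reusing the same field of view and aspect ratio that built the frustum in setupView. The helper below is only a sketch: its name is invented, and it treats the 45 degrees as the horizontal field of view, matching the glFrustumf call above.

        // Map a UIKit point (origin top-left, y growing downwards) to world x/y at depth z.
        - (Vertex3D)worldPointFor:(CGPoint)p depth:(GLfloat)z view:(GLView *)view
        {
            CGRect bounds = view.bounds;
            GLfloat aspect     = bounds.size.width / bounds.size.height;
            GLfloat halfWidth  = fabsf(z) * tanf(DEGREES_TO_RADIANS(45.0) / 2.0);
            GLfloat halfHeight = halfWidth / aspect;

            Vertex3D world;
            world.x = ((p.x / bounds.size.width)  * 2.0f - 1.0f) *  halfWidth;
            world.y = ((p.y / bounds.size.height) * 2.0f - 1.0f) * -halfHeight; // flip y
            world.z = z;
            return world;
        }

    Feeding the center of myRect through this and assigning the result to theObject.currentPosition should drop the model inside the rectangle; the rectangle's width can be run through the same mapping to choose a scale.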

    Read the article

  • Making a jQuery tooltip retrieve a new value every time the mouse moves.

    - by Micheal Smith
    As i am developing an application that makes use of a tooltip that will display a different value when the user moves the mouse. The user mouses over a table cell and the application then generates a number, the further right the cursor moves in the cell, the higher the value increases. I have created a tooltip that runs and when the cursor mouses over the cell, it does indeed show the correct value. But, when the i move the mouse, it does not show the new value but just the older one. I need to know how to make it update everytime the mouse moves or the value of a variable changes, Any ideas for the problem? <table> <tr id="mon_Section"> <td id="day_Title">Monday</td> <td id="mon_Row"></td> </tr> </table> Below is the document.ready function that calls my function: $(document).ready(function() { $("#mon_Row").mousemove(calculate_Time); }); Below is the function: <script type="text/javascript"> var mon_Pos = 0; var hour = 0; var minute = 0; var orig = 0; var myxpos = 0; function calculate_Time (event) { myxpos = event.pageX; myxpos = myxpos-194; if(myxpos<60) { orig = myxpos; $('#mon_Row').attr("title", orig); } if (myxpos>=60 && myxpos<120) { orig=myxpos; $('#mon_Row').attr("title", orig); } if (myxpos>=120 && myxpos<180) { orig=myxpos; $('#mon_Row').attr("title", orig); Inside the function is the code to generate the tooltip: $('#mon_Row').each(function() { $(this).qtip( { content: { text: false }, position: 'topRight', hide: { fixed: true // Make it fixed so it can be hovered over }, style: { padding: '5px 15px', // Give it some extra padding name: 'dark' // And style it with the preset dark theme } }); }); I know that a new value is being assigned to the cells title attribute because it will display inside the standard small tooltip that a browser will display. The JQuery tooltip will not grab the new value and display it, only the variables initial value when it was called.
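    One way around the plugin caching the title is to stop depending on it re-reading the attribute and instead drive a small tooltip element straight from the mousemove handler. This is only a sketch: the #follow_tip div is an invented element, not part of the original markup or of qTip, but it refreshes on every pixel of movement.

        $(document).ready(function () {
            // Invented helper element that follows the cursor over #mon_Row
            var $tip = $('<div id="follow_tip"></div>')
                .css({ position: 'absolute', padding: '5px 15px',
                       background: '#333', color: '#fff', display: 'none' })
                .appendTo('body');

            $('#mon_Row').mousemove(function (event) {
                var orig = event.pageX - 194;          // same offset as calculate_Time
                $(this).attr('title', orig);           // keep the attribute in sync
                $tip.text(orig)                        // refresh the visible value
                    .css({ left: event.pageX + 12, top: event.pageY + 12 })
                    .show();
            }).mouseleave(function () {
                $tip.hide();
            });
        });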

    Read the article

  • How do I use ViewScripts on Zend_Form File Elements?

    - by Sonny
    I am using this ViewScript for my standard form elements: <div class="field" id="field_<?php echo $this->element->getId(); ?>"> <?php if (0 < strlen($this->element->getLabel())) : ?> <?php echo $this->formLabel($this->element->getName(), $this->element->getLabel());?> <?php endif; ?> <span class="value"><?php echo $this->{$this->element->helper}( $this->element->getName(), $this->element->getValue(), $this->element->getAttribs() ) ?></span> <?php if (0 < $this->element->getMessages()->length) : ?> <?php echo $this->formErrors($this->element->getMessages()); ?> <?php endif; ?> <?php if (0 < strlen($this->element->getDescription())) : ?> <span class="hint"><?php echo $this->element->getDescription(); ?></span> <?php endif; ?> </div> Trying to use that ViewScript alone results in an error: Exception caught by form: No file decorator found... unable to render file element Looking at this FAQ revealed part of my problem, and I updated my form element decorators like this: 'decorators' => array( array('File'), array('ViewScript', array('viewScript' => 'form/field.phtml')) ) Now it's rendering the file element twice, once within my view script, and extra elements with the file element outside my view script: <input type="hidden" name="MAX_FILE_SIZE" value="8388608" id="MAX_FILE_SIZE" /> <input type="hidden" name="UPLOAD_IDENTIFIER" value="4b5f7335a55ee" id="progress_key" /> <input type="file" name="upload_file" id="upload_file" /> <div class="field" id="field_upload_file"> <label for="upload_file">Upload File</label> <span class="value"><input type="file" name="upload_file" id="upload_file" /></span> </div> Any ideas on how to handle this properly with a ViewScript?

    Read the article

  • Help with C# program design implementation: multiple array of lists or a better way?

    - by Bob
    I'm creating a 2D tile-based RPG in XNA and am in the initial design phase. I was thinking of how I want my tile engine to work and came up with a rough sketch. Basically I want a grid of tiles, but at each tile location I want to be able to add more than one tile and have an offset. I'd like this so that I could do something like add individual trees on the world map to give more flair. Or set bottles on a bar in some town without having to draw a bunch of different bar tiles with varying bottles. But maybe my reach is greater than my grasp. I went to implement the idea and had something like this in my Map object: List<Tile>[,] Grid; But then I thought about it. Let's say I had a world map of 200x200, which would actually be pretty small as far as RPGs go. That would amount to 40,000 Lists. To my mind I think there has to be a better way. Now this IS pre-mature optimization. I don't know if the way I happen to design my maps and game will be able to handle this, but it seems needlessly inefficient and something that could creep up if my game gets more complex. One idea I have is to make the offset and the multiple tiles optional so that I'm only paying for them when needed. But I'm not sure how I'd do this. A multiple array of objects? object[,] Grid; So here's my criteria: A 2D grid of tile locations Each tile location has a minimum of 1 tile, but can optionally have more Each extra tile can optionally have an x and y offset for pinpoint placement Can anyone help with some ideas for implementing such a design (don't need it done for me, just ideas) while keeping memory usage to a minimum? If you need more background here's roughly what my Map and Tile objects amount to: public struct Map { public Texture2D Texture; public List<Rectangle> Sources; //Source Rectangles for where in Texture to get the sprite public List<Tile>[,] Grid; } public struct Tile { public int Index; //Where in Sources to find the source Rectangle public int X, Y; //Optional offsets }
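    One way to make the extra tiles and offsets optional is to keep a plain Tile[,] grid for the single base tile and move the rare decorated cells into a sparse dictionary, so the 40,000 lists never exist unless a cell actually needs them. The sketch below uses invented member names and assumes XNA's Point struct; it illustrates the idea rather than being a drop-in replacement.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public struct Tile
        {
            public int Index;   // where in Sources to find the source Rectangle
            public int X, Y;    // optional pixel offsets
        }

        public class Map
        {
            public Tile[,] Base;                          // exactly one tile per cell
            private readonly Dictionary<Point, List<Tile>> extras =
                new Dictionary<Point, List<Tile>>();      // only decorated cells pay

            public void AddExtra(int x, int y, Tile tile)
            {
                Point key = new Point(x, y);
                List<Tile> list;
                if (!extras.TryGetValue(key, out list))
                    extras[key] = list = new List<Tile>();
                list.Add(tile);
            }

            public IEnumerable<Tile> TilesAt(int x, int y)
            {
                yield return Base[x, y];                  // base tile always exists
                List<Tile> list;
                if (extras.TryGetValue(new Point(x, y), out list))
                    foreach (Tile t in list)
                        yield return t;                   // extras only where added
            }
        }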

    Read the article

  • Convert NSData into Hex NSString

    - by Dawson
    With reference to the following question: Convert NSData into HEX NSSString I have solved the problem using the solution provided by Erik Aigner which is: NSData *data = ...; NSUInteger capacity = [data length] * 2; NSMutableString *stringBuffer = [NSMutableString stringWithCapacity:capacity]; const unsigned char *dataBuffer = [data bytes]; NSInteger i; for (i=0; i<[data length]; ++i) { [stringBuffer appendFormat:@"%02X", (NSUInteger)dataBuffer[i]]; } However, there is one small problem in that if there are extra zeros at the back, the string value would be different. For eg. if the hexa data is of a string @"3700000000000000", when converted using a scanner to integer: unsigned result = 0; NSScanner *scanner = [NSScanner scannerWithString:stringBuffer]; [scanner scanHexInt:&result]; NSLog(@"INTEGER: %u",result); The result would be 4294967295, which is incorrect. Shouldn't it be 55 as only the hexa 37 is taken? So how do I get rid of the zeros? EDIT: (In response to CRD) Hi, thanks for clarifying my doubts. So what you're doing is to actually read the 64-bit integer directly from a byte pointer right? However I have another question. How do you actually cast NSData to a byte pointer? To make it easier for you to understand, I'll explain what I did originally. Firstly, what I did was to display the data of the file which I have (data is in hexadecimal) NSData *file = [NSData dataWithContentsOfFile:@"file path here"]; NSLog(@"Patch File: %@",file); Output: Next, what I did was to read and offset the first 8 bytes of the file and convert them into a string. // 0-8 bytes [file seekToFileOffset:0]; NSData *b = [file readDataOfLength:8]; NSUInteger capacity = [b length] * 2; NSMutableString *stringBuffer = [NSMutableString stringWithCapacity:capacity]; const unsigned char *dataBuffer = [b bytes]; NSInteger i; for (i=0; i<[b length]; ++i) { [stringBuffer appendFormat:@"%02X", (NSUInteger)dataBuffer[i]]; } NSLog(@"0-8 bytes HEXADECIMAL: %@",stringBuffer); As you can see, 0x3700000000000000 is the next 8 bytes. The only changes I would have to make to access the next 8 bytes would be to change the value of SeekFileToOffset to 8, so as to access the next 8 bytes of data. All in all, the solution you gave me is useful, however it would not be practical to enter the hexadecimal values manually. If formatting the bytes as a string and then parsing them is not the way to do it, then how do I access the first 8 bytes of the data directly and cast them into a byte pointer?
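    To answer the last question directly: -[NSData getBytes:range:] (or the bytes pointer plus an offset) copies the raw field without any hex-string detour. The sketch below assumes the 8-byte value is stored little-endian, which is what a leading 0x37 followed by zero bytes suggests, and the offset is illustrative.

        #import <Foundation/Foundation.h>

        NSData *file = [NSData dataWithContentsOfFile:@"file path here"];

        uint64_t raw = 0;
        NSUInteger offset = 8;                              // wherever the field starts
        [file getBytes:&raw range:NSMakeRange(offset, sizeof(raw))];

        uint64_t value = CFSwapInt64LittleToHost(raw);      // 0x37,00,00,... -> 55
        NSLog(@"INTEGER: %llu", (unsigned long long)value);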

    Read the article

  • Slow MySQL query... only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log: # Query_time: 543 Lock_time: 0 Rows_sent: 0 Rows_examined: 124948974 use statsdb; SELECT count(distinct Visits.visitorid) as 'uniques' FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime>=1275721200 and visittime<=1275807599 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9 AND Visits.visitorid NOT IN (SELECT Visits.visitorid FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime<1275721200 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9); It's basically counting unique visitors, and it's doing that by counting the visitors for today and then substracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long - even with the same exact query under the same server load. Here's the EXPLAIN on this query. As you can see it's using the indexes I've set up: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY Visits range visittime_visitorid,visitorid visittime_visitorid 4 NULL 82500 Using where; Using index 1 PRIMARY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where 2 DEPENDENT SUBQUERY Visits ref visittime_visitorid,visitorid visitorid 8 func 1 Using where 2 DEPENDENT SUBQUERY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods. Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time. Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing? In case you want server info: Accessing via PHP 4.4.4 MySQL 4.1.22 All tables are InnoDB We run optimize table on all tables weekly The sum of both the tables used in the query is 500 MB MySQL config: key_buffer = 350M max_allowed_packet = 16M thread_stack = 128K sort_buffer = 14M read_buffer = 1M bulk_insert_buffer_size = 400M set-variable = max_connections=150 query_cache_limit = 1048576 query_cache_size = 50777216 query_cache_type = 1 tmp_table_size = 203554432 table_cache = 120 thread_cache_size = 4 wait_timeout = 28800 skip-external-locking innodb_file_per_table innodb_buffer_pool_size = 3512M innodb_log_file_size=100M innodb_log_buffer_size=4M
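    On the "better way" question, one common rewrite is to turn the dependent NOT IN subquery into a LEFT JOIN against earlier visits and keep only visitors with no earlier row, which old MySQL versions tend to plan far more predictably. This is only a sketch: the column-to-table assignment (candidateid, segmentid and omit on Visitors) is guessed from the index names in the EXPLAIN, so adjust it to the real schema.

        SELECT COUNT(DISTINCT v.visitorid) AS uniques
        FROM Visits v
        JOIN Visitors vr      ON vr.visitorid   = v.visitorid
        LEFT JOIN Visits prev ON prev.visitorid = v.visitorid
                             AND prev.visittime < 1275721200
        WHERE v.visittime BETWEEN 1275721200 AND 1275807599
          AND vr.candidateid IN (32)
          AND vr.segmentid = 9
          AND (vr.omit = 0 OR vr.omit >= 1275807599)
          AND prev.visitorid IS NULL;   -- no earlier visit means new today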

    Read the article

  • Is it OK to dynamic_cast "this" as a return value?

    - by Panayiotis Karabassis
    This is more of a design question. I have a template class, and I want to add extra methods to it depending on the template type. To practice the DRY principle, I have come up with this pattern (definitions intentionally omitted): template <class T> class BaseVector: public boost::array<T, 3> { protected: BaseVector<T>(const T x, const T y, const T z); public: bool operator == (const Vector<T> &other) const; Vector<T> operator + (const Vector<T> &other) const; Vector<T> operator - (const Vector<T> &other) const; Vector<T> &operator += (const Vector<T> &other) { (*this)[0] += other[0]; (*this)[1] += other[1]; (*this)[2] += other[2]; return *dynamic_cast<Vector<T> * const>(this); } } template <class T> class Vector : public BaseVector<T> { public: Vector<T>(const T x, const T y, const T z) : BaseVector<T>(x, y, z) { } }; template <> class Vector<double> : public BaseVector<double> { public: Vector<double>(const double x, const double y, const double z); Vector<double>(const Vector<int> &other); double norm() const; }; I intend BaseVector to be nothing more than an implementation detail. This works, but I am concerned about operator+=. My question is: is the dynamic cast of the this pointer a code smell? Is there a better way to achieve what I am trying to do (avoid code duplication, and unnecessary casts in the user code)? Or am I safe since, the BaseVector constructor is private?
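    One way to keep the shared implementation without the dynamic_cast is the curiously recurring template pattern: the base class is told the derived type, so the cast becomes a static_cast that is correct by construction. A sketch (the other operators and most bodies are omitted):

        #include <boost/array.hpp>

        template <class T, class Derived>
        class BaseVector : public boost::array<T, 3>
        {
        protected:
            BaseVector(const T x, const T y, const T z)
            {
                (*this)[0] = x; (*this)[1] = y; (*this)[2] = z;
            }

        public:
            Derived &operator += (const Derived &other)
            {
                (*this)[0] += other[0];
                (*this)[1] += other[1];
                (*this)[2] += other[2];
                return static_cast<Derived &>(*this);   // Derived always inherits from us
            }
        };

        template <class T>
        class Vector : public BaseVector<T, Vector<T> >
        {
        public:
            Vector(const T x, const T y, const T z)
                : BaseVector<T, Vector<T> >(x, y, z) { }
        };

        template <>
        class Vector<double> : public BaseVector<double, Vector<double> >
        {
        public:
            Vector(const double x, const double y, const double z)
                : BaseVector<double, Vector<double> >(x, y, z) { }
            double norm() const;
        };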

    Read the article

  • Ruby: Parse, replace, and evaluate a string formula

    - by Swartz
    I'm creating a simple Ruby on Rails survey application for a friend's psychological survey project. So we have surveys, each survey has a bunch of questions, and each question has one of the options participants can choose from. Nothing exciting. One of the interesting aspects is that each answer option has a score value associated with it. And so for each survey a total score needs to be calculated based on these values. Now my idea is instead of hard-coding calculations is to allow user add a formula by which the total survey score will be calculated. Example formulas: "Q1 + Q2 + Q3" "(Q1 + Q2 + Q3) / 3" "(10 - Q1) + Q2 + (Q3 * 2)" So just basic math (with some extra parenthesis for clarity). The idea is to keep the formulas very simple such that anyone with basic math can enter them without resolving to some fancy syntax. My idea is to take any given formula and replace placeholders such as Q1, Q2, etc with the score values based on what the participant chooses. And then eval() the newly formed string. Something like this: f = "(Q1 + Q2 + Q3) / 2" # some crazy formula for this survey values = {:Q1 => 1, :Q2 => 2, :Q3 => 2} # values for substitution result = f.gsub(/(Q\d+)/) {|m| values[$1.to_sym] } # string to be eval()-ed eval(result) So my questions are: Is there a better way to do this? I'm open to any suggestions. How to handle formulas where not all placeholders were successfully replaced (e.g. one question wasn't answered)? Ex: {:Q3 = 2} wasn't in values hash? My idea is to rescue eval()... any thoughts? How to get proper result? Should be 2.5, but due to integer arithmetic, it will truncate to 2. I can't expect people who provide the correct formula (e.g. / 2.0 ) to understand this nuance. I do not expect this, but how to best protect eval() from abuse (e.g. bad formula, manipulated values coming in)? Thank you!
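    A sketch of the eval route with the three concerns handled up front: a whitelist regexp rejects anything that is not digits, Q tokens, parentheses or basic operators; unanswered questions fall back to 0; and to_f forces float arithmetic so (Q1 + Q2 + Q3) / 2 really gives 2.5. The helper name and the 0 fallback are assumptions, not from the post.

        ALLOWED = /\A[\s0-9Qq\.\+\-\*\/\(\)]+\z/

        def score(formula, values)
          raise ArgumentError, "formula contains illegal characters" unless formula =~ ALLOWED

          expr = formula.gsub(/Q\d+/i) do |token|
            (values[token.upcase.to_sym] || 0).to_f   # unanswered question scores 0
          end
          eval(expr)
        rescue SyntaxError, ZeroDivisionError
          nil
        end

        score("(Q1 + Q2 + Q3) / 2", { :Q1 => 1, :Q2 => 2, :Q3 => 2 })  # => 2.5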

    Read the article

  • Centering a div without setting width

    - by Daniel
    Is there a way to do this? When using navigation that can change the number of items often, it would be nice not having to calculate the with and updating the css, but just having a solution that works. if that's impossible (I'm assuming it is) can anyone point me to a javascript that can do this? edit re: provide code some code basically I'm working with, what I think is, the most typical setup <div class="main"> <div class="nav"> <ul> <li>short title</li> <li>Item 3 Long title</li> <li>Item 4 Long title</li> <li>Item 5 Long title</li> <li>Item 6 Extra Long title</li> </ul> </div> </div> edit .main { width:100%; text-align:center; } .nav { margin:0 auto; } .nav ul li { display:inline; text-align:left; } the issue I've found with this/these solutions is that the content is nudged to the right adding some right padding (of 40px) seems to fix this across the browsers I'm checking on (O FF IE). .nav { margin:0 auto; padding-right:40px; } I don't know where this value is coming from though and why 40px fixes this. Does anyone know where this is coming from? it's not a margin or padding but no matter what I do the first about 40px can not be used for placement. Maybe it's the ul or li that's adding this. I've had a look at the display:table-cell way of doing this, but there's that complication with IE and it still has the same issue as the other solution edit (final) okay I've tried some things in regard to the indent. I've reset all padding to 0 *{padding:0;} that fixed it, and I don't need to offset the padding (I think I'll leave my whole process up so if anyone comes across this, it'll save them some time) thanks for the comments and replies
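    For what it's worth, the stray ~40px is almost certainly the list's default indent (browsers ship ul with roughly 40px of left padding or margin), which is why the global padding reset made it disappear. A less heavy-handed sketch that also answers the original question, centering without a width, is display:inline-block on the nav:

        .main      { text-align: center; }
        .nav       { display: inline-block;       /* shrinks to fit its items */
                     *display: inline; zoom: 1; } /* IE6/7 equivalent of inline-block */
        .nav ul    { margin: 0; padding: 0;       /* kill the default list indent */
                     list-style: none; }
        .nav ul li { display: inline; text-align: left; }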

    Read the article

  • Soften a colour border, maybe with a gradient, programmatically.

    - by ProfK
    I have a narrow header in corporate colour, bright red, with the content below on a just-off-white background. Ive tried softening the long line where these colours meet using border type divs with intermediate backgrounds, but I think I need the original type curved gradient 'area transitions'. I could copy the 1024px wide, and too narrow (vertically), header gif from their web site, and chop it up for eight border images, but that seems clumsy, and I'm looking for something I can apply anywhere, without needing images. I am able to do round borders in the x-y plane, but I'm curious as to how I can apply a gradient to any chosen colour transition. The extra divs I'm using as border elements above and below '#top-section' arose when I was toying with making many divs for one bordered element. This seemed the ultimate in border manipulation, sans code, but very tedious to spec in CSS and lay out a new border for each bordered element. <div id="topsection"> <div style="float: right; width: 300px; padding-right: 5px;"> <div style="font-size: small; text-align: right;"> Provantage Media Management System</div> <div style="font-size: x-small; text-align: right;"> <asp:LoginStatus ID="loginStatus" runat="server" LoginText="Log in" LogoutText="Log out" /> </div> </div> <span style="padding-left: 15px;">Main Menu</span><span id="content-title"> <asp:ContentPlaceHolder ID="titlePlaceHolder" runat="server"> </asp:ContentPlaceHolder> </span> <div style="background-color: white; height: 2px;"> </div> <div style="background-color: white; height: 3px;"></div> And the CSS: #topsection { background-color: #EB2728; color: white; height: 35px; font-size: large; font-weight: bold; }
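    If images are off the table, a thin gradient strip between the header and the content can do the softening in pure CSS. The sketch below invents a #topsection-fade div placed directly under #topsection and uses the vendor-prefixed gradient syntaxes of the period, fading the corporate red (#EB2728) into an assumed off-white (#FAF9F7).

        #topsection-fade {
            height: 12px;
            background: #EB2728;  /* fallback for browsers without gradients */
            background: -moz-linear-gradient(top, #EB2728, #FAF9F7);
            background: -webkit-gradient(linear, left top, left bottom,
                                         from(#EB2728), to(#FAF9F7));
            background: linear-gradient(to bottom, #EB2728, #FAF9F7);
        }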

    Read the article

  • Why does Firefox round-trip to the server to determine whether my files are modified?

    - by erikkallen
    I have some static content on my web site that I have set up caching for (using Asp.NET MVC). According to Firebug, the first time I open the page, Firefox sends this request: GET /CoreContent/Core.css?asm=0.7.3614.34951 Host: 127.0.0.1:3916 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 (.NET CLR 3.5.30729) Accept: text/css,*/*;q=0.1 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Referer: http://127.0.0.1:3916/Edit/1/101 Cookie: .ASPXAUTH=52312E5A802C1A079E2BA29AA2BFBC5A38058977B84452D62ED52855D4164659B4307661EC73A307BFFB2ED3871C67CB3A9AAFDB3A75A99AC0A21C63A6AADE9A11A7138C672E75125D9FF3EFFBD9BF62 Pragma: no-cache Cache-Control: no-cache Which my server replies to with this: Server: ASP.NET Development Server/9.0.0.0 Date: Mon, 23 Nov 2009 18:44:41 GMT X-AspNet-Version: 2.0.50727 X-AspNetMvc-Version: 1.0 Cache-Control: public, max-age=31535671 Expires: Tue, 23 Nov 2010 18:39:12 GMT Last-Modified: Mon, 23 Nov 2009 18:39:12 GMT Vary: * Content-Type: text/css Content-Length: 15006 Connection: Close So far, so good. However, if I refresh Firefox (not a cache-clearing refresh, just a normal one), during that refresh cycle Firefox will once again go to the server with this request: GET /CoreContent/Core.css?asm=0.7.3614.34951 Host: 127.0.0.1:3916 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 (.NET CLR 3.5.30729) Accept: text/css,*/*;q=0.1 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Referer: http://127.0.0.1:3916/Edit/1/101 Cookie: .ASPXAUTH=52312E5A802C1A079E2BA29AA2BFBC5A38058977B84452D62ED52855D4164659B4307661EC73A307BFFB2ED3871C67CB3A9AAFDB3A75A99AC0A21C63A6AADE9A11A7138C672E75125D9FF3EFFBD9BF62 If-Modified-Since: Mon, 23 Nov 2009 18:39:20 GMT Cache-Control: max-age=0 to which my server responds 304 Not Modified. Why does Firefox issue this second request? In the first response, I said that the cache does not expire for a year (I intend to use query parameters whenever things change). Do I have to add another response header to prevent this extra roundtrip? Edit: It does not matter whether I press refresh, or whether I go to the page again (or a different URL, which references the same external files). Firefox does the same again. Also, I don't claim this to be a bug in FF, I just wonder if there is another header I can set which means "This document will never change, don't bother me again".

    Read the article

  • C#: My callback function gets called twice for every sent request

    - by Madi D.
    I've Got a program that uploads/downloads files into an online server,Has a callback to report progress and log it into a textfile, The program is built with the following structure: public void Upload(string source, string destination) { //Object containing Source and destination to pass to the threaded function KeyValuePair<string, string> file = new KeyValuePair<string, string>(source, destination); //Threading to make sure no blocking happens after calling upload Function Thread t = new Thread(new ParameterizedThreadStart(amazonHandler.TUpload)); t.Start(file); } private void TUpload(object fileInfo) { KeyValuePair<string, string> file = (KeyValuePair<string, string>)fileInfo; /* Some Magic goes here,Checking The file and Authorizing Upload */ var ftiObject = new FtiObject () { FileNameOnHDD = file.Key, DestinationPath = file.Value, //Has more data used for calculations. }; //Threading to make sure progress gets callback gets called. Thread t = new Thread(new ParameterizedThreadStart(amazonHandler.UploadOP)); t.Start(ftiObject); //Signal used to stop progress untill uploadCompleted is called. uploadChunkDoneSignal.WaitOne(); /* Some Extra Code */ } private void UploadOP(object ftiSentObject) { FtiObject ftiObject = (FtiObject)ftiSentObject; /* Some useless code to create the uri and prepare the ftiObject. */ // webClient.UploadFileAsync will open a thread that // will upload the file and report // progress/complete using registered callback functions. webClient.UploadFileAsync(uri, "PUT", ftiObject.FileNameOnHDD, ftiObject); } I got a callback that is registered to the Webclient's UploadProgressChanged event , however it is getting called twice per sent request. void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e) { FtiObject ftiObject = (FtiObject )e.UserState; Logger.log(ftiObject.FileNameOnHDD, (double)e.BytesSent ,e.TotalBytesToSend); } Log Output: Filename: C:\Text1.txt Uploaded:1024 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:1024 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:2048 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:2048 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:3072 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:3072 TotalFileSize: 665241 Etc... I am watching the Network Traffic using a watcher, and only 1 request is being sent. Some how i cant Figure out why the callback is being called twice, my doubt was that the callback is getting fired by each thread opened(the main Upload , and TUpload), however i dont know how to test if thats the cause. Note: The reason behind the many /**/ Comments is to indicate that the functions do more than just opening threads, and threading is being used to make sure no blocking occurs (there a couple of "Signal.WaitOne()" around the code for synchronization)

    Read the article

  • Should I skip authorization, with CanCan, of an action that instantiates a resource?

    - by irkenInvader
    I am writing a web app to pick random lists of cards from larger, complete sets of cards. I have a Card model and a CardSet model. Both models have a full RESTful set of 7 actions (:index, :new, :show, etc). The CardSetsController has an extra action for creating random sets: :random. # app/models/card_set.rb class CardSet < ActiveRecord::Base belongs_to :creator, :class_name => "User" has_many :memberships has_many :cards, :through => :memberships # app/models/card.rb class Card < ActiveRecord::Base belongs_to :creator, :class_name => "User" has_many :memberships has_many :card_sets, :through => :memberships I have added Devise for authentication and CanCan for authorizations. I have users with an 'editor' role. Editors are allowed to create new CardSets. Guest users (Users who have not logged in) can only use the :index and :show actions. These authorizations are working as designed. Editors can currently use both the :random and the :new actions without any problems. Guest users, as expected, cannot. # app/controllers/card_sets_controller.rb class CardSetsController < ApplicationController before_filter :authenticate_user!, :except => [:show, :index] load_and_authorize_resource I want to allow guest users to use the :random action, but not the :new action. In other words, they can see new random sets, but not save them. The "Save" button on the :random action's view is hidden (as designed) from the guest users. The problem is, the first thing the :random action does is build a new instance of the CardSet model to fill out the view. When cancan tries to load_and_authorize_resource a new CardSet, it throws a CanCan::AccessDenied exception. Therefore, the view never loads and the guest user is served a "You need to sign in or sign up before continuing" message. # app/controllers/card_sets_controllers.rb def random @card_set = CardSet.new( :name => "New Set of 10", :set_type => "Set of 10" ) I realize that I can tell load_and_authorize_resource to skip the :random action by passing :except => :random to the call, but that just feels "wrong" for some reason. What's the "right" way to do this? Should I create the new random set without instantiating a new CardSet? Should I go ahead and add the exception?
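    Rather than skipping authorization, the custom action can simply be granted to guests in the Ability class; load_and_authorize_resource will then build the CardSet and call authorize!(:random, @card_set), which passes. The sketch below is illustrative (the role? helper is an assumption, not standard Devise), and :random would also need to be added to the authenticate_user! :except list so the Devise filter does not redirect first.

        # app/models/ability.rb
        class Ability
          include CanCan::Ability

          def initialize(user)
            user ||= User.new                  # guest (not logged in)

            can [:read, :random], CardSet      # :index, :show and the custom action
            can [:read, :random], Card

            can :manage, CardSet if user.role?(:editor)   # editors may save sets
          end
        end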

    Read the article

  • Stepping into Ruby Meta-Programming: Generating proxy methods for multiple internal methods

    - by mstksg
    Hi all; I've multiply heard Ruby touted for its super spectacular meta-programming capabilities, and I was wondering if anyone could help me get started with this problem. I have a class that works as an "archive" of sorts, with internal methods that process and output data based on an input. However, the items in the archive in the class itself are represented and processed with integers, for performance purposes. The actual items outside of the archive are known by their string representation, which is simply number_representation.to_s(36). Because of this, I have hooked up each internal method with a "proxy method" that converts the input into the integer form that the archive recognizes, runs the internal method, and converts the output (either a single other item, or a collection of them) back into strings. The naming convention is this: internal methods are represented by _method_name; their corresponding proxy method is represented by method_name, with no leading underscore. For example: class Archive ## PROXY METHODS ## ## input: string representation of id's ## output: string representation of id's def do_something_with id result = _do_something_with id.to_i(36) return nil if result == nil return result.to_s(36) end def do_something_with_pair id_1,id_2 result = _do_something_with_pair id_1.to_i(36), id_2.to_i(36) return nil if result == nil return result.to_s(36) end def do_something_with_these ids result = _do_something_with_these ids.map { |n| n.to_i(36) } return nil if result == nil return result.to_s(36) end def get_many_from id result = _get_many_from id return nil if result == nil # no sparse arrays returned return result.map { |n| n.to_s(36) } end ## INTERNAL METHODS ## ## input: integer representation of id's ## output: integer representation of id's def _do_something_with id # does something with one integer-represented id, # returning an id represented as an integer end def do_something_with_pair id_1,id_2 # does something with two integer-represented id's, # returning an id represented as an integer end def _do_something_with_these ids # does something with multiple integer ids, # returning an id represented as an integer end def _get_many_from id # does something with one integer-represented id, # returns a collection of id's represented as integers end end There are a couple of reasons why I can't just convert them if id.class == String at the beginning of the internal methods: These internal methods are somewhat computationally-intensive recursive functions, and I don't want the overhead of checking multiple times at every step There is no way, without adding an extra parameter, to tell whether or not to re-convert at the end I want to think of this as an exercise in understanding ruby meta-programming Does anyone have any ideas? edit The solution I'd like would preferably be able to take an array of method names @@PROXY_METHODS = [:do_something_with, :do_something_with_pair, :do_something_with_these, :get_many_from] iterate through them, and in each iteration, put out the proxy method. I'm not sure what would be done with the arguments, but is there a way to test for arguments of a method? If not, then simple duck typing/analogous concept would do as well.
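    A sketch of the define_method route: iterate over the list of proxy names at class-definition time and generate each wrapper, converting arguments on the way in and results on the way out. It assumes every internal method takes only id arguments (single ids or an array of them), per the naming convention described; anything fancier would need per-method metadata.

        class Archive
          PROXY_METHODS = [:do_something_with, :do_something_with_pair,
                           :do_something_with_these, :get_many_from]

          PROXY_METHODS.each do |name|
            define_method(name) do |*ids|
              args = ids.map do |id|
                id.is_a?(Array) ? id.map { |n| n.to_i(36) } : id.to_i(36)
              end

              result = send("_#{name}", *args)    # call the underscored internal method

              case result
              when nil   then nil
              when Array then result.map { |n| n.to_s(36) }
              else            result.to_s(36)
              end
            end
          end
        end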

    Read the article

  • Supporting multiple instances of a plugin DLL with global data

    - by Bruno De Fraine
    Context: I converted a legacy standalone engine into a plugin component for a composition tool. Technically, this means that I compiled the engine code base to a C DLL which I invoke from a .NET wrapper using P/Invoke; the wrapper implements an interface defined by the composition tool. This works quite well, but now I receive the request to load multiple instances of the engine, for different projects. Since the engine keeps the project data in a set of global variables, and since the DLL with the engine code base is loaded only once, loading multiple projects means that the project data is overwritten. I can see a number of solutions, but they all have some disadvantages: You can create multiple DLLs with the same code, which are seen as different DLLs by Windows, so their code is not shared. Probably this already works if you have multiple copies of the engine DLL with different names. However, the engine is invoked from the wrapper using DllImport attributes and I think the name of the engine DLL needs to be known when compiling the wrapper. Obviously, if I have to compile different versions of the wrapper for each project, this is quite cumbersome. The engine could run as a separate process. This means that the wrapper would launch a separate process for the engine when it loads a project, and it would use some form of IPC to communicate with this process. While this is a relatively clean solution, it requires some effort to get working, I don't now which IPC technology would be best to set-up this kind of construction. There may also be a significant overhead of the communication: the engine needs to frequently exchange arrays of floating-point numbers. The engine could be adapted to support multiple projects. This means that the global variables should be put into a project structure, and every reference to the globals should be converted to a corresponding reference that is relative to a particular project. There are about 20-30 global variables, but as you can imagine, these global variables are referenced from all over the code base, so this conversion would need to be done in some automatic manner. A related problem is that you should be able to reference the "current" project structure in all places, but passing this along as an extra argument in each and every function signature is also cumbersome. Does there exist a technique (in C) to consider the current call stack and find the nearest enclosing instance of a relevant data value there? Can the stackoverflow community give some advice on these (or other) solutions?

    Read the article

  • How to send parameters to a web service via SOAP?

    - by Alejandra Meraz
    Before I start: I'm programming for Iphone, using objective C. I have already implemented a call to a web service function using NSURLRequest and NSURLConnection and SOAP. The function then returns a XML with the info I need. The code is as follows: NSString *soapMessage = [NSString stringWithFormat: @"<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n" "<soap:Body>\n" "<function xmlns=\"http://tempuri.org/\" />\n" "</soap:Body>\n" "</soap:Envelope>\n"]; NSURL *url = [NSURL URLWithString:@"http://myHost.com/myWebService/service.asmx"]; //the url to the WSDL NsMutableURLRequest theRequest = [[NSMutableURLRequest alloc] initWithURL:url]; NSString *msgLength = [NSString stringWithFormat:@"%d",[soapMessage length]]; [theRequest addValue:@"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [theRequest addValue:msgLength forHTTPHeaderField:@"Content-Lenght"]; [theRequest setHTTPMethod:@"POST"]; [theRequest addValue:@"myhost.com" forHTTPHeaderField:@"Host"]; [theRequest addValue:@"http://tempuri.org/function" forHTTPHeaderField:@"SOAPAction"]; [theRequest setHTTPBody:[soapMessage dataUsingEncoding:NSUTF8StringEncoding]]; theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; I basically copy and modified the soap request the web service gave as an example. i also implemented the methods didRecieveResponse didRecieveAuthenticationChallenge didRecievedData didFailWithError connectionDidFinishLoading. And it works perfectly. Now I need to send 2 parameters to the function: "location" and "module". I tried modifying the soapMessage like this: NSString *soapMessage = [NSString stringWithFormat: @"<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n" "<soap:Body xmlns=\"http://tempuri.org/\" />\n" "<m:GetMonitorList>\n" "<m:location>USA</m:location>\n" "<m:module>DEVELOPMENT</m:module>\n" "</m:GetMonitorList>\n" "</soap:Body>\n" "</soap:Envelope>\n"]; But is not working...any thoughts how should I modify it? Extra info: it seems to be working... kind of. But the webservice return nothing. During the connection, the method didReceiveResponse execute once and the didFinishLoading method executes as well. But not even once the method didReceiveData. I wonder if, even though there is no USA locations, it will still send at least something? is there a way to know which are the parameters the function is waiting for? I don't have access to the source of the webservice but i can access the WSDL.
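    For a typical ASMX document/literal service the parameters live inside the operation element, in the operation's own namespace; the attempt above self-closes <soap:Body ... /> and uses an m: prefix that is never declared. A hedged sketch of the envelope (copy the exact element names and namespace from the service's WSDL or .asmx help page):

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                       xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GetMonitorList xmlns="http://tempuri.org/">
              <location>USA</location>
              <module>DEVELOPMENT</module>
            </GetMonitorList>
          </soap:Body>
        </soap:Envelope>

    The SOAPAction header then has to name the same operation, for example http://tempuri.org/GetMonitorList rather than .../function.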

    Read the article

  • How to reduce redundant code when adding new C++0x rvalue reference operator overloads

    - by Inverse
    I am adding new operator overloads to take advantage of c++0x rvalue references, and I feel like I'm producing a lot of redundant code. I have a class, tree, that holds a tree of algebraic operations on double values. Here is an example use case: tree x = 1.23; tree y = 8.19; tree z = (x + y)/67.31 - 3.15*y; ... std::cout << z; // prints "(1.23 + 8.19)/67.31 - 3.15*8.19" For each binary operation (like plus), each side can be either an lvalue tree, rvalue tree, or double. This results in 8 overloads for each binary operation: // core rvalue overloads for plus: tree operator +(const tree& a, const tree& b); tree operator +(const tree& a, tree&& b); tree operator +(tree&& a, const tree& b); tree operator +(tree&& a, tree&& b); // cast and forward cases: tree operator +(const tree& a, double b) { return a + tree(b); } tree operator +(double a, const tree& b) { return tree(a) + b; } tree operator +(tree&& a, double b) { return std::move(a) + tree(b); } tree operator +(double a, tree&& b) { return tree(a) + std::move(b); } // 8 more overloads for minus // 8 more overloads for multiply // 8 more overloads for divide // etc which also has to be repeated in a way for each binary operation (minus, multiply, divide, etc). As you can see, there are really only 4 functions I actually need to write; the other 4 can cast and forward to the core cases. Do you have any suggestions for reducing the size of this code? PS: The class is actually more complex than just a tree of doubles. Reducing copies does dramatically improve performance of my project. So, the rvalue overloads are worthwhile for me, even with the extra code. I have a suspicion that there might be a way to template away the "cast and forward" cases above, but I can't seem to think of anything.

    Read the article

  • Primary language - C++/Qt, C#, Java?

    - by Airjoe
    I'm looking for some input, but let me start with a bit of background (for tl;dr skip to end). I'm an IT major with a concentration in networking. While I'm not a CS major nor do I want to program as a vocation, I do consider myself a programmer and do pretty well with the concepts involved. I've been programming since about 6th grade, started out with a proprietary game creation language that made my transition into C++ at college pretty easy. I like to make programs for myself and friends, and have been paid to program for local businesses. A bit about that- I wrote some programs for a couple local businesses in my senior year in high school. I wrote management systems for local shops (inventory, phone/pos orders, timeclock, customer info, and more stuff I can't remember). It definitely turned out to be over my head, as I had never had any formal programming education. It was a great learning experience, but damn was it crappy code. Oh yeah, by the way, it was all vb6. So, I've used vb6 pretty extensively, I've used c++ in my classes (intro to programming up to algorithms), used Java a little bit in another class (had to write a ping client program, pretty easy) and used Java for some simple Project Euler problems to help learn syntax and such when writing the program for the class. I've also used C# a bit for my own simple personal projects (simple programs, one which would just generate an HTTP request on a list of websites and notify if one responded unexpectedly or not at all, and another which just held a list of things to do and periodically reminded me to do them), things I would've written in vb6 a year or two ago. I've just started using Qt C++ for some undergrad research I'm working on. Now I've had some formal education, I [think I] understand organization in programming a lot better (I didn't even use classes in my vb6 programs where I really should have), how it's important to structure code, split into functions where appropriate, document properly, efficiency both in memory and speed, dynamic and modular programming etc. I was looking for some input on which language to pick up as my "primary". As I'm not a "real programmer", it will be mostly hobby projects, but will include some 'real' projects I'm sure. From my perspective: QtC++ and Java are cross platform, which is cool. Java and C# run in a virtual machine, but I'm not sure if that's a big deal (something extra to distribute, possibly a bit slower? I think Qt would require additional distributables too, right?). I don't really know too much more than this, so I appreciate any help, thanks! TL;DR Am an avocational programmer looking for a language, want quick and straight forward development, liked vb6, will be working with database driven GUI apps- should I go with QtC++, Java, C#, or perhaps something else?

    Read the article

  • Delphi - Read File To StringList, then delete and write back to file.

    - by Jkraw90
    I'm currently working on a program to generate the hashes of files, in Delphi 2010. As part of this I have a option to create User Presets, e.g. pre-defined choice of hashing algo's which the user can create/save/delete. I have the create and load code working fine. It uses a ComboBox and loads from a file "fhpre.ini", inside this file is the users presets stored in format of:- PresetName PresetCode (a 12 digit string using 0 for don't hash and 1 for do) On application loading it loads the data from this file into the ComboBox and an Array with the ItemIndex of ComboBox matching the corrisponding correct string of 0's and 1's in the Array. Now I need to implement a feature to have the user delete a preset from the list. So far my code is as follows, procedure TForm1.Panel23Click(Sender : TObject); var fil : textfile; contents : TStringList; x,i : integer; filline : ansistring; filestream : TFileStream; begin //Start Procedure //Load data into StringList contents := TStringList.Create; fileStream := TFileStream.Create((GetAppData+'\RFA\fhpre.ini'), fmShareDenyNone); Contents.LoadFromStream(fileStream); fileStream.Destroy(); //Search for relevant Preset i := 0; if ComboBox4.Text <> Contents[i] then begin Repeat i := i + 1; Until ComboBox4.Text = Contents[i]; end; contents.Delete(i); //Delete Relevant Preset Name contents.Delete(i); //Delete Preset Digit String //Write StringList back to file. AssignFile(fil,(GetAppData+'\RFA\fhpre.ini')); ReWrite(fil); for i := 0 to Contents.Count -1 do WriteLn(Contents[i]); CloseFile(fil); Contents.Free; end; However if this is run, I get a 105 error when it gets to the WriteLn section. I'm aware that the code isn't great, for example doesn't have checks for presets with same name, but that will come, I want to get the base code working first then can tweak and add extra checks etc. Any help would be appreciated.
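    I/O error 105 is "file not open for output", and the giveaway is the WriteLn call that omits the file variable: it goes to the default Output, which a GUI application never opens. Writing WriteLn(fil, Contents[i]) would fix that line; the shorter sketch below lets TStringList do the whole load/delete/save cycle (a case-insensitive name match is assumed).

        procedure TForm1.Panel23Click(Sender: TObject);
        var
          contents: TStringList;
          i: Integer;
        begin
          contents := TStringList.Create;
          try
            contents.LoadFromFile(GetAppData + '\RFA\fhpre.ini');
            i := contents.IndexOf(ComboBox4.Text);   // line holding the preset name
            if i >= 0 then
            begin
              contents.Delete(i);                    // delete the name
              contents.Delete(i);                    // and the digit string below it
              contents.SaveToFile(GetAppData + '\RFA\fhpre.ini');
            end;
          finally
            contents.Free;
          end;
        end;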

    Read the article

  • MySQL select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing time, query execution time was perfect, but today when transfered the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds and it happens frequently and thus it cant keep up and its hella slow. The table that is used to store the data has approximately 2 milion records and each select looks like this: SELECT * FROM products_loans WHERE KOD IN("X17/Q30-10", "X17/12", "X17/5-24") AND 369.27 BETWEEN CENA_OD AND CENA_DO; 3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it trough a modified select using OR, since I didnt find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referencing post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the show create table, kod and CENA_OD/CENA_DO very indexed via INDEX. CREATE TABLE `products_loans` ( `KOEF_ID` bigint(20) NOT NULL, `KOD` varchar(30) NOT NULL, `AKONTACIA` int(11) NOT NULL, `POCET_SPLATOK` int(11) NOT NULL, `koeficient` decimal(10,2) NOT NULL default '0.00', `CENA_OD` decimal(10,2) default NULL, `CENA_DO` decimal(10,2) default NULL, `PREDAJNA_CENA` decimal(10,2) default NULL, `AKONTACIA_SUMA` decimal(10,2) default NULL, `TYP_VYHODY` varchar(4) default NULL, `stage` smallint(6) NOT NULL default '1', PRIMARY KEY (`KOEF_ID`), KEY `CENA_OD` (`CENA_OD`), KEY `CENA_DO` (`CENA_DO`), KEY `KOD` (`KOD`), KEY `stage` (`stage`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 And also selecting all loan types and later filtering them trough php doesnt work good, since each type has over 50k records and the select takes too much time as well... Any ides about improving the speed are appreciated. Edit: Here is the explain +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ | 1 | SIMPLE | products_loans | range | CENA_OD,CENA_DO,KOD | KOD | 92 | NULL | 190158 | Using where | +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ I have tried the combined index and it improved the performance on the test server from 0.44 sec to 0.06 sec, I cant access the production server from home though, so I will have to try it tomorrow.
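    For reference, the combined index mentioned at the end looks something like the statement below: KOD first, so each IN value becomes one tight range, then the price columns so the BETWEEN can narrow the scan further. The index name is illustrative.

        ALTER TABLE products_loans
          ADD INDEX kod_cena (KOD, CENA_OD, CENA_DO);

        -- the query itself is unchanged:
        SELECT *
        FROM products_loans
        WHERE KOD IN ('X17/Q30-10', 'X17/12', 'X17/5-24')
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;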

    Read the article

  • Efficient file buffering & scanning methods for large files in Python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file in to ALL (overlapping) substrings of size N (bound N, eg 36) while throwing out newline characters. I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) in to every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8 import cStringIO example_file = cStringIO.StringIO("""\ header CAGTcag TFgcACF """) for read in parse(example_file): ... print read ... CAGTCAGTF AGTCAGTFG GTCAGTFGC TCAGTFGCA CAGTFGCAC AGTFGCACF The function that I found had the absolute best performance from the methods I could think of is this: def parse(file): size = 8 # of course in my code this is a function argument file.readline() # skip past the header buffer = '' for line in file: buffer += line.rstrip().upper() while len(buffer) = size: yield buffer[:size] buffer = buffer[1:] This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note, this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
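    The conclusion at the end corresponds to something like the sketch below: read the whole chromosome once, strip the newlines with str.replace, and slide a window over the resulting string. It trades memory (the full sequence is held at once) for the per-line buffer juggling, which is the trade-off the post says paid off.

        def parse(fileobj, size=8):
            fileobj.readline()                            # skip the FASTA header
            seq = fileobj.read().replace('\n', '').upper()
            for i in xrange(len(seq) - size + 1):         # every overlapping window
                yield seq[i:i + size]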

    Read the article

  • How to pass operators as parameters

    - by Rodion Ingles
    I have to load an array of doubles from a file, multiply each element by a value in a table (different values for different elements), do some work on it, invert the multiplication (that is, divide) and then save the data back to file. Currently I implement the multiplication and division process in two separate methods. Now there is some extra work behind the scenes but apart from the specific statements where the multiplication/division occurs, the rest of the code is identical. As you can imagine, with this approach you have to be very careful making any changes. The surrounding code is not trivial, so its either a case of manually editing each method or copying changes from one method to the other and remembering to change the * and / operators. After too many close calls I am fed up of this and would like to make a common function which implements the common logic and two wrapper functions which pass which operator to use as a parameter. My initial approach was to use function pointers: MultiplyData(double data) { TransformData(data, &(operator *)); } DivideData(double data) { TransformData(data, &(operator /)); } TransformData(double data, double (*func)(double op1, double op2)) { /* Do stuff here... */ } However, I can't pass the operators as pointers (is this because it is an operator on a native type?), so I tried to use function objects. Initially I thought that multiplies and divides functors in <functional> would be ideal: MultiplyData(double data) { std::multiplies<double> multFunct; TransformData(data, &multFunct); } DivideData(double data) { std::divides<double> divFunct; TransformData(data, &divFunct); } TransformData(double data, std::binary_function<double, double, double> *funct) { /* Do stuff here... */ } As you can see I was trying to use a base class pointer to pass the functor polymorphically. The problem is that std::binary_function does not declare an operator() member for the child classes to implement. Is there something I am missing, or is the solution to implement my own functor heirarchy (which really seems more trouble than it is worth)?
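    std::binary_function is only a typedef holder with no virtual operator(), so a functor cannot be passed through a pointer to it. A sketch of the first option that does work, a template parameter that accepts any callable and is resolved at compile time (the alternative would be taking a boost::function, or std::tr1::function, of type double(double, double) by value):

        #include <functional>

        template <typename Op>
        void TransformData(double data, Op op)
        {
            double result = op(data, 2.0);   // placeholder for the real work
            // ... use result ...
            (void)result;
        }

        void MultiplyData(double data)
        {
            TransformData(data, std::multiplies<double>());
        }

        void DivideData(double data)
        {
            TransformData(data, std::divides<double>());
        }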

    Read the article
