Search Results

Search found 3534 results on 142 pages for 'sets'.

Page 110 of 142

  • cURL Affects Cookie Retrieval in PHP?

    - by CJ
    I'm not sure if I'm asking this properly. I have two PHP pages located on the same server. The first page sets a cookie with an expiration and the second one checks whether that cookie was set: if it is, it returns "on"; if it isn't, it returns "off". If I just visit the pages directly, "www.example.com/set_cookie.php" and then "www.example.com/is_cookie_set.php", I get an "on" from is_cookie_set.php. Here's the problem: in the set_cookie.php file I have a function called is_set. This function executes the following cURL request and echoes the contents ("on" or "off"). Unfortunately, the contents are always returned as "off". However, if I check the file manually ("www.example.com/is_cookie_set.php") I can see that the cookie was set. Here's the function: <?php function is_set() { $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, 'http://example.com/is_cookie_set.php'); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); $contents = curl_exec($ch); curl_close($ch); echo $contents; } ?> Please note, I'm not using cURL to get or set cookies, only to request a page that checks whether the cookie was set. I've looked into CURLOPT_COOKIEJAR and CURLOPT_COOKIEFILE, but I believe those are for handling cookies via cURL, and I don't want to do that.
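
    A note on why this happens, with a hedged sketch (not from the article): the cURL call is a brand-new HTTP client running on the server, so it never sends the visitor's browser cookies, and is_cookie_set.php therefore always reports "off". One workaround is to forward the incoming cookie by hand; the cookie name my_cookie below is an assumption, since the article doesn't give the real name.

    ```php
    <?php
    // Hedged sketch: forward the browser's cookie on the cURL request so the
    // checking script sees the same cookie the visitor has. 'my_cookie' is a
    // placeholder name. Also note that a cookie set with setcookie() in the
    // current request is not in $_COOKIE until the *next* request.
    function is_set_via_curl() {
        $ch = curl_init('http://example.com/is_cookie_set.php');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        if (isset($_COOKIE['my_cookie'])) {
            // Pass the cookie exactly as the browser would send it.
            curl_setopt($ch, CURLOPT_COOKIE, 'my_cookie=' . urlencode($_COOKIE['my_cookie']));
        }
        $contents = curl_exec($ch);
        curl_close($ch);
        return $contents; // "on" or "off"
    }
    ```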

    Read the article

  • Objective-C underscore property vs self

    - by user1216838
    I was playing around with the standard sample split view that gets created when you select a split view application in Xcode, and after adding a few fields I needed to display them in the detail view. Something interesting happened: in the original sample, the master view sets a "detailItem" property on the detail view and the detail view displays it. - (void)setDetailItem:(id)newDetailItem { if (_detailItem != newDetailItem) { _detailItem = newDetailItem; // Update the view. [self configureView]; } } I understand what that does, so while I was playing around with it I thought it would be the same if, instead of _detailItem, I used self.detailItem, since it's a property of the class. However, when I used self.detailItem != newDetailItem I actually got stuck in a loop where this method is constantly called and I can't do anything else in the simulator. My question is: what's the actual difference between the underscore variable (the ivar?) and the property? I read some posts here saying it seems to be just an Objective-C convention, but it actually made a difference.

    Read the article

  • Is there an alias for 'this' in TypeScript?

    - by Todd
    I've attempted to write a class in TypeScript that has a method defined which acts as an event-handler callback for a jQuery event. class Editor { textarea: JQuery; constructor(public id: string) { this.textarea = $(id); this.textarea.focusin(this.onFocusIn); } onFocusIn(e: JQueryEventObject) { var height = this.textarea.css('height'); // <-- This is not good. } } Within the onFocusIn event handler, TypeScript sees 'this' as being the 'this' of the class. However, jQuery overrides the this reference and sets it to the DOM object associated with the event. One alternative is to define a lambda within the constructor as the event handler, in which case TypeScript creates a sort of closure with a hidden _this alias. class Editor { textarea: JQuery; constructor(public id: string) { this.textarea = $(id); this.textarea.focusin((e) => { var height = this.textarea.css('height'); // <-- This is good. }); } } My question is: is there another way to access the this reference within the method-based event handler using TypeScript, to overcome this jQuery behavior?

    Read the article

  • What C# container is most resource-efficient for existence for only one operation?

    - by ccornet
    I often find myself in a situation where I need to perform an operation on a set of properties. The operation can be anything from checking whether a particular property matches anything in the set to a single iteration of actions. Sometimes the set is dynamically generated when the function is called, some are built with a simple LINQ statement, and other times it is a hard-coded set that will always remain the same. But one constant always exists: the set only exists for one single operation and has no use before or after it. My problem is that I have so many points in my application where this is necessary, but I appear to be very, very inconsistent in how I store these sets. Some of them are arrays, some are lists, and just now I've found a couple of linked lists. Now, none of the operations I'm specifically concerned about care about indices, container size, order, or any other functionality bestowed by the individual container types. I picked resource efficiency because it's a better criterion than flipping coins. I figured that since an array's size is fixed up front and it's a very elementary container, it might be my best choice, but I thought it better to ask around. Alternatively, if there's a choice that is better for this kind of situation for reasons other than resource efficiency, that would be nice to know as well.

    Read the article

  • MySQL - Help me alter this query to apply AND logic instead of OR in searching?

    - by sandeepan-nath
    First, execute these table and data dumps: CREATE TABLE IF NOT EXISTS `Tags` ( `id_tag` int(10) unsigned NOT NULL auto_increment, `tag` varchar(255) default NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18 ; INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (1, 'key1'), (2, 'key2'); CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_tutor` int(10) default NULL, KEY `Tutors_Tag_Relations` (`id_tag`), KEY `id_tutor` (`id_tutor`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES (1, 1), (2, 1); The following query finds all the tutors from the Tutors_Tag_Relations table that have a reference to at least one of the terms "key1" or "key2": SELECT td . * FROM Tutors_Tag_Relations AS td INNER JOIN Tags AS t ON t.id_tag = td.id_tag WHERE t.tag LIKE "%key1%" OR t.tag LIKE "%key2%" Group by td.id_tutor LIMIT 10 Please help me modify this query so that it returns all the tutors from the Tutors_Tag_Relations table that have a reference to both the terms "key1" and "key2" (AND logic instead of OR logic). Please suggest an optimized query that copes with a huge number of records (the query should NOT fetch two separate sets of tutors matching each keyword and then compute the intersection).
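
    One common pattern for this kind of "must match every keyword" requirement is to keep the OR in the WHERE clause, group by tutor, and then use HAVING to demand that each keyword matched at least once, so no intersection of intermediate result sets is needed. A hedged sketch follows (not the accepted answer from the thread), wrapped in PHP only to show how it might be issued; the $link connection variable is assumed.

    ```php
    <?php
    // Hedged sketch: AND semantics via GROUP BY ... HAVING. Each SUM() counts
    // how many of the tutor's tags matched one keyword, so requiring both sums
    // to be > 0 keeps only tutors referencing "key1" AND "key2" in one pass.
    $sql = "
        SELECT td.id_tutor
          FROM Tutors_Tag_Relations AS td
         INNER JOIN Tags AS t ON t.id_tag = td.id_tag
         WHERE t.tag LIKE '%key1%' OR t.tag LIKE '%key2%'
         GROUP BY td.id_tutor
        HAVING SUM(t.tag LIKE '%key1%') > 0
           AND SUM(t.tag LIKE '%key2%') > 0
         LIMIT 10";

    $result = mysqli_query($link, $sql);        // $link: an existing mysqli connection (assumed)
    while ($row = mysqli_fetch_assoc($result)) {
        echo $row['id_tutor'], "\n";
    }
    ```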

    Read the article

  • iPhone: Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer Bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serialized with NSCoding. In testing between a 3G and a 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up. How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send with GKSendDataUnreliable and have the peer send an acknowledgement, so I can resend if I don't get the ack within, say, 100ms? How much faster would it be to create the NSData instance from a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from object creation/deallocation overhead, or is something particularly slow happening? I've heard that, for example, sending four separate pieces of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by bundling pieces of data that don't always belong together into a single packet when they happen to be ready at the same time? Are there any other Bluetooth performance secrets I've missed? Thanks for your help.

    Read the article

  • How do I use cookies to store users' recent site history (PHP)?

    - by ggfan
    I decided to make a "recent views" box that lets users see which links they clicked on before. Whenever they click on a posting, the posting's id gets stored in a cookie and displayed in the recent views box. In my ad.php I have a definerecentview function that stores the posting's id in a cookie (so I can use it later to fetch the posting's information, such as title and price, from the database). How do I create a cookie array for this? EXAMPLE: the user clicks on ad.php?posting_id=200 // this is in ad.php function definerecentview() { $posting_id=$_GET['posting_id']; //this adds 30 days to the current time $Month = 2592000 + time(); $i=1; if (isset($posting_id)){ //lost here for($i=1,$i< ???,$i++){ setcookie("recentviewitem[$i]", $posting_id, $Month); } } } function displayrecentviews() { echo "<div class='recentviews'>"; echo "Recent Views"; if (isset($_COOKIE['recentviewitem'])) { foreach ($_COOKIE['recentviewitem'] as $name => $value) { echo "$name : $value <br />\n"; //right now just shows the posting_id } } echo "</div>"; } How do I use a for loop or foreach loop so that whenever a user clicks on an ad, it adds to the array in the cookie? So it would be like: 1. clicks on ad.php?posting_id=200 --- setcookie("recentviewitem[1]",200,$month); 2. clicks on ad.php?posting_id=201 --- setcookie("recentviewitem[2]",201,$month); 3. clicks on ad.php?posting_id=202 --- setcookie("recentviewitem[3]",202,$month); Then in the displayrecentviews function, I just echo however many cookies were set? I'm just totally lost on writing a loop that sets the cookies. Any help would be appreciated.
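
    A hedged sketch of one way to do it (not from the article): rather than issuing a numbered setcookie() per click, keep the whole list in a single JSON-encoded cookie and rewrite it on every view. The cookie name recentviewitems and the cap of 10 entries are assumptions.

    ```php
    <?php
    // Hedged sketch: one JSON cookie holds the whole "recently viewed" list.
    function definerecentview($posting_id) {
        $recent = isset($_COOKIE['recentviewitems'])
            ? (array) json_decode($_COOKIE['recentviewitems'], true)
            : array();

        // Remove an earlier occurrence of this id, then put it at the front.
        $recent = array_values(array_diff($recent, array($posting_id)));
        array_unshift($recent, $posting_id);
        $recent = array_slice($recent, 0, 10);               // keep at most 10 ids

        setcookie('recentviewitems', json_encode($recent), time() + 2592000, '/');
    }

    function displayrecentviews() {
        $recent = isset($_COOKIE['recentviewitems'])
            ? (array) json_decode($_COOKIE['recentviewitems'], true)
            : array();

        echo "<div class='recentviews'>Recent Views<br />\n";
        foreach ($recent as $i => $posting_id) {
            // Look up title/price from the database by $posting_id here.
            echo ($i + 1) . " : " . htmlspecialchars($posting_id) . "<br />\n";
        }
        echo "</div>";
    }
    ```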

    Read the article

  • Basic user authentication with records in AngularFire

    - by ajkochanowicz
    Having spent literally days trying the various recommended ways to do this, I've landed on what I think is the most simple and promising approach. Thanks also to the kind folks from this SO question: Get the index ID of an item in Firebase AngularFire. Current setup: users can log in with email and social networks, so when they create a record, it saves the userId as a sort of foreign key. Good so far. But I want to create a rule so that twitter2934392 cannot read facebook63203497's records. Off to the security panel to match the IDs on the backend. Unfortunately, the docs are inconsistent with the method from "Is firebase user id unique per provider (facebook, twitter, password)?", which suggests appending the social network to the ID. The docs expect you to create a different rule for each login method's IDs. Why anyone using one login method would want to do that is beyond me. (From: https://www.firebase.com/docs/security/rule-expressions/auth.html) So I'll try to match the concatenation of auth.provider and auth.id against the userId stored on the respective registry item. According to the API, this should be as easy as the following (in my case using $registry instead of $user, of course): { "rules": { ".read": true, ".write": true, "registry": { "$registry": { ".read": "$registry == auth.id" } } } } But that won't work, because (see the first image above) AngularFire sets each record under an index value; in the image above, it's 0. Here's where things get complicated. Also, I can't test anything in the simulator, as I cannot edit {some: 'json'} to even authenticate; the input box rejects any input. My best guess is the following: { "rules": { ".write": true, "registry": { "$registry": { ".read": "data.child('userId').val() == (auth.provider + auth.id)" } } } } which both throws authentication errors and simultaneously grants full read access to all users. I'm losing my mind. What am I supposed to do here?

    Read the article

  • PHP session_write_close() keeps sending a Set-Cookie header

    - by Chiraag Mundhe
    In my framework, I make a number of calls to session_write_close(). Let's assume that a session has already been initiated with the user agent. The following code... for ($i = 0; $i < 3; $i++) { session_start(); session_write_close(); } ...will send the following response headers to the browser: Set-Cookie PHPSESSID=bv4d0n31vj2otb8mjtr59ln322; path=/ PHPSESSID=bv4d0n31vj2otb8mjtr59ln322; path=/ There should be no Set-Cookie header at all because, as I stipulated, the session cookie has already been created on the user's end. But every call to session_write_close() after the first one in the script above results in PHP instructing the browser to set the current session cookie again. This is not breaking my app or anything, but it is annoying. Does anyone have any insight into preventing PHP from re-setting the cookie with each subsequent call to session_write_close()? EDIT: The problem seems to be that with every subsequent call to session_start(), PHP re-sets the session cookie to its own SID and sends a Set-Cookie response header. But why?
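
    As the EDIT notes, each session_start() queues the session cookie again even though the ID hasn't changed. A hedged sketch of one workaround (not from the article; assumes PHP 5.3+ for header_remove()) is to de-duplicate the queued Set-Cookie headers after the last reopen and before any output is sent.

    ```php
    <?php
    // Hedged sketch: collapse duplicate Set-Cookie headers queued by repeated
    // session_start() calls in the same request.
    function dedupe_session_cookie_headers() {
        $cookies = array();
        foreach (headers_list() as $header) {
            if (stripos($header, 'Set-Cookie:') === 0) {
                $cookies[] = $header;
            }
        }
        header_remove('Set-Cookie');                 // drop every queued cookie header...
        foreach (array_unique($cookies) as $cookie) {
            header($cookie, false);                  // ...then re-add each distinct one once
        }
    }
    ```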

    Read the article

  • PHP extend a class method that is called from another class which extends it and calls the method wi

    - by dan.codes
    I am trying to extend a class and override one of its methods. Let's call that class A. My class, class B, overrides one of its protected methods. Class C extends class A, and class D extends class C. Inside class D, the method I am trying to override is also overridden, and that override calls parent::mymethodimoverriding (parent::_thefunction in the example below). That method does not exist in class C, so the call goes up to A's version. That is the method I am overriding in class B, and obviously you can't extend A with B and have those changes show up in D, since B does not fall in line with D's class hierarchy. I might be wrong, so correct me please. So if my class B is called and run, and then class D gets called, D runs the method in A and overwrites what I had set. I am thinking there must be a way to get this to work; I am just missing something. Here is an example. As you can see, in class A there is a call to setTitle and it is set to "Example"; in my class I set it to "NewExample". My class is getting called before class D, so when class D is called it goes back to the parent and sets the title back to "Example". class A{ protected function _thefunction(){ setTitle("Example"); } } class B extends A{ protected function _thefunction(){ // my new code here setTitle("NewExample"); } } class C extends A{ // nothing that matters in here for what I am doing } class D extends C{ protected function _thefunction(){ parent::_thefunction(); // additional code here } }
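
    A hedged sketch of the usual ways out (not from the thread): parent:: is resolved along D's own ancestry (D -> C -> A), so an override living in B can never be reached from D unless B is placed into that chain. The class names mirror the question; the setTitle plumbing and run() method are hypothetical stand-ins added to make the example runnable.

    ```php
    <?php
    // Hedged sketch: make B part of D's ancestry so D's parent:: call lands on
    // the overridden method instead of A's original.

    class A {
        protected $title = '';
        protected function _thefunction() { $this->setTitle('Example'); }
        protected function setTitle($t)   { $this->title = $t; }
        public function run()             { $this->_thefunction(); return $this->title; }
    }

    class B extends A {
        protected function _thefunction() { $this->setTitle('NewExample'); }
    }

    class C extends B {}   // Option 1: re-point C at B (only possible if C is yours to edit)

    class D extends C {
        protected function _thefunction() {
            parent::_thefunction();        // now resolves to B::_thefunction()
            // additional code here
        }
    }

    // Option 2: if C cannot be changed, repeat the override directly in D
    // (or move the new behaviour into A itself).

    $d = new D();
    echo $d->run();                        // prints "NewExample"
    ```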

    Read the article

  • Alert and if changes behaviour dom copying

    - by lowkey
    Hi guys! I tried to make a little old-school Ajax (iframe-JavaScript) script. A bit of MooTools is used for DOM navigation. Description: HTML: one iframe called "buffer", one div called "display". JAVASCRIPT (in short: copy the iframe content into a div on the page): 1) onLoad on the iframe triggers handler(); a counter makes sure it only runs once (otherwise Firefox goes into a feedback loop; what I think happens is: iframe onload -> handler() -> copyBuffer() changes the src of the iframe, and Firefox takes that as an onload again). 2) copyBuffer() sets the src of the iframe, then copies the iframe content into the div, then erases the iframe content. THE CODE: count=0; function handler(){ if (count==0){ copyBuffer(); count++; } } function copyBuffer(){ $('buffer').set('src','http://www.somelink.com/'); if (window.frames['buffer'] && $('display') ) { $('display').innerHTML = window.frames['buffer'].document.body.innerHTML; window.frames['buffer'].document.body.innerHTML=""; } } Problems: QUESTION 1) Nothing happens; the content is not loaded into the div. But it works if I either: A) remove the counter and let the script run in a feedback loop: the content is suddenly copied into the div, but of course there is a feedback loop going on (you can see it keeps loading in the status bar); or B) insert an alert in the copyBuffer function: the content is copied, without the feedback loop. Why does this happen? QUESTION 2) The if wrapped around the copying code I got from a source on the internet, and I am not sure what it does. If I remove it, the code does not work in Safari and Chrome. Many thanks in advance!

    Read the article

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a web page using paging to break up the results. However, my export-to-PDF/CSV functions rely on processing the entire data set at once, and I'm hitting my 256MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I don't have the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like: $form = array(); $table_headers = array(); $table_rows = array(); $data = db_query("a query to get the whole dataset"); while ($row = db_fetch_object($data)) { $table_rows[] = $row->some_attribute; } $form['report'] = array('#value' => theme('table', $table_headers, $table_rows)); return $form; Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal because of this. Thanks
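
    A hedged sketch of one way around the giant array (not from the thread): for the CSV export, stream rows straight to the client as they come off the database, so memory use stays flat regardless of row count. It assumes the Drupal 6 style db_query()/db_fetch_object() used above; the menu callback, table and column names are placeholders.

    ```php
    <?php
    // Hedged sketch: a CSV export menu callback that never buffers the result set.
    function mymodule_report_csv() {
      header('Content-Type: text/csv');
      header('Content-Disposition: attachment; filename="report.csv"');

      $out = fopen('php://output', 'w');
      fputcsv($out, array('Column A', 'Column B'));           // header row (placeholder names)

      $result = db_query("SELECT column_a, column_b FROM {report_table}");
      while ($row = db_fetch_object($result)) {
        fputcsv($out, array($row->column_a, $row->column_b)); // one row in, one row out
      }

      fclose($out);
      exit();                                                 // skip normal page rendering
    }
    ```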

    Read the article

  • Handling large (object) datasets with PHP

    - by Aron Rotteveel
    I am currently working on a project that relies extensively on the EAV model. Both the entities and their attributes are individually represented by a model, sometimes extending other models (or at least base models). This has worked quite well so far, since most areas of the application only rely on filtered sets of entities and not on the entire dataset. Now, however, I need to parse the entire dataset (i.e., all entities and all their attributes) in order to provide a sorting/filtering algorithm based on the attributes. The application currently consists of approximately 2,200 entities, each with approximately 100 attributes. Every entity is represented by a single model (for example Client_Model_Entity) and has a protected property called $_attributes, which is an array of Attribute objects. Each entity object is about 500KB, which results in an incredible load on the server. With 2,000 entities, this means a single task would take 1GB of RAM (and a lot of CPU time), which is unacceptable. Are there any patterns or common approaches to iterating over such large datasets? Paging is not really an option, since everything has to be taken into account in order to provide the sorting algorithm.
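
    A hedged sketch of one common approach (not from the thread): do the sort/filter pass over lightweight rows pulled in fixed-size batches instead of over fully hydrated entity objects, and release each batch before fetching the next. The PDO connection and the EAV table/column names below are hypothetical placeholders.

    ```php
    <?php
    // Hedged sketch: feed the sorting/filtering step from plain arrays, batch by
    // batch, so peak memory is bounded by the batch size rather than the dataset.
    function process_attribute_rows(PDO $pdo, array $attributeCodes, $batchSize = 500)
    {
        $placeholders = implode(',', array_fill(0, count($attributeCodes), '?'));
        $offset = 0;

        do {
            $stmt = $pdo->prepare(
                "SELECT entity_id, attribute_code, value
                   FROM entity_attribute_values
                  WHERE attribute_code IN ($placeholders)
                  ORDER BY entity_id
                  LIMIT $batchSize OFFSET $offset"
            );
            $stmt->execute($attributeCodes);
            $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

            foreach ($rows as $row) {
                // Accumulate only what the sort/filter needs (e.g. one scalar
                // per entity) instead of keeping 500KB objects alive.
            }

            $offset += $batchSize;   // note: an entity's rows may straddle two batches
        } while (count($rows) === $batchSize);
    }
    ```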

    Read the article

  • Python matrices - list index out of range

    - by user1888493
    I am writing a function that takes a matrix as input, such as the one below. It then returns the matrix's "inverse", where all the 1s are changed to 0s and all the 0s are changed to 1s, while keeping the diagonal from top left to bottom right all 0s. An example input: g1 = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]] The function should output this: g1 = [[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]] When I run the program, it raises a list index out of range error. I'm sure this happens because the loops I have set up are trying to access values that do not exist. But how do I allow an input of unknown row and column size? I only know how to do this with a single list, but what about a list of lists? Below is the transforming function, but not the test function that calls it: def inverse_graph(graph): # take in graph # change all zeros to ones and ones to zeros r, c = 0, 0 # row, column equal zero while (graph[r][c] == 0 or graph[r][c] == 1): # while the current row has a value. while (graph[r][c] == 0 or graph[r][c] == 1): # while the current column has a value if (graph[r][c] == 0): graph[r][c] = 1 elif (graph[r][c] == 1): graph[r][c] = 0 c+=1 c=0 r+=1 c=0 r=0 # sets diagonal to zeros while (g1[r][c] == 0 or g1[r][c] == 1): g1[r][c]=0 c+=1 r+=1 return graph

    Read the article

  • Concept: Information Into Memory Location.

    - by Richeve S. Bebedor
    I am having trouble conceptualizing an algorithm that transforms any piece of information or data into an appropriate memory location in whatever data structure I devise. To give you an idea: I have a JPanel instance and another Container instance of some subtype (this is in Java, because I love the language), and I collect those instances into a data structure that is not specific to those instances but applicable to any type of object. My procedure for fetching that data again is to extract the object-specific features (common in category to all objects in that data structure), transform them into an integer memory location (as far as possible), or into whatever type of data suits this transformation, and then access that location directly without further sorting or O(n) searching (which may be preferable, but I wanted to do it my own way). The data structure could be of any type: binary tree, linked list, array, set, and the like. What is important is that I don't need successive comparisons and analysis of data just to locate information in big structures. To give you a technical idea: I have an array DS that contains a JLabel instance with the specific name "HelloWorld", but DS also contains a multitude of other objects. This JLabel object lives at index [124324]; reaching that location with any kind of search algorithm is conceivably slow, given that the structure used here is an array (please disregard the efficiency of the data structure itself, I just want to explain the concept). I want to map "HelloWorld" to 124324 using a conceptually defined function applicable to all data types, so that I can do a direct lookup, DS[extractLocation("HelloWorld")], to get that JLabel instance. I know this may sound crazy, but I want to test my concept of a non-sorting, feature-extracting search algorithm for any data structure. My main problem is how to transform information into the memory location where it was stored.

    Read the article

  • GNU Makefile: multiple outputs from single rule + preventing intermediate files from being deleted

    - by makesaurus
    This is sort of a continuation of an earlier question. The problem is that there is a rule generating multiple outputs from a single input, and the command is time-consuming, so we would prefer to avoid recomputation. Now there is an additional twist: we want to keep files from being deleted as intermediate files, and the rules involve wildcards to allow for parameters. The solution suggested was to set up the following rules: file-a.out: program file.in ./program file.in file-a.out file-b.out file-c.out file-b.out: file-a.out @ file-c.out: file-b.out @ Then, calling make file-c.out creates all of them and we avoid issues with running make in parallel with the -j switch. All fine so far. The problem is the following. Because the above solution sets up a chain in the DAG, make treats it differently: the files file-a.out and file-b.out are considered intermediate files, and by default they get deleted as unnecessary as soon as file-c.out is ready. A way of avoiding that was mentioned somewhere here, and consists of adding file-a.out and file-b.out as prerequisites of the special target .SECONDARY, which keeps them from being deleted. Unfortunately, this does not solve my case, because my rules use wildcard patterns; specifically, my rules look more like this: file-a-%.out: program file.in ./program $* file.in file-a-$*.out file-b-$*.out file-c-$*.out file-b-%.out: file-a-%.out @ file-c-%.out: file-b-%.out @ so that one can pass a parameter that gets included in the file name, for example by running make file-c-12.out. The solution the make documentation suggests is to add these implicit rules' targets as prerequisites of .PRECIOUS, thus keeping the files from being deleted. The solution with .PRECIOUS works, but it also prevents the files from being deleted when a rule fails and the files are incomplete. Is there any other way to make this work?

    Read the article

  • How do you make an added row from QueryAddRow() the first row of the result from a query?

    - by JS
    I am outputting a query but need to add a row that appears as the first row of the result. I am adding the row with QueryAddRow() and setting the values with QuerySetCell(). I can create the row fine and add content to it fine. If I leave off the row-number argument of QuerySetCell(), it all works great, except that the added row appears as the last row of the query when output. However, I need it to be the first row, and when I try to set the row argument in QuerySetCell() it just overwrites the first row returned from my query (i.e. my QueryAddRow() row replaces the first record from my query). What I currently have is setting a variable from recordCount and arranging the output, but there has to be a really simple way to do this that I am just not getting. This code sets the row value to 1 but overwrites the first returned row from the query: <cfquery name="qxLookup" datasource="#application.datasource#"> SELECT xID, xName, execution FROM table </cfquery> <cfset QueryAddRow(qxLookup)/> <cfset QuerySetCell(qxLookup, "xID","0",1)/> <cfset QuerySetCell(qxLookup, "xName","Delete",1)/> <cfset QuerySetCell(qxLookup, "execution", "Select this to delete",1)/> <cfoutput query="qxLookup"> <tr> <td> <a href="##" onclick="javascript:ColdFusion.navigate('xSelect/x.cfm?xNameVar=#url.xNameVar#&xID=#qxLookup.xID#&xName=#URLEncodedFormat(qxLookup.xName)#', '#xNameVar#');ColdFusion.Window.hide('#url.window#')">#qxLookup.xName#</a> </td> <td>#qxLookup.execution#</td> </tr> </cfoutput> </table> Thanks for any help.

    Read the article

  • Finding the maximum in the range

    - by comfreak
    I need code that will automatically: (1) search for a specific word in Excel; (2) note its row or column number (depending on the data arrangement); (3) find the numerical values in the respective row or column containing that word (suppose a[7][0] or a[0][7]); (4) compare all other values of the respective row or column (i.e. a[i][0] or a[0][i]); (5) set that cell to the highest value, but only if it HAS NO FORMULA for its derivation. I know most of the coding, but in a few places I am stuck. Here is the part of my program I have written so far: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Threading; using Microsoft.Office.Interop; using Excel = Microsoft.Office.Interop.Excel; Excel.Application oExcelApp; namespace a{ class b{ static void main(){ try { oExcelApp = (Excel.Application)System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application"); ; if(oExcelApp.ActiveWorkbook != null) {Excel.Workbook xlwkbook = (Excel.Workbook)oExcelApp.ActiveWorkbook; Excel.Worksheet ws = (Excel.Worksheet)xlwkbook.ActiveSheet; Excel.Range rn; rn = ws.Cells.Find("maximum", Type.Missing, Excel.XlFindLookIn.xlValues, Excel.XlLookAt.xlPart,Excel.XlSearchOrder.xlByRows, Excel.XlSearchDirection.xlNext, false, Type.Missing, Type.Missing); }}} Beyond this I only know that I have to use the cell.Value2 and cell.HasFormula members, and I have no more ideas. Can anyone help me with this?

    Read the article

  • CakePHP dropping session between pages

    - by DavidYell
    Hi, I have an application with multiple regions and various incoming links. The premise (well, it worked before) is that in app_controller I break out these incoming links and set them in the session. So I have a huge beforeFilter() in my app_controller which catches them and sets two session variables, Viewing.region and Search.engine; no problem. The problem is that the session does not seem to persist across page requests. So, for example, going to /reviews/write (userReviews/add) should have a session available which was set when the user arrived at the site, although it seems to have vanished! It would appear that unless $this->params is caught explicitly in app_controller and a session variable is written, it does not exist on other pages. So far I have tried swapping between storing the session in 'cake' and 'php', both of which exhibit the same behaviour; I use 'php' as the default. My Session.timeout is '120', Session.checkAgent is false and Security.level is 'low', all of which should give the framework enough leniency to let sessions live as long as possible. I'm a bit stumped as to why the session seems to be either recreated or blanked when a new page is requested. I have commented out the requestAction() calls to make sure they aren't confusing the session request object, which doesn't seem to make a difference. Any help would be great, as I don't want to have to recode the site to pass all the various variables via parameters in the URL; that would suck, and it's worked before, hence switching on $this->Session->read('Viewing.region') in all my code!
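
    A hedged sketch of a defensive pattern that at least rules out app_controller itself blanking the values (not from the thread; a CakePHP 1.2/1.3-era API is assumed, and the 'region'/'engine' parameter names are placeholders): only write each session key when the incoming request actually carries the parameter, and never touch it otherwise.

    ```php
    <?php
    // Hedged sketch: guarded session writes in the AppController.
    class AppController extends Controller {
        var $components = array('Session');

        function beforeFilter() {
            // Set Viewing.region only when this request actually names a region.
            if (!empty($this->params['named']['region'])) {
                $this->Session->write('Viewing.region', $this->params['named']['region']);
            }

            // Record the referring search engine once, then leave it alone.
            if (!empty($this->params['url']['engine']) && !$this->Session->check('Search.engine')) {
                $this->Session->write('Search.engine', $this->params['url']['engine']);
            }
        }
    }
    ```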

    Read the article

  • WPF: issue updating UI from background thread

    - by Ted Shaffer
    My code launches a background thread. The background thread makes changes and wants the UI in the main thread to update. The code that launches the thread and then waits looks something like this: Thread fThread = new Thread(new ThreadStart(PerformSync)); fThread.IsBackground = true; fThread.Start(); fThread.Join(); MessageBox.Show("Synchronization complete"); When the background thread wants to update the UI, it sets a StatusMessage and calls the code below: static StatusMessage _statusMessage; public delegate void AddStatusDelegate(); private void AddStatus() { AddStatusDelegate methodForUIThread = delegate { _statusMessageList.Add(_statusMessage); }; this.Dispatcher.BeginInvoke(methodForUIThread, System.Windows.Threading.DispatcherPriority.Send); } _statusMessageList is an ObservableCollection that is the source for a ListBox. The AddStatus method is called, but the code dispatched to the main thread never executes; that is, _statusMessage is not added to _statusMessageList while the thread is running. However, once it is complete (fThread.Join() returns), all the stacked-up calls on the main thread are executed. But if I display a message box between the calls to fThread.Start() and fThread.Join(), then the status messages are updated properly. What do I need to change so that the code in the main thread executes (the UI updates) while waiting for the thread to terminate? Thanks.

    Read the article

  • Can't get FindNext property of Range class error

    - by Lawrence Knowlton
    I am trying to parse a report in Excel 2007. It is basically a report of accounting charge exceptions. The report has sections, with a header for each type of exception, and certain types of exceptions must be deleted from the report. I'm using a Do While loop to find each header, and if the section needs to be deleted the code deletes it. If nothing needs to be deleted the code works fine, but right after a section is deleted I get an "Unable to get the FindNext property of the Range class" error. Here is my code: Sub merge_All_Section_Headers() ' Description: ' The next portion of the macro will find and format the Transaction Source rows in the file ' by checking each row in column A for the following text: TRANSA. If a cell ' has this text in it, it is selected and a function called merge_text_cells ' is run, which performs concatenation of each Transaction Source header row and ' deletes the text from the rest of the cells with broken-up text. ' lastRow = ActiveSheet.UsedRange.Rows.Count + 1 Range(lastRow & ":" & lastRow).Delete ActiveSheet.PageSetup.Orientation = xlLandscape With ActiveSheet.Range("A:A") Dim searchString As String searchString = "TRANSA" 'The following sets stringFound based on whether or not 'the searchString (TRANSA) is found: Set stringFound = .Find(searchString, LookIn:=xlValues, lookat:=xlPart) If Not stringFound Is Nothing Then firstLocation = stringFound.Address Do stringFound.Select lastFound = stringFound.Address merge_Text_Cells If ((InStr(ActiveCell.Text, "CHARGE FILER") = 0) And _ (InStr(ActiveCell.Text, "CREDIT FILER") = 0) And _ (InStr(ActiveCell.Text, "PA MIDNIGHT FINAL") = 0) And _ (InStr(ActiveCell.Text, "BAD DEBT TURNOVER") = 0)) Then section_Del 'Function that deletes unwanted sections End If Range(lastFound).Select Set stringFound = .FindNext(stringFound) Loop While Not stringFound Is Nothing And stringFound.Address <> firstLocation End If End With Like I said, it works fine when section_Del is commented out. Any ideas on how to remedy this would be greatly appreciated. Thanks!

    Read the article

  • Call 32-bit or 64-bit program from bootloader

    - by user1002358
    There seems to be quite a lot of identical information on the Internet about writing the following three bootloaders: an infinite loop (jmp $), printing a single character, and printing "Hello World". This is fantastic, and I've gone through these three variations with very little trouble. I'd like to write some 32- or 64-bit code in C, compile it, and call that code from the bootloader... basically a bootloader that, for example, sets the computer up to run some simple numerical simulation. I'll start by listing primes, for example, and then maybe add some input/output from the user, perhaps to compute a Fourier transform. I don't know. I haven't found any information on how to do this, but I can already foresee some problems before I even begin. First of all, compiling a C program produces one of several different file formats, depending on the target. For Windows, it's a PE file. For Linux, it's an a.out or ELF file. These files are quite different from each other. In my instance, the target isn't Windows or Linux; it's just whatever I have written in the bootloader. Secondly, where would the actual code reside? The bootloader is exactly 512 bytes, but the program I write in C will certainly compile to something much larger. It will need to sit on my (virtual) hard disk, probably in some sort of file system (which I haven't even defined!), and I will need to load the information from this file into memory before I can even think about executing it. But from my understanding, all this is many, many orders of magnitude more complex than a 12-line "Hello World" bootloader. So my question is: how do I call a large 32- or 64-bit program (written in C/C++) from my 16-bit bootloader?

    Read the article

  • negative values in integer programming model

    - by Lucia
    I'm new to using the GLPK tool, and after writing a model for a certain integer problem and running the solver (glpsol) I get negative values in some constraints that shouldn't be negative at all: No. Row name Activity Lower bound Upper bound 8 act[1] 0 -0 9 act[2] -3 -0 10 act[2] -2 -0 That constraint is defined like this: act{j in J}: sum{i in I} d[i,j] <= y[j]*m; where the sets and variables used are like this: param m, integer, >= 0; param n, integer, >= 0; set I := 1..m; set J := 1..n; var y{j in J}, binary; As the upper bound is negative, I think the problem may be in the y[j]*m part on the right side of the inequality... perhaps something to do with multiplying binaries? Or is the j on that side of the constraint undefined? I don't know... I would be very grateful if someone could help me with this! :) And excuse my bad English. Thanks in advance!

    Read the article

  • Why does my plist file exist even when I've uninstalled an app and wiped the simulator?

    - by just_another_coder
    I am trying to detect the existence of a plist file on application start. When my data class (a descendant of NSObject) is initialized, it loads the plist file. Before loading, it checks whether the file exists, and if it doesn't, it sets the data to nil instead of trying to process the array of data in the plist file. However, every time it starts, it indicates that the file exists, even when I uninstall the app in the Organizer and wipe the simulator directory (iPhone Simulator/3.0/Applications). I have the following static method in a library class named FileIO: +(BOOL) fileExists:(NSString *)fileName { NSString *path = [FileIO getPath:fileName]; // gets the path to the docs directory if ([[NSFileManager defaultManager] fileExistsAtPath:path]) return YES; else return NO; } In my data class: //if( [FileIO fileExists:kFavorites_Filename] ) [FileIO deleteFile:kFavorites_Filename]; if ([FileIO fileExists:kFavorites_Filename]) {NSLog(@"exists"); The "exists" is logged every time unless I un-comment the line that deletes the file. How can the file exist if I've deleted the app? I've deleted the simulator directory and uninstalled on the device, but both say it still exists. What gives?

    Read the article
