Search Results

Search found 600 results on 24 pages for 'splitting'.


  • Out of memory while iterating through rowset

    - by Phliplip
    Hi All, I have a "small" table of 60,400 rows with zipcode data. I want to iterate through them all, update a column value, and then save each row. The following is part of my Zipcodes model, which extends My_Db_Table and has a totalRows function that - you guessed it - returns the total number of rows in the table (60,400 rows):

        public function normalizeTable() {
            $this->getAdapter()->setProfiler(false);
            $totalRows = $this->totalRows();
            $rowsPerQuery = 5;
            for ($i = 0; $i < $totalRows; $i = $i + $rowsPerQuery) {
                $select = $this->select()->limit($i, $rowsPerQuery);
                $rowset = $this->fetchAll($select);
                foreach ($rowset as $row) {
                    $row->{self::$normalCityColumn} = $row->normalize($row->{self::$cityColumn});
                    $row->save();
                }
                unset($rowset);
            }
        }

    My rowClass contains a normalize function (basically a metaphone wrapper doing some extra magic). At first I tried a plain old $this->fetchAll(), but got an out-of-memory error (128MB) right away. Then I tried splitting the rowset into chunks; the only difference is that some rows actually get updated. Any ideas on how I can accomplish this, or should I fall back to ye'olde mysql_query()?
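
    One possible direction - a minimal sketch, assuming the table has an integer primary key named id, and that limit() takes the row count before the offset (which is Zend_Db_Select's signature). Issuing plain UPDATEs through the adapter avoids accumulating heavyweight row objects:

        public function normalizeTable()
        {
            $adapter = $this->getAdapter();
            $adapter->setProfiler(false); // the profiler would otherwise retain every query
            $total = $this->totalRows();
            $chunk = 500;
            for ($offset = 0; $offset < $total; $offset += $chunk) {
                // limit($count, $offset): count first, then offset
                $select = $this->select()->limit($chunk, $offset);
                foreach ($this->fetchAll($select) as $row) {
                    $adapter->update($this->_name, array(
                        self::$normalCityColumn => $row->normalize($row->{self::$cityColumn}),
                    ), array('id = ?' => $row->id));
                }
            }
        }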

    Read the article

  • Enum exceeding the 65535 bytes limit of static initializer... what's best to do?

    - by Daniel Bleisteiner
    I've started a rather large Enum of so-called Descriptors that I wanted to use as a reference list in my model. But now I've hit a compiler/VM limit for the first time, and so I'm looking for the best solution to handle this. Here is my error:

        The code for the static initializer is exceeding the 65535 bytes limit

    It is clear where this comes from - my Enum simply has far too many elements. But I need those elements - there is no way to reduce the set. Initially I planned to use a single Enum because I want to make sure that all elements within the Enum are unique. It is used in a Hibernate persistence context where the reference to the Enum is stored as a String value in the database. So this must be unique! The content of my Enum can be divided into several groups of elements that belong together. But splitting the Enum would remove the uniqueness safety I get at compile time. Or can this be achieved with multiple Enums in some way? My only current idea is to define an interface called Descriptor and code several Enums implementing it. This way I hope to be able to use the Hibernate Enum mapping as if it were a single Enum. But I'm not even sure this will work. And I lose the uniqueness safety. Any ideas how to handle this case?
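
    One way the interface idea could look - a sketch with hypothetical enum names, where a registry restores the lost uniqueness guarantee at class-load time rather than at compile time:

        import java.util.HashMap;
        import java.util.Map;

        interface Descriptor {
            String name(); // every enum constant provides this already
        }

        enum CustomerDescriptor implements Descriptor { ALPHA, BETA }
        enum OrderDescriptor implements Descriptor { GAMMA, DELTA }

        final class DescriptorRegistry {
            private static final Map<String, Descriptor> ALL = new HashMap<String, Descriptor>();

            static void register(Descriptor[] values) {
                for (Descriptor d : values) {
                    if (ALL.put(d.name(), d) != null) {
                        // fails fast if two enums reuse a name
                        throw new IllegalStateException("Duplicate descriptor: " + d.name());
                    }
                }
            }

            static {
                register(CustomerDescriptor.values());
                register(OrderDescriptor.values());
            }
        }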

    Read the article

  • Address string split in JavaScript

    - by Arasoi
    Ok folks, I have bombed around for a few days trying to find a good solution for this one. What I have is two possible address formats:

        28 Main St Somecity, NY 12345-6789

    or

        Main St Somecity, NY 12345-6789

    What I need to do is split both strings down into an array structured as such:

        address[0] = HouseNumber
        address[1] = Street
        address[2] = City
        address[3] = State
        address[4] = ZipCode

    My major problem is how to account for the lack of a house number without having the whole array shift the data up one:

        address[0] = Street
        address[1] = City
        address[2] = State
        address[3] = ZipCode

    [Edit] For those who are wondering, this is what I am doing at the moment (cleaner version):

        place = response.Placemark[0];
        point = new GLatLng(place.Point.coordinates[1], place.Point.coordinates[0]);
        FCmap.setCenter(point, 12);
        var a = place.address.split(',');
        var e = a[2].split(" ");
        var x = a[0].split(" ");
        var hn = x.filter(function(item, index){ return index == 0; });
        var st = x.filter(function(item, index){ return index != 0; });
        var street = '';
        st.each(function(item, index){ street += item + ' '; });
        results[0] = new Hash({
            FullAddie: place.address,
            HouseNum: hn[0],
            Dir: '',
            Street: street,
            City: a[1],
            State: e[1],
            ZipCode: e[2],
            GPoint: new GMarker(point),
            Lat: place.Point.coordinates[1],
            Lng: place.Point.coordinates[0]
        });
        // End Address Splitting
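
    A possible way to keep the slots fixed - a hedged sketch that tests whether the first token is numeric and leaves an empty string in the house-number slot when it is not (it assumes the two formats above, with a one-word city name):

        function parseAddress(full) {
            var parts = full.split(',');
            var left  = parts[0].replace(/^\s+|\s+$/g, '').split(/\s+/);
            var right = parts[1].replace(/^\s+|\s+$/g, '').split(/\s+/);
            var houseNumber = /^\d+$/.test(left[0]) ? left.shift() : '';
            var city = left.pop(); // assumes a one-word city
            return [houseNumber, left.join(' '), city, right[0], right[1]];
        }

        // parseAddress("28 Main St Somecity, NY 12345-6789")
        //   -> ["28", "Main St", "Somecity", "NY", "12345-6789"]
        // parseAddress("Main St Somecity, NY 12345-6789")
        //   -> ["",   "Main St", "Somecity", "NY", "12345-6789"]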

    Read the article

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However, there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of calling a startExport function and then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory, though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!

    Read the article

  • How do I check if a variable has been initialized

    - by Javier Parra
    First of all, I'm fairly new to Java, so sorry if this question is utterly simple. The thing is: I have a String[] s, made by splitting a String, in which every item is a number. I want to turn the items of s into an int[] n. s[0] contains the number of items that n will hold, effectively s.length-1. I'm trying to do this using a foreach loop:

        int[] n;
        for (String num : s) {
            //if(n is not initialized){
                n = new int[(int) num];
                continue;
            }
            n[n.length] = (int) num;
        }

    Now, I realize that I could use something like this:

        int[] n = new int[(int) s[0]];
        for (int i = 0; i < s.length; i++) {
            n[n.length] = (int) s[i];
        }

    But I'm sure that I will be faced with that "if n is not initialized, initialize it" problem in the future.
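
    For what it's worth, a working sketch: an unassigned local can't be tested directly in Java, but a reference initialized to null can, and Integer.parseInt does the String-to-int conversion that a cast cannot (the input string here is a hypothetical example):

        import java.util.Arrays;

        public class ParseDemo {
            public static void main(String[] args) {
                String[] s = "3 10 20 30".split(" "); // hypothetical input: s[0] is the count
                int[] n = null;
                int next = 0;
                for (String num : s) {
                    if (n == null) {                  // the "not initialized yet" check
                        n = new int[Integer.parseInt(num)];
                        continue;
                    }
                    n[next++] = Integer.parseInt(num);
                }
                System.out.println(Arrays.toString(n)); // [10, 20, 30]
            }
        }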

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use ftp as a means of deploying the files to the live server. I'm developing on linux, so I hacked together a bash script that makes a backup of the ftp server's contents, deletes all the files on the ftp, and uploads all the fresh files from the mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc). It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed, and only deploy the modified files (the backup is fine as it is at the moment). I'm using mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and, from the output, carve out the changed files with grep/sed/etc. Two problems: I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here - if I try to parse the output of hg log, the variables will undergo word-splitting and all kinds of transformations. Also, hg log doesn't tell me whether a file was modified/added/deleted; differentiating between modified and deleted files is the least I'd need. So, what would be the correct way to do this? I'm using yafc as an ftp client, in case it's needed, but I'm willing to switch.
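
    A hedged sketch of one approach: hg status (unlike hg log) reports a one-letter state per file between two revisions - M modified, A added, R removed - and read -r sidesteps the word-splitting worry by binding everything after the state letter to a single variable. The upload_file and delete_file helpers are hypothetical stand-ins for the ftp commands:

        #!/bin/bash
        REV1="$1"
        REV2="$2"
        hg status --rev "$REV1" --rev "$REV2" | while read -r state file; do
            case "$state" in
                M|A) upload_file "$file" ;;    # hypothetical ftp upload helper
                R)   delete_file "$file" ;;    # hypothetical ftp delete helper
            esac
        done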

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algo... I do really need that file). Now that it has been computed once, that 800MB of data is read only. I cannot hold it in memory. As of now it is one big huge 800MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read the data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800MB of disk-based data. btw in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
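
    For reference, a minimal sketch of the memory-mapped route (the file name is an assumption): FileChannel.map does not read 800 MB into the heap - it maps the file into virtual address space and the OS pages pieces in on demand, which suits uniform random reads. Absolute getInt(pos) never moves the buffer's position, and a common multi-thread pattern is to give each thread its own duplicate() of the buffer:

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class RandomReads {
            public static void main(String[] args) throws Exception {
                RandomAccessFile raf = new RandomAccessFile("table.dat", "r"); // hypothetical file
                FileChannel ch = raf.getChannel();
                // One mapping covers the whole file (a single map is limited to 2 GB, so 800 MB fits)
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                int value = map.getInt(123456); // absolute 4-byte read at an arbitrary offset
                System.out.println(value);
                raf.close();
            }
        }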

    Read the article

  • How to keep Windows from paging a block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will be running in 64 bit mode using VS2008/C++. We will need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that infer memory mapped files work this way. Is there an expert who can clarify this for me? I'm aware there are other programs that we could adapt to do this, for example splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity. Suggestions?
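
    One direction to investigate - a sketch built on VirtualLock, which pins committed pages into physical RAM. Locked pages must fit inside the process's minimum working set, so that quota is raised first (all sizes below are assumptions):

        #include <windows.h>
        #include <iostream>

        int main() {
            const SIZE_T blobSize = 3 * 1024 * 1024;            // one 3 MB blob
            const SIZE_T wsMin = 1500ULL * 1024 * 1024;         // assumed cache budget
            const SIZE_T wsMax = 1600ULL * 1024 * 1024;

            // Raise the working-set quota so locked pages fit inside it.
            if (!SetProcessWorkingSetSize(GetCurrentProcess(), wsMin, wsMax)) {
                std::cerr << "SetProcessWorkingSetSize failed: " << GetLastError() << "\n";
                return 1;
            }
            void* blob = VirtualAlloc(NULL, blobSize, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (blob == NULL || !VirtualLock(blob, blobSize)) { // pin the pages in RAM
                std::cerr << "lock failed: " << GetLastError() << "\n";
                return 1;
            }
            // ... fill and use the blob; VirtualUnlock + VirtualFree when done ...
            return 0;
        }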

    Read the article

  • How should I manage/declare dependencies between open source C# projects?

    - by munificent
    I've got a game (a roguelike, to be specific) in C# that I'm in the process of cleaning up to open source. One step I'd like to take is splitting it into three distinct pieces:

        1. A simple package of utility classes: things like 2D arrays, vectors, etc.
        2. A terminal UI package that gives you a curses-like display. It depends on 1.
        3. The actual game, which uses 1 and 2.

    Right now, these are all separate projects in the same solution, but I'd kind of like to make them completely separate projects (in the "open source project" sense, not the "visual studio project" use of the term) with their own names and repos. I think, at the very least, #1 is generally useful even if you aren't building a game, and I don't want someone to have to build an entire game just to get some handy functions. What I'm not sure about is how to handle the dependencies if I split up the solution. If someone decides they want to sync the game, how should I ensure they also get 1 and 2? Include the built dependent .dlls in the game's repo? Just document "you need these other projects, and they must be in a path relative to the game like this"? Just leave it all one giant solution and a single repo? Something I'm not thinking of?

    Read the article

  • Can I get an XPathNodeIterator directly from an XPath?

    - by Val
    I hope I'm just missing something obvious. I have a number of repeating nodes in an XML document:

        <root>
          <parent>
            <child/>
            <child/>
          </parent>
        </root>

    I need to examine the contents of each of the <child> elements in turn, so I need an XPathNodeIterator containing the nodeset of all the <child> nodes. If I have an XPath that would select the child nodes, e.g. /root/parent/child, is there any way to feed that directly to a new XPathNodeIterator? Everything I see in the docs and examples indicates I have to first get an XPathNavigator to the <parent>, then Select the child nodes, like:

        XPathNavigator nav = datasource.CreateNavigator().SelectSingleNode( "/root/parent" );
        XPathNodeIterator it = nav.Select( "./child" );
        foreach ( child in it ) { /* do something */ }

    I was hoping to skip the XPathNavigator, and initialize the XPathNodeIterator with the XPath to the child nodes directly, something like:

        XPathNodeIterator it = new XPathNodeIterator("/root/parent/child");
        foreach ( child in it ) { /* do something */ }

    Is that possible? The benefit is not only saving a line of code, but that I can use a single XPath expression, rather than splitting the path to the <child> nodes in two - first to get the parent element, then to select its children.
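
    For what it's worth, a small sketch: XPathNavigator.Select accepts a full XPath expression, so the intermediate SelectSingleNode step can simply be dropped (the file name here is hypothetical):

        using System;
        using System.Xml.XPath;

        class Demo
        {
            static void Main()
            {
                XPathDocument doc = new XPathDocument("data.xml"); // hypothetical source
                XPathNodeIterator it = doc.CreateNavigator().Select("/root/parent/child");
                while (it.MoveNext())
                {
                    Console.WriteLine(it.Current.Name); // do something with each <child>
                }
            }
        }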

    Read the article

  • What is the difference between these two LINQ implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's Reimplementing LINQ to Objects series. In the implementation of the Where article, I found the following snippets, but I don't get what advantage we are getting by splitting the original method into two.

    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it's for eager validation, while deferring the rest of the work. But I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validation is performed eagerly in one and not in the other?

    Conclusion/Solution: I got confused due to my lack of understanding of which functions are treated as iterator-generators. I assumed it was based on the signature of a method, like IEnumerable<T>. But, based on the answers, now I get it: a method is an iterator-generator if it uses yield statements.
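
    A tiny demonstration of the difference - hedged, and assuming the two versions above sit in a static class named Extensions (a name the question doesn't give): because the first version is itself an iterator block, none of its body (null checks included) runs until enumeration starts, whereas the refactored wrapper is an ordinary method whose checks run at call time:

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                // Refactored version: throws ArgumentNullException right here.
                // Original single-method version: this line completes quietly...
                IEnumerable<int> q = Extensions.Where<int>(null, x => x > 0);

                // ...and the exception only surfaces once MoveNext() is called:
                foreach (int x in q) { }
            }
        }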

    Read the article

  • XMLHttpRequest POST Data Size

    - by usurper
    Hi, is there a size limit to an XHR POST request? I am using the POST method for saving text data into MySQL using a PHP script, and the data is cut off. Firebug sends me the following message:

        ... Firebug request size limit has been reached by Firebug. ...

    This is my code for sending the data:

        function makeXHR(recordData)
        {
            xmlhttp = createXHR();
            var body = "q=" + encodeURIComponent(recordData);
            xmlhttp.open("POST", "insertRowData.php", true);
            xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", body.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.onreadystatechange = function()
            {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200)
                {
                    //alert(xmlhttp.responseText);
                    alert("Records were saved successfully!");
                }
            }
            xmlhttp.send(body);
        }

    The only solution I can think of is splitting the data and making a queue of XHR requests, but I don't like it. Is there another way?
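
    In case the chunked fallback turns out to be necessary, a hedged sketch (the offset parameter and chunk size are assumptions; note it is usually the server, e.g. PHP's post_max_size, that truncates a large POST rather than XHR itself, and the Firebug message above is only about Firebug's display limit):

        function sendInChunks(recordData, chunkSize) {
            for (var offset = 0; offset < recordData.length; offset += chunkSize) {
                var xhr = createXHR(); // same helper the question uses
                var body = "q=" + encodeURIComponent(recordData.substr(offset, chunkSize)) +
                           "&offset=" + offset; // lets the server reassemble in order
                xhr.open("POST", "insertRowData.php", true);
                xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
                xhr.send(body);
            }
        }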

    Read the article

  • Split UInt32 (audio frame) into two SInt16s (left and right)?

    - by morgancodes
    Total noob to anything lower-level than Java, diving into iPhone audio, and reeling from all of the casting/pointers/raw memory access. I'm working with some example code which reads a WAV file from disk and returns stereo samples as single UInt32 values. If I understand correctly, this is just a convenient way to return the 32 bits of memory required to create two 16-bit samples. Eventually this data gets written to a buffer, and an audio unit picks it up down the road. Even though the data is written in UInt32-sized chunks, it eventually is interpreted as pairs of 16-bit samples. What I need help with is splitting these UInt32 frames into left and right samples. I'm assuming I'll want to convert each UInt32 into an SInt16, since an audio sample is a signed value. It seems to me that for efficiency's sake, I ought to be able to simply point to the same blocks in memory, and avoid any copying. So, in pseudo-code, it would be something like this:

        UInt32 myStereoFrame = getFramefromFilePlayer;
        SInt16* leftChannel = getFirst16Bits(myStereoFrame);
        SInt16* rightChannel = getSecond16Bits(myStereoFrame);

    Can anyone help me turn my pseudo into real code?
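
    A hedged sketch using standard fixed-width types in place of UInt32/SInt16 (it assumes the left sample occupies the low-order 16 bits, which is worth verifying against the actual sample layout):

        #include <stdint.h>
        #include <string.h>
        #include <stdio.h>

        int main(void) {
            uint32_t frame = 0x7FFF8000u;            /* hypothetical stereo frame */

            /* Arithmetic split: mask the low half, shift down the high half. */
            int16_t left  = (int16_t)(frame & 0xFFFFu);
            int16_t right = (int16_t)(frame >> 16);

            /* "Point at the same memory" variant: copy the 4 bytes into two
               16-bit slots; compilers optimize the memcpy away, and it avoids
               the aliasing pitfalls of casting the pointer directly. */
            int16_t pair[2];
            memcpy(pair, &frame, sizeof pair);

            printf("%d %d / %d %d\n", left, right, pair[0], pair[1]);
            return 0;
        }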

    Read the article

  • A better way to do this code

    - by user295189
    myarray[] = $my[$addintomtarray] //52 elements

        for ($k = 0; $k <= 12; $k++) {
            echo $myarray[$k] . ' ';
        }
        echo '<br>';
        for ($k = 13; $k < 26; $k++) {
            echo $myarray[$k] . ' ';
        }
        echo '<br>';
        for ($k = 26; $k < 39; $k++) {
            echo $myarray[$k] . ' ';
        }
        echo '<br>';
        for ($k = 39; $k <= 51; $k++) {
            echo $myarray[$k] . ' ';
        }

    How do I shorten this array code? All I am doing here is splitting an array of 52 elements into 4 chunks of 13 elements each. In addition I am adding formatting with br and space. Thanks
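
    A possible shortening with array_chunk(), which splits an array into fixed-size groups - a hedged sketch that reproduces the space-and-br formatting described above:

        foreach (array_chunk($myarray, 13) as $chunk) {
            foreach ($chunk as $item) {
                echo $item . ' ';
            }
            echo '<br>';
        }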

    Read the article

  • C++ BigInt multiplication conceptual problem

    - by Kapo
    I'm building a small BigInt library in C++ for use in my programming language. The structure is like the following:

        short digits[ 1000 ];
        int len;

    I have a function that converts a string into a bigint by splitting it up into single chars and putting them into digits. The numbers in digits are all reversed, so the number 123 would look like the following:

        digits[0]=3
        digits[1]=2
        digits[2]=1

    I have already managed to code the adding function, which works perfectly. It works somewhat like this:

        overflow = 0
        for i++ until length of both numbers exceeded:
            add numberA[ i ] to numberB[ i ]
            add overflow to the result
            set overflow to 0
            if the result is bigger than 10:
                subtract 10 from the result
                overflow = 1
            put the result into numberReturn[ i ]

    (Overflow is in this case what happens when I add 1 to 9: subtract 10 from 10, add 1 to overflow, overflow gets added to the next digit.) So think of how two numbers are stored, like those:

        0 | 1 | 2
        ---------
        A 2 - -
        B 0 0 1

    The above represents the digits of the bigints 2 (A) and 100 (B). - means uninitialized digits; they aren't accessed. So adding the above numbers works fine: start at 0, add 2 + 0, go to 1, add 0, go to 2, add 1. But: When I want to do multiplication with the above structure, my program ends up doing the following: start at 0, multiply 2 with 0 (eek), go to 1, ... So it is obvious that, for multiplication, I have to get an order like this:

        0 | 1 | 2
        ---------
        A - - 2
        B 0 0 1

    Then everything would be clear: start at 0, multiply 0 with 0, go to 1, multiply 0 with 0, go to 2, multiply 1 with 2. How can I manage to get digits into the correct form for multiplication? I don't want to do any array moving/flipping - I need performance!
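
    For what it's worth, a hedged sketch of long multiplication that works directly on the reversed (least-significant-digit-first) layout - no flipping needed, because digit i of A times digit j of B always contributes to digit i+j of the result, and a single carry pass afterwards normalizes. std::vector stands in for the fixed short digits[1000] array to keep the sketch self-contained:

        #include <vector>
        #include <cstdio>

        std::vector<short> multiply(const std::vector<short>& a, const std::vector<short>& b) {
            std::vector<int> acc(a.size() + b.size(), 0);
            for (size_t i = 0; i < a.size(); ++i)
                for (size_t j = 0; j < b.size(); ++j)
                    acc[i + j] += a[i] * b[j];          // i-th digit * j-th digit -> (i+j)-th slot
            std::vector<short> result(acc.size());
            int carry = 0;
            for (size_t k = 0; k < acc.size(); ++k) {   // one pass to propagate carries
                int v = acc[k] + carry;
                result[k] = (short)(v % 10);
                carry = v / 10;
            }
            while (result.size() > 1 && result.back() == 0)
                result.pop_back();                      // trim leading zeros
            return result;
        }

        int main() {
            std::vector<short> a; a.push_back(2);                                   // 2
            std::vector<short> b; b.push_back(0); b.push_back(0); b.push_back(1);   // 100
            std::vector<short> r = multiply(a, b);
            for (size_t k = r.size(); k-- > 0; ) std::printf("%d", r[k]);           // prints 200
            std::printf("\n");
            return 0;
        }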

    Read the article

  • Ruby - Writing Hpricot data to a file

    - by John
    Hey everyone, I am currently doing some XML parsing and I've chosen to use Hpricot because of its ease of use and syntax; however, I am running into some problems. I need to write a piece of XML data that I have found out to another file. However, when I do this the format is not preserved. For example, if the content should look like this:

        <dict>
          <key>item1</key><value>12345</value>
          <key>item2</key><value>67890</value>
          <key>item3</key><value>23456</value>
        </dict>

    And assuming that there are many entries like this in the document, I am iterating through the 'dict' items using:

        hpricot_element = Hpricot(xml_document_body)
        f = File.new('some_new_file.xml')
        (hpricot_element/:dict).each { |dict| f.write( dict.to_original_html ) }

    After using the above code, I would expect the output to look exactly like the XML shown above. However, to my surprise, the output of the file looks more like this:

        <dict>\n", "
        <key>item1</key><value>12345</value>\n", "
        <key>item2</key><value>67890</value>\n", "
        <key>item3</key><value>23456</value\n", "
        </dict>

    I've tried splitting at the "\n" characters and writing to the file one line at a time, but that didn't seem to work either, as it did not recognize the "\n" characters. Any help is greatly appreciated. It might be a very simple solution, but I am having trouble finding it. Thanks!

    Read the article

  • How can I split up a component using cfinclude and still use inheritance?

    - by rip747
    Note: this is just a simplified example of what I'm trying to do, to get the idea across. The problem I'm having is that I want to use cfinclude inside cfcomponent so that I can group like methods into separate files for more manageability. The problem I'm running into is when I try to extend another component that also uses cfinclude to manage its methods, as demonstrated below. Note that ComponentA extends ComponentB:

        ComponentA
        ==========
        <cfcomponent output="false" extends="componentb">
            <cfinclude template="componenta/methods.cfm">
        </cfcomponent>

        componenta/methods.cfm
        ======================
        <cffunction name="a"><cfreturn "componenta-a"></cffunction>
        <cffunction name="b"><cfreturn "componenta-b"></cffunction>
        <cffunction name="c"><cfreturn "componenta-c"></cffunction>
        <cffunction name="d"><cfreturn super.a()></cffunction>

        ComponentB
        ==========
        <cfcomponent output="false">
            <cfinclude template="componentb/methods.cfm">
        </cfcomponent>

        componentb/methods.cfm
        ======================
        <cffunction name="a"><cfreturn "componentb-a"></cffunction>
        <cffunction name="b"><cfreturn "componentb-b"></cffunction>
        <cffunction name="c"><cfreturn "componentb-c"></cffunction>

    The issue is that when I try to initialize ComponentA, I get the error: "Routines cannot be declared more than once. The routine a has been declared twice in different templates." The whole reason for this is that when you use cfinclude, it's evaluated at RUN TIME instead of COMPILE TIME. Short of moving the methods into the components themselves and eliminating the use of cfinclude, how can I get around this? Or does someone have a better idea for splitting up large components?

    Read the article

  • How to find a particular string in a paragraph in Android?

    - by user1448108
    In my project the data is stored in HTML format along with image tags. For example, the following data is stored in HTML format and contains 2 to 3 images:

        Mother Teresa as she is commonly known, was born Agnes Gonxha Bojaxhiu. Although born on the 26 August 1910, she considered 27 August, the day she was baptized, to be her "true birthday". "By blood, I am Albanian. By citizenship, an Indian. By faith, I am a Catholic nun. As to my calling, I belong to the world. As to my heart, I belong entirely to the Heart of Jesus." <img src="image1.png" /> Mother Teresa founded the Missionaries of Charity, a Roman Catholic religious congregation, which in 2012 consisted of over 4,500 sisters and is active in 133 countries. <img src="image2.png" /> She was the recipient of numerous honours including the 1979 Nobel Peace Prize. She refused the conventional ceremonial banquet given to laureates, and asked that the $192,000 funds be given to the poor in India. <img src="image3.png" />

    Now, from the above data I need to find how many images the paragraph has, get all the image names along with their extensions, and display them in Android. I tried splitting but it did not work. Please help me with this. Thanks in advance.
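
    One possible approach - a hedged sketch using java.util.regex to pull every src value out of the img tags (the sample string is a stand-in for the stored HTML):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ImgExtractor {
            public static void main(String[] args) {
                String html = "...<img src=\"image1.png\" />...<img src=\"image2.png\" />..."; // sample
                // Match the src attribute inside each <img> tag.
                Pattern p = Pattern.compile("<img[^>]+src=\"([^\"]+)\"", Pattern.CASE_INSENSITIVE);
                Matcher m = p.matcher(html);
                List<String> names = new ArrayList<String>();
                while (m.find()) {
                    names.add(m.group(1)); // e.g. image1.png, name with extension
                }
                System.out.println(names.size() + " images: " + names);
            }
        }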

    Read the article

  • How can I pivot these key+values rows into a table of complete entries?

    - by CodexArcanum
    Maybe I demand too much from SQL, but I feel like this should be possible. I start with a list of key-value pairs, like this:

        '0:First, 1:Second, 2:Third, 3:Fourth'

    etc. I can split this up pretty easily with a two-step parse that gets me a table like:

        EntryNumber  PairNumber  Item
        0            0           0
        1            0           First
        2            1           1
        3            1           Second

    etc. Now, in the simple case of splitting the pairs into a pair of columns, it's fairly easy. I'm interested in the more advanced case where I might have multiple values per entry, like:

        '0:First:Fishing, 1:Second:Camping, 2:Third:Hiking'

    and such. In that generic case, I'd like to find a way to take my 3-column result table and somehow pivot it to have one row per entry and one column per value-part. So I want to turn this:

        EntryNumber  PairNumber  Item
        0            0           0
        1            0           First
        2            0           Fishing
        3            1           1
        4            1           Second
        5            1           Camping

    Into this:

        Entry  [1]  [2]     [3]
        0      0    First   Fishing
        1      1    Second  Camping

    Is that just too much for SQL to handle, or is there a way? Pivots (even tricky dynamic pivots) seem like an answer, but I can't figure out how to get that to work.
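
    A hedged sketch in SQL Server syntax (an assumption, given the mention of dynamic pivots; the table name Pairs is hypothetical): number the items within each entry by their original order, then pivot with MAX(CASE ...):

        WITH numbered AS (
            SELECT PairNumber,
                   Item,
                   ROW_NUMBER() OVER (PARTITION BY PairNumber
                                      ORDER BY EntryNumber) AS pos
            FROM Pairs
        )
        SELECT PairNumber AS Entry,
               MAX(CASE WHEN pos = 1 THEN Item END) AS [1],
               MAX(CASE WHEN pos = 2 THEN Item END) AS [2],
               MAX(CASE WHEN pos = 3 THEN Item END) AS [3]
        FROM numbered
        GROUP BY PairNumber;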

    Read the article

  • Where to start to create an HTML website with 2 tables read from CSV files using anything but PHP

    - by CodingIsAwesome
    I want to design a website which, on loading, displays two tables, each with its respective data from a CSV file. Then every minute the website automatically refreshes. This problem seems so simple! But yet the solution eludes me. All of the files will be contained in one directory, not on a server but on a local machine, such as sitting on the desktop. I understand that if I use JavaScript I have to use ADO, and I'm still trying to work out how to use ASP. I am new to both languages. So far the only restriction is that I can't use PHP. The gist so far, as I can think right now, is:

        1. read the file
        2. place the file into an array by splitting at the commas
        3. write the array into td's ?????
        4. then print all this out into a div ????

    I have googled my heart out and can't seem to find what I'm looking for, or even piece together what I'm looking for. Everything with JavaScript and ADO leads me to dead ends, and I can't find anything on ASP that is helpful. Could someone please write up some sample code as a resource? Or have a better solution?
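
    A hedged sketch in plain JavaScript following the four steps above: fetch a CSV, split it into rows and cells, render a table, and refresh once a minute. File names and element ids are assumptions, and browsers may block XHR on file:// pages, so this generally needs a local web server or relaxed security settings:

        function loadCsvTable(file, targetId) {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", file, true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState !== 4) return;
                var html = "<table>";
                var rows = xhr.responseText.replace(/\s+$/, "").split("\n");
                for (var i = 0; i < rows.length; i++) {
                    // split at the commas, wrap each cell in a td
                    html += "<tr><td>" + rows[i].split(",").join("</td><td>") + "</td></tr>";
                }
                document.getElementById(targetId).innerHTML = html + "</table>";
            };
            xhr.send();
        }

        function refreshTables() {
            loadCsvTable("table1.csv", "table1"); // hypothetical file and div id
            loadCsvTable("table2.csv", "table2"); // hypothetical file and div id
        }

        refreshTables();
        setInterval(refreshTables, 60000); // reload both tables every minute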

    Read the article

  • Creating and parsing huge strings with javascript?

    - by user246114
    Hi, I have a simple piece of data that I'm storing on a server as a plain string. It is kind of ridiculous, but it looks like this:

        name|date|grade|description|name|date|grade|description|repeat for a long time

    This string can be up to 1.4 MB in size. The idea is that it's a bunch of student records, just strung together with a simple pipe delimiter. It's a very poor serialization method. Once this massive string is pushed to the client, it is split along the pipes into student records again, using JavaScript. I've been timing how long it takes to create, and split, these strings on the client side. The times are actually quite good: the slowest run I've seen on a few different machines is 0.2 seconds for 10,000 'student records', which has a final string size of ~1.4 MB. I realize this is quite bizarre; I'm just wondering if there are any inherent problems with creating and splitting such large strings using JavaScript? I don't know how different browsers implement their JavaScript engines. I've tried this on the 'major' browsers, but don't know how this would perform on earlier versions of each. Yeah, looking for any comments on this; this is more for fun than anything else! Thanks

    Read the article

  • data ownership and performance

    - by Ami
    We're designing a new application and we ran into some architectural questions when thinking about data ownership. We broke down the system into components, for example Customer and Order. Each of these components/modules is responsible for a specific business domain; i.e., Customer deals with CRUD of customers and business processes centered around customers (register a new customer, block a customer account, etc.). Each module is the owner of a set of database tables, and only that module may access them. If another module needs data that is owned by a different module, it retrieves it by requesting it from that module. So far so good; the question is how to deal with scenarios such as a report that needs to show all the customers and, for each customer, all his orders. In such a case we need to get all the customers from the Customer module, iterate over them, and for each one get all the data from the Order module. Performance won't be good... Obviously it would be much better to have a stored proc join the customers table and the orders table, but that would also mean direct access to data owned by another module, creating coupling and dependencies that we wish to avoid. This is a simplified example; we're dealing with an enterprise application with a lot of business entities and relationships, and my goal is to keep it clean and as loosely coupled as possible. I foresee many changes to the data scheme in the future, and possibly splitting the system into several completely separate systems. I wish to have a design that would allow this to be done in a relatively easy way. Thanks!

    Read the article

  • Problem with a string's format in C++ while doing TCP communication

    - by james t
    Hi, I am building a simple C++ client. I split the info I get from the server into frames, and pass each frame to a function that processes it. I split the frame into lines using:

        Poco::StringTokenizer tokenizer(frame, "\n");

    I take the first line of the tokenizer, which represents the type of frame:

        StmpCommand command(tokenizer[0]);

    A StmpCommand is an enum with the different types of messages, and the constructor works as follows:

        StmpCommand(std::string command): commandType_() {
            bool x = command == "CONNECTED";
            std::cout << x << std::endl;
            if ("SUBSCRIBE" == command) commandType_ = SUBSCRIBE;
            else if ("UNSUBSCRIBE" == command) commandType_ = UNSUBSCRIBE;
            else if ("SEND" == command) commandType_ = SEND;
            else if ("BEGIN" == command) commandType_ = BEGIN;
            else if ("COMMIT" == command) commandType_ = COMMIT;
            else if ("CONNECT" == command) commandType_ = CONNECT;
            else if ("MESSAGE" == command) commandType_ = MESSAGE;
            else if ("RECEIPT" == command) commandType_ = RECEIPT;
            else if ("CONNECTED" == command) commandType_ = CONNECTED;
            else if ("DISCONNECT" == command) commandType_ = DISCONNECT;
            else if ("ERROR" == command) commandType_ = ERROR;
            else {
                std::cerr << "Error in building StmpCommand object, unknown type - " << command << std::endl;
            }
        }

    The first frame I am trying to process is a CONNECTED frame, so I try to create a StmpCommand with CONNECTED as the constructor's only argument, and for some reason I am getting:

        Error in building StmpCommand object, unknown type - CONNECTED

    I am clearly passing a string containing CONNECTED, but I'm guessing there is something else there that isn't allowing the condition else if ("CONNECTED" == command) to happen.
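
    A likely culprit to check - a hedged sketch: if the server terminates lines with "\r\n", splitting on "\n" alone leaves a trailing "\r" on every line, so the string is actually "CONNECTED\r" and every comparison fails. Trimming trailing whitespace before comparing would confirm it:

        #include <string>

        static std::string trimRight(const std::string& s) {
            std::string::size_type end = s.find_last_not_of(" \t\r\n");
            return end == std::string::npos ? std::string() : s.substr(0, end + 1);
        }

        // usage: StmpCommand command(trimRight(tokenizer[0]));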

    Read the article

  • How do you deal with breaking changes in a Rails migration?

    - by Adam Lassek
    Let's say I'm starting out with this model:

        class Location < ActiveRecord::Base
          attr_accessible :company_name, :location_name
        end

    Now I want to refactor one of the values into an associated model:

        class CreateCompanies < ActiveRecord::Migration
          def self.up
            create_table :companies do |t|
              t.string :name, :null => false
              t.timestamps
            end
            add_column :locations, :company_id, :integer, :null => false
          end

          def self.down
            drop_table :companies
            remove_column :locations, :company_id
          end
        end

        class Location < ActiveRecord::Base
          attr_accessible :location_name
          belongs_to :company
        end

        class Company < ActiveRecord::Base
          has_many :locations
        end

    This all works fine during development, since I'm doing everything a step at a time; but if I try deploying this to my staging environment, I run into trouble. The problem is that since my code has already changed to reflect the migration, it causes the environment to crash when it attempts to run the migration. Has anyone else dealt with this problem? Am I resigned to splitting my deployment up into multiple steps?

    Read the article

  • Joining two mysql_fetch_arrays

    - by John Harbert
    I am trying to piece two queries together. Below is the code I'm using. However, the table is splitting up the data. How can I remedy this? Or what better solutions are there?

        while ($row = mysql_fetch_array($result)) {
            echo "<tr id='centered' >";
            echo "<td class='leftalign'>" . $row['Quarter_Name'] . "</td>";
            echo "<td>" . $row['Quarterly_yield'] . "</td>";
            echo "<td>" . $row['Quarterly_yield'] . "</td>";
            echo "<td>" . $row['Quarterly_yield'] . "</td>";
        }
        while ($row = mysql_fetch_array($result8)) {
            echo "<td>" . $row['Quarterly_yield'] . "</td>";
        }
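
    A hedged sketch of one remedy: because the second loop only runs after the first has emitted all of its rows (and no <tr> is ever closed), the extra cells end up detached from their rows. Reading both result sets into arrays first makes it possible to emit one combined, properly closed row per index (the mysql_* calls mirror the question; they are deprecated in modern PHP):

        $rows = array();
        while ($row = mysql_fetch_array($result))  { $rows[]  = $row; }
        $rows8 = array();
        while ($row = mysql_fetch_array($result8)) { $rows8[] = $row; }

        foreach ($rows as $i => $row) {
            echo "<tr id='centered'>";
            echo "<td class='leftalign'>" . $row['Quarter_Name'] . "</td>";
            echo "<td>" . $row['Quarterly_yield'] . "</td>";
            if (isset($rows8[$i])) {
                echo "<td>" . $rows8[$i]['Quarterly_yield'] . "</td>";
            }
            echo "</tr>";
        }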

    Read the article
