Search Results

Search found 2571 results on 103 pages for 'alex rose'.

Page 101/103 | < Previous Page | 97 98 99 100 101 102 103  | Next Page >

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to rewrite the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (we switched to Helvetica font to look more ‘Bauhaus’ ). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro  and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible.   The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus.  The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer.  I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, are of one mind on methodology and coding standards, and they already have a solution, then what is wrong with considering  a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays.   If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again.  There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’,  where one hires a malicious hacker in some wild eastern country to hack into one’s own  source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? 
    Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

  • Top-Rated JavaScript Blogs

    - by Andreas Grech
    I am currently trying to find some blogs that talk (almost solely) about the JavaScript language, and this is due to the fact that most of the time, bloggers with real-life experience at work or at home development can explain certain quirks and hidden features more clearly and concisely than most 'Official Language Specifications'. Below find a list of blogs that are JavaScript based (will update the list as more answers flow in): DHTML Kitchen, by Garrett Smith Robert's Talk, by Robert Nyman EJohn, by John Resig (of jQuery) Crockford's JavaScript Page, by Douglas Crockford Dean.edwards.name, by Dean Edwards Ajaxian, by various (@Martin) The JavaScript Weblog, by various SitePoint's JavaScript and CSS Page, by various AjaxBlog, by various Eric Lippert's Blog, by Eric Lippert (talks about JScript and JScript.Net) Web Bug Track, by various (@scunliffe) The Strange Zen Of JavaScript, by Scott Andrew Alex Russell (of Dojo) (@Eran Galperin) Ariel Flesler (@Eran Galperin) Nihilogic, by Jacob Seidelin (@llimllib) Peter's Blog, by Peter Michaux (@Borgar) Flagrant Badassery, by Steve Levithan (@Borgar) ./with Imagination, by Dustin Diaz (@Borgar) HedgerWow (@Borgar) Dreaming in Javascript, by Nosredna spudly.shuoink.com, by Stephen Sorensen Yahoo! User Interface Blog, by various (@Borgar) remy sharp's b:log, by Remy Sharp (@Borgar) JScript Blog, by the JScript Team (@Borgar) Dmitry Baranovskiy’s Web Log, by Dmitry Baranovskiy James Padolsey's Blog (@Kenny Eliasson) Perfection Kills; Exploring JavaScript by example, by Juriy Zaytsev DailyJS (@Ric) NCZOnline (@Kenny Eliasson), by Nicholas C. Zakas Which top-rated blogs am I currently missing from the above list, that you think should be imperative for any JavaScript developer to read (and follow) concurrently?

    Read the article

  • How do I break down an NSTimeInterval into year, months, days, hours, minutes and seconds on iPhone?

    - by willc2
    I have a time interval that spans years and I want all the time components from year down to seconds. My first thought is to integer divide the time interval by seconds in a year, subtract that from a running total of seconds, divide that by seconds in a month, subtract that from the running total and so on. That just seems convoluted and I've read that whenever you are doing something that looks convoluted, there is probably a built-in method. Is there? I integrated Alex's 2nd method into my code. It's in a method called by a UIDatePicker in my interface. NSDate *now = [NSDate date]; NSDate *then = self.datePicker.date; NSTimeInterval howLong = [now timeIntervalSinceDate:then]; NSDate *date = [NSDate dateWithTimeIntervalSince1970:howLong]; NSString *dateStr = [date description]; const char *dateStrPtr = [dateStr UTF8String]; int year, month, day, hour, minute, sec; sscanf(dateStrPtr, "%d-%d-%d %d:%d:%d", &year, &month, &day, &hour, &minute, &sec); year -= 1970; NSLog(@"%d years\n%d months\n%d days\n%d hours\n%d minutes\n%d seconds", year, month, day, hour, minute, sec); When I set the date picker to a date 1 year and 1 day in the past, I get: 1 years 1 months 1 days 16 hours 0 minutes 20 seconds which is 1 month and 16 hours off. If I set the date picker to 1 day in the past, I am off by the same amount. Update: I have an app that calculates your age in years, given your birthday (set from a UIDatePicker), yet it was often off. This proves there was an inaccuracy, but I can't figure out where it comes from, can you?
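
    A minimal sketch (not from the original post) of the built-in route the question is asking about: NSCalendar can split the gap between two dates into calendar components directly, which avoids the fixed-length-month error of parsing an epoch-based date string. It assumes the same now/then dates as above, with then earlier than now.

        // Sketch: let the calendar do the breakdown instead of parsing a date string.
        NSUInteger units = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit |
                           NSHourCalendarUnit | NSMinuteCalendarUnit | NSSecondCalendarUnit;
        NSDateComponents *c = [[NSCalendar currentCalendar] components:units
                                                              fromDate:then
                                                                toDate:now
                                                               options:0];
        NSLog(@"%ld years %ld months %ld days %ld:%ld:%ld",
              (long)[c year], (long)[c month], (long)[c day],
              (long)[c hour], (long)[c minute], (long)[c second]);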

    Read the article

  • How to prevent buffer overflow in C/C++?

    - by alexpov
    Hello, I am using the following code to redirect stdout to a pipe, then read all the data from the pipe to a buffer. I have two problems. First problem: when I send a string (after redirection) bigger than the pipe's BUFF_SIZE, the program stops responding (deadlock or something). Second problem: when I try to read from the pipe before something was sent to stdout, I get the same response: the program stops responding - the _read command gets stuck... The issue is that I don't know the amount of data that will be sent to the pipe after the redirection. The first problem I don't know how to handle, and I'll be glad for help. The second problem I solved by a simple workaround: right after the redirection I print a space character to stdout, but I guess that this solution is not the correct one... #include <fcntl.h> #include <io.h> #include <iostream> #define READ 0 #define WRITE 1 #define BUFF_SIZE 5 using namespace std; int main() { int stdout_pipe[2]; int saved_stdout; saved_stdout = _dup(_fileno(stdout)); // save stdout if(_pipe(stdout_pipe,BUFF_SIZE, O_TEXT) != 0 ) // make a pipe { exit(1); } fflush( stdout ); if(_dup2(stdout_pipe[1], _fileno(stdout)) != 0 ) //redirect stdout to the pipe { exit(1); } ios::sync_with_stdio(); setvbuf( stdout, NULL, _IONBF, 0 ); //anything sent to stdout goes now to the pipe //printf(" ");//workaround for the second problem printf("123456");//first problem char buffer[BUFF_SIZE] = {0}; int nOutRead = 0; nOutRead = _read(stdout_pipe[READ], buffer, BUFF_SIZE); //second problem buffer[nOutRead] = '\0'; // reconnect stdout if (_dup2(saved_stdout, _fileno(stdout)) != 0 ) { exit(1); } ios::sync_with_stdio(); printf("buffer: %s\n", buffer); } Thanks, Alex
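
    One possible direction (a sketch, not a verified fix): the writer blocks as soon as the pipe's buffer is full, so the reading has to happen concurrently. Draining the read end on a worker thread removes both the deadlock and the need to know the amount of data in advance; apart from that, the names below follow the excerpt.

        // Sketch: drain the pipe on a second thread so printf can never fill it up.
        #include <io.h>
        #include <string>
        #include <thread>

        std::string drainPipe(int readFd)
        {
            std::string out;
            char chunk[512];
            int n;
            while ((n = _read(readFd, chunk, sizeof(chunk))) > 0) // 0 means the write end was closed
                out.append(chunk, n);
            return out;
        }

        // Usage idea: start std::thread t([&]{ captured = drainPipe(stdout_pipe[READ]); })
        // before printing, then restore stdout with _dup2, _close(stdout_pipe[WRITE]) so the
        // reader sees end-of-data, and finally t.join().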

    Read the article

  • forkpty - socket

    - by Alexxx
    Hi, I'm trying to develop a simple "telnet/server" daemon which has to run a program on a new socket connection. This part is working fine. But I have to associate my new process with a pty, because this process has some terminal capabilities (like a readline). The code I've developed is (where socketfd is the new socket file descriptor for the new input connection): int masterfd, pid; const char *prgName = "..."; char *arguments[10] = ....; if ((pid = forkpty(&masterfd, NULL, NULL, NULL)) < 0) perror("FORK"); else if (pid) return pid; else { close(STDOUT_FILENO); dup2(socketfd, STDOUT_FILENO); close(STDIN_FILENO); dup2(socketfd, STDIN_FILENO); close(STDERR_FILENO); dup2(socketfd, STDERR_FILENO); if (execvp(prgName, arguments) < 0) { perror("execvp"); exit(2); } } With that code, the stdin / stdout / stderr file descriptors of my "prgName" are associated with the socket (when looking with ls -la /proc/PID/fd), and so the terminal capabilities of this process don't work. A test with a connection via ssh/sshd on the remote device, executing prgName "locally" (under the ssh connection), shows that the stdin/stdout/stderr fds of this process "prgName" are associated with a pty (and so the terminal capabilities of this process are working fine). What am I doing wrong? How do I associate my socketfd with the pty (created by forkpty)? Thanks, Alex
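
    A sketch of one common fix (my own suggestion, not the poster's code): forkpty() already wires the child's stdin/stdout/stderr to the pty slave, so the dup2() calls onto the socket undo exactly what the pty was created for. Instead, leave the child untouched and let the parent shuttle bytes between the socket and masterfd:

        #include <sys/select.h>
        #include <unistd.h>

        /* Sketch: parent-side relay between the client socket and the pty master.
           Error handling and partial writes are left out for brevity. */
        static void relay(int masterfd, int socketfd)
        {
            char buf[1024];
            ssize_t n;
            for (;;) {
                fd_set fds;
                FD_ZERO(&fds);
                FD_SET(masterfd, &fds);
                FD_SET(socketfd, &fds);
                int maxfd = masterfd > socketfd ? masterfd : socketfd;
                if (select(maxfd + 1, &fds, NULL, NULL, NULL) < 0)
                    break;
                if (FD_ISSET(socketfd, &fds)) {              /* network -> program */
                    if ((n = read(socketfd, buf, sizeof buf)) <= 0) break;
                    write(masterfd, buf, n);
                }
                if (FD_ISSET(masterfd, &fds)) {              /* program -> network */
                    if ((n = read(masterfd, buf, sizeof buf)) <= 0) break;
                    write(socketfd, buf, n);
                }
            }
        }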

    Read the article

  • Help with editing data in entity framework.

    - by Levelbit
    The title of this question is simple because there is no easy explanation for what I'm trying to ask you. I hope you'll understand in the end :). I have two tables in my database: Company and Location (relationship: one to many) and corresponding entity sets. In my WPF application I have a datagrid which I want to fill with locations, and I want to be able to edit every row in a separate window as some form of details view (so I don't want to edit my data in the datagrid). I did this by accessing the Location entity from the selected row, creating a new Location entity, and then copying properties from the original entity to the newly created one. Something like cloning the object. After editing, if I press OK the changed data is copied back to the original object, and if I press Cancel nothing happens. Of course, you are probably thinking I could use the NoTracking option and the AttachAsModified method, as was mentioned as a solution in some earlier questions (see: http://stackoverflow.com/questions/803022/changing-entities-in-the-entityframework) by Alex James, but let's say I had some problems with that and I have my own reasons for doing this. Finally, because the navigation property (Company) of my location entity is assigned to the newly created location object (during cloning), for some reason an additional object, a copy of the object I want to edit from the datagrid, is created in the object context without my will (similar problem: http://blogs.msdn.com/alexj/archive/2009/12/02/tip-47-how-fix-up-can-make-it-hard-to-change-relationships.aspx). So, when I do ObjectContext.SaveChanges it inserts an additional row of data into my database table Location, the same as the one I wanted to edit. I read about something like this, but I don't quite understand why that is and how to block or override it. I hope I was as clear as I could be. Please suggest solutions or some other ideas.

    Read the article

  • How to parse text fragments located outside tags (inbetween tags) by simplehtmldom?

    - by moogeek
    Hello! I'm using simplehtmldom to parse HTML and I'm stuck on parsing plain text located outside of any tag (but between two different tags): <div class="text_small"> <b>?dress:</b> 7 Hange Road<br> <b>Phone:</b> 415641587484<br> <b>Contact:</b> Alex<br> <b>Meeting Time:</b> 12:00-13:00<br> </div> Is it possible to get these values of Adress, Phone, Contact, Meeting Time? I wonder if there is an opportunity to pass CSS selectors into nextSibling/previousSibling functions... foreach($html->find('div.text_small') as $div_descr) { foreach($div_descr->find('b') as $b) { if ($b->innertext=="?dress:") {//someaction } if ($b->innertext=="Phone:") { //someaction } if ($b->innertext=="Contact:") { //someaction } if ($b->innertext=="Meeting Time:") { //someaction } } } What should I use instead of "someaction"? upd. Yes, I don't have access to edit the target page. Otherwise, would it be worth it? :)
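
    One possible direction (an untested sketch of mine; it assumes simplehtmldom's next_sibling() method returns the text node sitting between </b> and the following <br>):

        // Sketch: read the text node that follows each <b> label.
        $fields = array();
        foreach ($html->find('div.text_small b') as $b) {
            $label = rtrim(trim($b->plaintext), ':');          // "Phone", "Contact", ...
            $value = trim($b->next_sibling()->plaintext);      // "415641587484", "Alex", ...
            $fields[$label] = $value;
        }
        print_r($fields);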

    Read the article

  • How do I store complex objects in javascript?

    - by Colen
    Hello, I need to be able to store objects in javascript, and access them very quickly. For example, I have a list of vehicles, defined like so: { "name": "Jim's Ford Focus", "color": "white", isDamaged: true, wheels: 4 } { "name": "Bob's Suzuki Swift", "color": "green", isDamaged: false, wheels: 4 } { "name": "Alex's Harley Davidson", "color": "black", isDamaged: false, wheels: 2 } There will potentially be hundreds of these vehicle entries, which might be accessed thousands of times. I need to be able to access them as fast as possible, ideally in some useful way. For example, I could store the objects in an array. Then I could simply say vehicles[0] to get the Ford Focus entry, vehicles[1] to get the Suzuki Swift entry, etc. However, how do I know which entry is the Ford Focus? I want to simply ask "find me Jim's Ford Focus" and have the object returned to me, as fast as possible. For example, in another language, I might use a hash table, indexed by name. How can I do this in javascript? Or, is there a better way? Thanks.
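
    A sketch of the hash-table idea in plain JavaScript (assuming the entries already sit in an array called vehicles): object properties give exactly the by-name lookup the question asks for.

        // Build the index once; afterwards every lookup by name is a single property access.
        var vehiclesByName = {};
        for (var i = 0; i < vehicles.length; i++) {
            vehiclesByName[vehicles[i].name] = vehicles[i];
        }

        var focus = vehiclesByName["Jim's Ford Focus"];   // -> the Ford Focus entry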

    Read the article

  • How can I create a dynamic LINQ query in C# with possible multiple group by clauses?

    - by FordPrefect141
    I have been a programmer for some years now but I am a newcomer to LINQ and C#, so forgive me if my question sounds particularly stupid. I hope someone may be able to point me in the right direction. My task is to come up with the ability to form a dynamic multiple-group-by LINQ query within a C# script, using a generic list as a source. For example, say I have a list containing multiple items with the following structure: FieldChar1 - character FieldChar2 - character FieldChar3 - character FieldNum1 - numeric FieldNum2 - numeric In a nutshell, I want to be able to create a LINQ query that will sum FieldNum1 and FieldNum2 grouped by any one, two or all three of the FieldChar fields, decided at runtime depending on the user's requirements, as well as selecting the FieldChar fields in the same query. I have the dynamic.cs in my project, which includes a GroupByMany extension method, but I have to admit I am really not sure how to put these to use. I am able to get the desired results if I use a query with hard-wired group by requests, but not dynamically. Apologies for any erroneous nomenclature; I am new to this language, but any advice would be most welcome. Many thanks, Alex
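
    For illustration only, a different route than dynamic.cs (a reflection-based sketch of mine; Item, items and the property names are placeholders, and it assumes using System.Linq): build a composite string key from whichever FieldChar columns were chosen at runtime, then sum inside each group.

        // Sketch: group by a runtime-chosen set of properties via a composite key.
        var chosen = new[] { "FieldChar1", "FieldChar3" };        // decided at runtime
        var props  = typeof(Item).GetProperties()
                                 .Where(p => chosen.Contains(p.Name))
                                 .ToArray();

        var totals = items
            .GroupBy(i => string.Join("|", props.Select(p => (string)p.GetValue(i, null)).ToArray()))
            .Select(g => new
            {
                Key     = g.Key,                       // e.g. "A|X"
                SumNum1 = g.Sum(i => i.FieldNum1),
                SumNum2 = g.Sum(i => i.FieldNum2)
            });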

    Read the article

  • ExecutorService - scaling

    - by Stanciu Alexandru-Marian
    I am trying to write a program in Java using ExecutorService and its function invokeAll. My question is: does the invokeAll function solve the tasks simultaneously? I mean, if I have two processors, will there be two workers at the same time? Because I can't make it scale correctly. It takes the same time to complete the problem whether I give newFixedThreadPool(2) or 1. List<Future<PartialSolution>> list = new ArrayList<Future<PartialSolution>>(); Collection<Callable<PartialSolution>> tasks = new ArrayList<Callable<PartialSolution>>(); for(PartialSolution ps : wp) { tasks.add(new Map(ps, keyWords)); } list = executor.invokeAll(tasks); Map is a class that implements Callable and wp is a vector of PartialSolution, a class that holds some information at different times. Why doesn't it scale? What could be the problem? Thank you, Alex
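
    invokeAll does run the callables on the pool's worker threads, so two threads should roughly halve the wall-clock time of CPU-bound work on two cores; when it does not, the usual culprit is shared state (a common lock, I/O, or synchronized collections) inside the tasks. A self-contained sketch to verify the scaling on a given machine (my own demo, not the poster's Map class):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class InvokeAllScaling {
            public static void main(String[] args) throws Exception {
                for (int threads = 1; threads <= 2; threads++) {
                    ExecutorService pool = Executors.newFixedThreadPool(threads);
                    List<Callable<Long>> tasks = new ArrayList<Callable<Long>>();
                    for (int t = 0; t < 4; t++) {
                        tasks.add(new Callable<Long>() {
                            public Long call() {              // pure CPU work, no shared state
                                long sum = 0;
                                for (long i = 0; i < 300000000L; i++) sum += i;
                                return sum;
                            }
                        });
                    }
                    long start = System.nanoTime();
                    pool.invokeAll(tasks);                    // blocks until every task is done
                    System.out.println(threads + " thread(s): "
                            + (System.nanoTime() - start) / 1000000 + " ms");
                    pool.shutdown();
                }
            }
        }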

    Read the article

  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages. The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. Then, if the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated and served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (therefore avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage. However, one issue that I am having is that I can't figure out how to programmatically check when the version of code uploaded to the app engine was updated. I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe that it is not the final solution since I really only need to regenerate HTML when the code is updated (as opposed to every time the memcache expires). Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex
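
    If the app runs on the Python runtime, here is a hedged sketch of the "was the code updated?" check: App Engine exposes CURRENT_VERSION_ID in the environment, and its minor part is commonly reported to change on every deployment, so it can double as the cache stamp (treat that as an assumption and verify it on your SDK).

        # Sketch: cache generated HTML in the datastore, keyed by page and stamped
        # with the deployed code version; regenerate only when the version changes.
        import os
        from google.appengine.ext import db

        CODE_VERSION = os.environ.get('CURRENT_VERSION_ID', 'dev')

        class CachedPage(db.Model):
            html = db.TextProperty()
            code_version = db.StringProperty()

        def cached_html(page_id, generate_html):
            page = CachedPage.get_by_key_name(page_id)
            if page is None or page.code_version != CODE_VERSION:
                page = CachedPage(key_name=page_id,
                                  html=generate_html(),
                                  code_version=CODE_VERSION)
                page.put()          # pay the generation CPU cost once per deployment
            return page.html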

    Read the article

  • sorting names in a linked list

    - by sil3nt
    Hi there, I'm trying to sort names into alphabetical order inside a linked list but am getting a run time error. what have I done wrong here? #include <iostream> #include <string> using namespace std; struct node{ string name; node *next; }; node *A; void addnode(node *&listpointer,string newname){ node *temp; temp = new node; if (listpointer == NULL){ temp->name = newname; temp->next = listpointer; listpointer = temp; }else{ node *add; add = new node; while (true){ if(listpointer->name > newname){ add->name = newname; add->next = listpointer->next; break; } listpointer = listpointer->next; } } } int main(){ A = NULL; string name1 = "bob"; string name2 = "tod"; string name3 = "thomas"; string name4 = "kate"; string name5 = "alex"; string name6 = "jimmy"; addnode(A,name1); addnode(A,name2); addnode(A,name3); addnode(A,name4); addnode(A,name5); addnode(A,name6); while(true){ if(A == NULL){break;} cout<< "name is: " << A->name << endl; A = A->next; } return 0; }
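
    For comparison, a sketch of a sorted insert that avoids the two traps in the code above (on the else branch the new node is never linked in, and listpointer itself is advanced, so the head of the list is lost). It reuses the excerpt's node struct; note the printing loop in main has the same head-losing problem and should walk a separate cursor instead of A.

        // Sketch: walk a pointer-to-pointer so head insertion and middle insertion
        // are the same case, then splice the new node in.
        void addnode(node *&listpointer, const string &newname) {
            node *temp = new node;
            temp->name = newname;

            node **link = &listpointer;
            while (*link != NULL && (*link)->name < newname)
                link = &(*link)->next;

            temp->next = *link;     // whatever used to hang here now follows the new node
            *link = temp;
        }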

    Read the article

  • tipfy for Google App Engine: Is it stable? Can auth/session components of tipfy be used with webapp?

    - by cv12
    I am building a web application on Google App Engine that requires users to register with the application and subsequently authenticate with it and maintain sessions. I don't want to force users to have Google accounts. Also, the target audience for the application is the average non-geek, so I'm not very keen on using OpenID or OAuth. I need something simple like: User registers with an e-mail and password, and then can log back in with those credentials. I understand that this approach does not provide the security benefits of Google or OpenID authentication, but I am prepared to trade foolproof security for end-user convenience and hassle-free experience. I explored Django, but decided that consecutive deprecations from appengine-helper to app-engine-patch to django-nonrel may signal that path may be a bit risky in the long-term. I'd like to use a code base that is likely to be maintained consistently. I also explored standalone session/auth packages like gaeutilities and suas. GAEUtilities looked a bit immature (e.g., the code wasn't pythonic in places, in my opinion) and SUAS did not give me a lot of comfort with the cookie-only sessions. I could be wrong with my assessment of these two, so I would appreciate input on those (or others that may serve my objective). Finally, I recently came across tipfy. It appears to be based on Werkzeug and Alex Martelli spoke highly of it here on stackoverflow. I have two primary questions related to tipfy: As a framework, is it as mature as webapp? Is it stable and likely to be maintained for some time? Since my primary interest is the auth/session components, can those components of the tipfy framework be used with webapp, independent of the broader tipfy framework? If yes, I would appreciate a few pointers to how I could go about doing that.

    Read the article

  • JavaScript function, which reads connections between objects

    - by Nikita Sumeiko
    I have a JavaScript literal: var members = { "mother": { "name" : "Mary", "age" : "48", "connection": { "brother" : "sun" } }, "father": { "name" : "Bill", "age" : "50" }, "brother": { "name" : "Alex", "age" : "28" } } Then I have a function, which should read connections from the literal above. It looks like this: function findRelations(members){ var wires = new Array(); var count = 0; for (n = 0; n < members.length; n++){ alert(members.length); // this alert is undefined if (members[n].connection){ for (i = 0; i < members[n].connection[0].length; i++){ var mw = new Array(); var destination = 0; for (m = 0; m < members.length; m ++){ if (members[m] == members[n].connection[0]){ destination = m; mw = [n, destination]; wires [count] = mw; count++; } } } } } return wires; } However, when I run this function, I get nothing, and the first alert, which is placed inside the function, shows 'undefined'. findRelations(members); alert("Found " + wires.length + " connections"); I guess that's because of the JavaScript literal. Could you suggest how to change the function, or perhaps change the literal to a JSON array, to get it to work? And, at the end, to get the 'm' and 'n' values as numbers.
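
    The alert shows undefined because members is a plain object, not an array, so it has no length. A sketch that walks the keys instead (it returns the member names rather than numeric positions, since plain-object key order is not something to rely on):

        function findRelations(members) {
            var wires = [];
            for (var person in members) {                       // "mother", "father", ...
                var connection = members[person].connection;
                if (!connection) continue;
                for (var relation in connection) {              // e.g. "brother"
                    if (members[relation]) {
                        wires.push([person, relation]);         // -> [["mother", "brother"]]
                    }
                }
            }
            return wires;
        }

        alert("Found " + findRelations(members).length + " connections");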

    Read the article

  • I need to speed up a function. Should I use cython, ctypes, or something else?

    - by Peter Stewart
    I'm having a lot of fun learning Python by writing a genetic programming type of application. I've had some great advice from Torsten Marek, Paul Hankin and Alex Martelli on this site. The program has 4 main functions: generate (randomly) an expression tree; evaluate the fitness of the tree; crossbreed; mutate. As generate, crossbreed and mutate all call 'evaluate the fitness', it is the busiest function and is the primary bottleneck speed-wise. As is the nature of genetic algorithms, it has to search an immense solution space, so the faster the better. I want to speed up each of these functions. I'll start with the fitness evaluator. My question is what is the best way to do this. I've been looking into cython, ctypes and 'linking and embedding'. They are all new to me and quite beyond me at the moment, but I look forward to learning one and eventually all of them. The 'fitness function' needs to compare the value of the expression tree to the value of the target expression. So it will consist of a postfix evaluator which will read the tree in postfix order. I have all the code in Python. I need advice on which I should learn and use now: cython, ctypes or linking and embedding. Thank you.

    Read the article

  • Why do I get empty request from the Jakarta Commons HttpClient?

    - by polyurethan
    I have a problem with the Jakarta Commons HttpClient. Before my self-written HttpServer gets the real request there is one request which is completely empty. That's the first problem. The second problem is, sometimes the request data ends after the third or fourth line of the http request: POST / HTTP/1.1 User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:4232 For debugging I am using the Axis TCPMonitor. There every things is fine but the empty request. How I process the stream: StringBuffer requestBuffer = new StringBuffer(); InputStreamReader is = new InputStreamReader(socket.getInputStream(), "UTF-8"); int byteIn = -1; do { byteIn = is.read(); if (byteIn > 0) { requestBuffer.append((char) byteIn); } } while (byteIn != -1 && is.ready()); String requestData = requestBuffer.toString(); How I send the request: client.getParams().setSoTimeout(30000); method = new PostMethod(url.getPath()); method.getParams().setContentCharset("utf-8"); method.setRequestHeader("Content-Type", "application/xml; charset=utf-8"); method.addRequestHeader("Connection", "close"); method.setFollowRedirects(false); byte[] requestXml = getRequestXml(); method.setRequestEntity(new InputStreamRequestEntity(new ByteArrayInputStream(requestXml))); client.executeMethod(method); int statusCode = method.getStatusCode(); Have anyone of you an idea how to solve these problems? Alex

    Read the article

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with 1 more PL/pgSQL question. I have a PHP-script run as daily cronjob and deleting old records from 1 main table and few further tables referencing its "id" column: create or replace function quincytrack_clean() returns integer as $BODY$ begin create temp table old_ids (id varchar(20)) on commit drop; insert into old_ids select id from quincytrack where age(QDATETIME) > interval '30 days'; delete from hide_id where id in (select id from old_ids); delete from related_mks where id in (select id from old_ids); delete from related_cl where id in (select id from old_ids); delete from related_comment where id in (select id from old_ids); delete from quincytrack where id in (select id from old_ids); return select count(*) from old_ids; end; $BODY$ language plpgsql; And here is how I call it from the PHP script: $sth = $pg->prepare('select quincytrack_clean()'); $sth->execute(); if ($row = $sth->fetch(PDO::FETCH_ASSOC)) printf("removed %u old rows\n", $row['count']); Why do I get the following error? SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select" at character 9 QUERY: SELECT select count(*) from old_ids CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23 Thank you! Alex
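
    The error comes from the function's last line: in PL/pgSQL, RETURN takes an expression, not a bare SELECT statement. A sketch of the two usual fixes (only the tail of the function is shown; everything else stays as above):

        -- Option 1: select the value into a declared variable, then return it.
        --   (add a DECLARE section with: removed integer;)
        select count(*) into removed from old_ids;
        delete from quincytrack where id in (select id from old_ids);
        return removed;

        -- Option 2: keep a single line, but turn the query into an expression.
        -- return (select count(*) from old_ids);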

    Read the article

  • finding common prefix of array of strings

    - by bumperbox
    I have an array like this $sports = array( 'Softball - Counties', 'Softball - Eastern', 'Softball - North Harbour', 'Softball - South', 'Softball - Western' ); and I would like to find the longest common part of the strings, so in this instance it would be 'Softball - '. I am thinking that I would follow this process $i = 1; // loop to the length of the first string while ($i < strlen($sports[0])) { // grab the left most part up to i in length $match = substr($sports[0], 0, $i); // loop through all the values in array, and compare if they match foreach ($sports as $sport) { if ($match != substr($sport, 0, $i)) { // didn't match, return the part that did match return substr($sport, 0, $i-1); } } // foreach // increase string length $i++; } // while // if you got to here, then all of them must be identical Questions: is there a built-in function or a much simpler way of doing this? For my 5-line array that is probably fine, but if I were to do several-thousand-line arrays, there would be a lot of overhead, so I would have to be more calculated with my starting values of $i, e.g. $i = halfway along the string; if it fails, then $i/2 until it works, then increment $i by 1 until we succeed, so that we are doing the least number of comparisons to get a result. Is there a formula/algorithm already out there for this kind of problem? Thanks, Alex
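
    There is no single built-in for this, but here is a short sketch of mine that avoids re-scanning from $i = 1: keep a candidate prefix length (initially the whole first string) and shrink it only when another string stops matching, so each string is compared at most a prefix-length number of times.

        // Sketch: shrink the candidate prefix until every string starts with it.
        function common_prefix(array $strings) {
            if (empty($strings)) return '';
            $first = array_shift($strings);
            $len = strlen($first);
            foreach ($strings as $s) {
                while ($len > 0 && strncmp($first, $s, $len) !== 0) {
                    $len--;                 // drop one character off the end and retry
                }
                if ($len === 0) break;
            }
            return $len > 0 ? substr($first, 0, $len) : '';
        }

        echo common_prefix($sports);   // "Softball - "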

    Read the article

  • xsl:for-each not supported in this context

    - by alexbf
    Hi! I have this XSLT document : <xsl:stylesheet version="1.0" xmlns:mstns="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/2001/XMLSchema" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/> <xsl:template match="/MyDocRootElement"> <xs:schema id="DataSet" targetNamespace="http://www.w3.org/2001/XMLSchema" attributeFormDefault="qualified" elementFormDefault="qualified" > <xs:element name="DataSet" msdata:IsDataSet="true"> <xs:complexType> <xs:choice maxOccurs="unbounded"> <xs:element name="Somename"> </xs:element> <xs:element name="OtherName"> </xs:element> <!-- FOR EACH NOT SUPPORTED? --> <xsl:for-each select="OtherElements/SubElement"> <xs:element name="OtherName"> </xs:element> </xsl:for-each> </xs:choice> </xs:complexType> </xs:element> </xs:schema> </xsl:template> </xsl:stylesheet> I have a validation error saying that the "for-each element is not supported in this context" I am guessing it has something to do with the xs namespace validation. Any ideas on how can I make this work? (Exclude validation?) Thanks Alex

    Read the article

  • How do I iterate through hierarchical data in a Sql Server 2005 stored proc?

    - by AlexChilcott
    Hi, I have a SqlServer2005 table "Group" similar to the following: Id (PK, int) Name (varchar(50)) ParentId (int) where ParentId refers to another Id in the Group table. This is used to model hierarchical groups such as: Group 1 (id = 1, parentid = null) +--Group 2 (id = 2, parentid = 1) +--Group 3 (id = 3, parentid = 1) +--Group 4 (id = 4, parentid = 3) Group 5 (id = 5, parentid = null) +--Group 6 (id = 6, parentid = 5) You get the picture I have another table, let's call it "Data" for the sake of simplification, which looks something like: Id (PK, int) Key (varchar) Value (varchar) GroupId (FK, int) Now, I am trying to write a stored proc which can get me the "Data" for a given group. For example, if I query for group 1, it returns me the Key-Value-Pairs from Data where groupId = 1. If I query for group 2, it returns the KVPs for groupId = 1, then unioned with those which have groupId = 2 (and duplicated keys are replaced). Ideally, the sproc would also fail gracefully if there is a cycle (ie if group 1's parent is group 2 and group 2's parent is group 1) Has anyone had experience in writing such a sproc, or know how this might be accomplished? Thanks guys, much appreciated, Alex
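
    A sketch of the recursive-CTE route (table and column names taken from the question; MAXRECURSION is what gives the graceful failure on a cycle): walk from the requested group up to the root, then, for each Key, keep the value from the nearest group in that chain so child values replace parent values.

        -- Sketch only: SQL Server 2005 recursive CTE up the parent chain.
        CREATE PROCEDURE GetGroupData @GroupId int AS
        BEGIN
            ;WITH Ancestors (Id, ParentId, Depth) AS (
                SELECT Id, ParentId, 0 FROM [Group] WHERE Id = @GroupId
                UNION ALL
                SELECT g.Id, g.ParentId, a.Depth + 1
                FROM [Group] g
                JOIN Ancestors a ON g.Id = a.ParentId
            )
            SELECT d.[Key], d.Value
            FROM Data d
            JOIN Ancestors a ON a.Id = d.GroupId
            WHERE a.Depth = (SELECT MIN(a2.Depth)
                             FROM Ancestors a2
                             JOIN Data d2 ON d2.GroupId = a2.Id
                             WHERE d2.[Key] = d.[Key])
            OPTION (MAXRECURSION 100);   -- errors out instead of looping forever on a cycle
        END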

    Read the article

  • Memory usage of strings (or any other objects) in .Net

    - by ava
    I wrote this little test program: using System; namespace GCMemTest { class Program { static void Main(string[] args) { System.GC.Collect(); System.Diagnostics.Process pmCurrentProcess = System.Diagnostics.Process.GetCurrentProcess(); long startBytes = pmCurrentProcess.PrivateMemorySize64; double kbStart = (double)(startBytes) / 1024.0; System.Console.WriteLine("Currently using " + kbStart + "KB."); { int size = 2000000; string[] strings = new string[size]; for(int i = 0; i < size; i++) { strings[i] = "blabla" + i; } } System.GC.Collect(); pmCurrentProcess = System.Diagnostics.Process.GetCurrentProcess(); long endBytes = pmCurrentProcess.PrivateMemorySize64; double kbEnd = (double)(endBytes) / 1024.0; System.Console.WriteLine("Currently using " + kbEnd + "KB."); System.Console.WriteLine("Leaked " + (kbEnd - kbStart) + "KB."); System.Console.ReadKey(); } } } The output in Release build is: Currently using 18800KB. Currently using 118664KB. Leaked 99864KB. I assume that the GC.collect call will remove the allocated strings since they go out of scope, but it appears it does not. I do not understand nor can I find an explanation for it. Maybe anyone here? Thanks, Alex
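
    This looks less like a leak than a measurement artifact: PrivateMemorySize64 reports the process' private bytes, which include heap segments the CLR keeps reserved after a collection, so the number rarely drops back. A sketch that watches the managed heap instead (drop it into Main in place of the process-based measurement; GC.GetTotalMemory(true) forces a collection before reporting):

        long before = GC.GetTotalMemory(true);          // live managed bytes, post-collection
        {
            int size = 2000000;
            string[] strings = new string[size];
            for (int i = 0; i < size; i++)
                strings[i] = "blabla" + i;
            strings = null;                             // nothing roots the array any more
        }
        long after = GC.GetTotalMemory(true);
        Console.WriteLine("Live managed delta: " + (after - before) + " bytes");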

    Read the article

  • Folders and its files do get copied.. pls help me in given code..

    - by OM The Eternity
    I have a Joomla folder, and I have a script which has to copy the complete Joomla folder to another new folder. Below is the code, which copies only the files contained in the main folder but NOT the other directories existing in the Joomla folder. I know that I have to place some check for a dir_exist function and create the directory if it does not exist. I also want this code to overwrite the previously existing files and folders. How can I accomplish this? <?php $source = '/var/www/html/pranav_test/'; $destination = '/var/www/html/parth/'; $sourceFiles = glob($source . '*'); foreach($sourceFiles as $file) { $baseFile = basename($file); if (file_exists($destination . $baseFile)) { $originalHash = md5_file($file); $destinationHash = md5_file($destination . $baseFile); if ($originalHash === $destinationHash) { continue; } } copy($file, $destination . $baseFile); } ?> Thanks to @alex who helped me to get the code... But I need more support, please help...
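
    A sketch of the recursive version (my own rewrite, keeping the same MD5 check and the paths from the excerpt): descend into each sub-directory, create it at the destination when missing, and overwrite files whose hash differs.

        function copy_tree($source, $destination) {
            if (!is_dir($destination)) {
                mkdir($destination, 0755, true);                 // create missing folder
            }
            foreach (scandir($source) as $entry) {
                if ($entry === '.' || $entry === '..') continue;
                $src = $source . '/' . $entry;
                $dst = $destination . '/' . $entry;
                if (is_dir($src)) {
                    copy_tree($src, $dst);                       // recurse into sub-folder
                } elseif (!file_exists($dst) || md5_file($src) !== md5_file($dst)) {
                    copy($src, $dst);                            // new or changed: overwrite
                }
            }
        }

        copy_tree('/var/www/html/pranav_test', '/var/www/html/parth');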

    Read the article

  • PasswordFiled char[] to String in Java connection MySql?

    - by user1819551
    This is a JFrame to connect to the database, and this is in the connect button. My issue is with the password field: NetBeans makes me use a char[], but my .getConnection does not let me pass the char[]. ERROR: "no suitable method found for getConnection(String,String,char[])". So I should change it to a String, right? But when I make that change and run it, the JFrame says access denied. When I do System.out.println(l) it gives me the right answer, like this: "Alex". But when I do System.out.println(password) it gives me the array's default representation and not the value, like this: jdbc:mysql://localhost/home inventory root [C@5be5ab68 <--- array representation. What am I doing wrong? try { Class.forName("org.gjt.mm.mysql.Driver"); //Load the driver String host = "jdbc:mysql://"+tServerHost.getText()+"/"+tSchema.getText(); String uName = tUsername.getText(); char[] l = pPassword.getPassword(); System.out.println(l); String password= l.toString(); System.out.println(host+uName+l); con = DriverManager.getConnection(host, uName, password); System.out.println(host+uName+password); } catch (SQLException | ClassNotFoundException ex) { JOptionPane.showMessageDialog(this, ex.getMessage()); } }
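
    The [C@5be5ab68 is the array's default toString(), which is what char[].toString() returns; it never contains the characters (println(char[]) only looks right because PrintStream has a dedicated overload for it). A sketch of the usual fix:

        char[] pw = pPassword.getPassword();
        String password = new String(pw);                 // the actual characters
        con = DriverManager.getConnection(host, uName, password);
        java.util.Arrays.fill(pw, '\0');                  // wipe the password afterwards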

    Read the article

  • Travelling MVP #1: Visit to SharePoint User Group Finland

    - by DigiMortal
    My first self-organized trip this autumn was a visit to the SharePoint User Group Finland community evening. Active community leaders who make things like these possible are worth mentioning, and on the spug.fi side it was Jussi Roine who invited me. Here is my short review of my trip to Helsinki. User group meeting As Helsinki is near Tallinn, I went there by ship. It was easy to get from the sea port to the venue, and I also had some minutes of time to visit the academic book store. The community evening was held on the ground floor of a city-center hotel, and the room was conveniently located near the hotel bar and restaurant. Here is the meeting schedule: Welcome (Jussi Roine) OpenText application archiving and governance for SharePoint (Bernd Hennicke, OpenText) Using advanced C# features in SharePoint development (Alexey Sadomov, NED Consulting) Optimizing public-facing internet sites for SharePoint (Gunnar Peipman) After the meeting, of course, the local dudes don't walk away but continue with some beers and discussion. Sessions After the welcome words by Jussi there was a session by Bernd Hennicke, who spoke about OpenText. His session covered OpenText's history and current moment. After this introduction he spoke about OpenText products for SharePoint and gave the audience a good overview of where their SharePoint extensions fit in the big picture. I usually don't like those vendor sessions, but this one was good. I mean the vendor dudes were not aggressively selling something. They were way different – kind people who introduced their stuff and later answered questions. They acted like good guests. The second speaker was Alexey Sadomov, who is working on SharePoint development projects. He introduced some ways to get over some limitations of SharePoint. I won't go deeply into his session here, but it's worth mentioning that it was a strong one. It is not a rare case that developers have to make nasty hacks to SharePoint. I mean really nasty hacks. Often these hacks are long blocks of code that use terrible techniques to achieve the result. Alexey introduced some very civilized ways of applying hacks. Alex Sadomov, SharePoint MVP, speaking about SharePoint coding tips and tricks in C# I spoke about how I optimized the caching of the Estonian Microsoft community portal that runs on SharePoint Server and uses the publishing infrastructure. I made no actual demos on SharePoint because I wanted to focus on the optimizing process and share some experiences about how to get caches optimized and how to measure caches. Networking After the official part there was time to talk and discuss with people. Finns are cool – they have beers and they are glad. It was not a big community event, but people were like one good family. Developers there often work for big companies, and it was very interesting for me to hear about their experiences with SharePoint. One thing was a little bit surprising for me – SharePoint guys in Finland are also talking actively about Office 365 and online SharePoint. It doesn't happen often here in Estonia. I had to leave a little before 21:00 to get to my ship back to Tallinn. I am sure the spug.fi dudes continued the nice evening and they had at least as good a time as I did. Do I want to go back to Finland and meet these guys again? Yes, sure, let's do it again! :)

    Read the article
