Search Results

Search found 9282 results on 372 pages for 'complete'.


  • Slow Python HTTP server on localhost

    - by Abiel
    I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, with the server and all clients running on the local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect. Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of Windows 7. Below is my very simple server example, which simply returns 'Hello world' for any GET request.

        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                print("Just received a GET request")
                self.send_response(200)
                self.send_header("Content-type", "text/html")
                self.end_headers()
                self.wfile.write('Hello world')
                return

            def log_request(self, code=None, size=None):
                print('Request')

            def log_message(self, format, *args):
                print('Message')

        if __name__ == "__main__":
            try:
                server = HTTPServer(('localhost', 80), MyHandler)
                print('Started http server')
                server.serve_forever()
            except KeyboardInterrupt:
                print('^C received, shutting down server')
                server.socket.close()

    Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately, setting them to just print a static message did not solve my problem. Also, notice that I have put a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs prior to this message being printed, meaning that none of the code that comes after it is causing the delay.
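
    A sketch of two workarounds commonly suggested for this symptom (these are assumptions, not confirmed as the poster's root cause): skip the reverse-DNS lookup the handler performs when formatting log entries, and use the literal IPv4 address on both ends so the client never tries "localhost" as ::1 first and waits for the IPv4 fallback:

        # Sketch only: the same handler as above with two speculative tweaks.
        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-type", "text/html")
                self.end_headers()
                self.wfile.write('Hello world')

            def address_string(self):
                # The default implementation does a reverse DNS lookup of the
                # client address, which can stall; return the raw IP instead.
                return self.client_address[0]

        if __name__ == "__main__":
            # Bind to the IPv4 literal rather than 'localhost'...
            server = HTTPServer(('127.0.0.1', 80), MyHandler)
            server.serve_forever()

        # ...and request the same literal from the client:
        #   urllib2.urlopen('http://127.0.0.1/').read()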

    Read the article

  • Case Insensitive Ternary Search Tree

    - by Yan Cheng CHEOK
    I have been using a Ternary Search Tree for a while as the data structure to implement an auto-complete drop-down combo box. Which means, when the user types "fo", the drop-down combo box will display

        foo
        food
        football

    The problem is, my current use of the Ternary Search Tree is case sensitive. My implementation is as follows. It has been used in the real world for around 1+ years, hence I consider it quite reliable. My Ternary Search Tree code However, I am looking for a case-insensitive Ternary Search Tree, which means, when I type "fo", the drop-down combo box will show me

        foO
        Food
        fooTBall

    Here are some key interfaces for the TST; I hope the new case-insensitive TST can have a similar interface too.

        /**
         * Stores value in the TernarySearchTree. The value may be retrieved using key.
         * @param key A string that indexes the object to be stored.
         * @param value The object to be stored in the tree.
         */
        public void put(String key, E value) {
            getOrCreateNode(key).data = value;
        }

        /**
         * Retrieve the object indexed by key.
         * @param key A String index.
         * @return Object The object retrieved from the TernarySearchTree.
         */
        public E get(String key) {
            TSTNode<E> node = getNode(key);
            if (node == null) return null;
            return node.data;
        }

    An example of usage is as follows. TSTSearchEngine is using TernarySearchTree as the core backbone. Example usage of Ternary Search Tree

        // There is a stock named microsoft and MICROChip inside the stocks ArrayList.
        TSTSearchEngine<Stock> engine = new TSTSearchEngine<Stock>(stocks);
        // I wish it would return microsoft and MICROChip. Currently, it just returns microsoft.
        List<Stock> results = engine.searchAll("micro");
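
    The implementation above is Java, but the usual fix is language-agnostic: normalize the key (e.g. lower-case it) when inserting and when searching, while storing the values untouched. A minimal sketch of that idea in Python, using a plain trie instead of a TST (the class and method names are illustrative, not taken from the original code):

        class _Node(object):
            def __init__(self):
                self.children = {}
                self.values = []

        class CaseInsensitivePrefixIndex(object):
            def __init__(self):
                self.root = _Node()

            def put(self, key, value):
                node = self.root
                for ch in key.lower():          # normalize only the key
                    node = node.children.setdefault(ch, _Node())
                node.values.append(value)       # the value keeps its original case

            def search_all(self, prefix):
                node = self.root
                for ch in prefix.lower():
                    if ch not in node.children:
                        return []
                    node = node.children[ch]
                results, stack = [], [node]     # collect everything below the prefix
                while stack:
                    n = stack.pop()
                    results.extend(n.values)
                    stack.extend(n.children.values())
                return results

        engine = CaseInsensitivePrefixIndex()
        for name in ("microsoft", "MICROChip"):
            engine.put(name, name)
        print(engine.search_all("micro"))       # ['microsoft', 'MICROChip'] (order may vary)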

    Read the article

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application's installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit & 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems, or raise false bugs. So, we need to do "bare metal" OS installation for much of our testing. I have been using Acronis True Image for the past year, and am not impressed. It often gives random errors which require a reboot, and is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). The sizes of all of our images are roughly the same, so that is not the cause. So, anyway, I'm looking to switch to something else: I only need very basic functionality--just creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be preferred. The OS on which the imaging software would run is Windows Vista, but bootable media (into a Linux flavor) would be fine also, as long as it's quick to use and reliable. Recommendations? (Also, moderators, if this should be a CW, I'll be happy to mark it as such; I'm unclear about the rules there.)

    Read the article

  • Does anyone have documentation on SHGetSysColor?

    - by Paulo Santos
    I'm trying to find any reference for this function, but I haven't found anything. All I have is an obscure KB article from Microsoft noting that a programmer made a boo-boo when coding a part of Windows Mobile 6 where he should have called SHGetSysColor but instead called GetSysColor, which gives a completely different color for the same spec. From what I could gather, GetSysColor reads a color value in the registry from HKEY_LOCAL_MACHINE\Software\Microsoft\Color\SHColor or HKEY_LOCAL_MACHINE\Software\Microsoft\Color\DefSHColor and returns the color according to the index. In that registry key I have the following value for a standard Win Mobile 6.5:

        "DefSHColor"=hex:\
        ff,00,00,00,00,00,00,00,dd,dd,dd,00,ff,ff,cc,00,ff,ff,ff,00,15,af,bc,00,15,\
        af,bc,00,c9,e7,e9,00,14,9c,a7,00,ff,ff,ff,00,14,9c,a7,00,14,9c,a7,00,14,9c,\
        a7,00,15,af,bc,00,14,9c,a7,00,ff,ff,ff,00,c9,e7,e9,00,37,c7,d3,00,37,c7,d3,\
        00,ff,ff,ff,00,00,b7,c9,00,14,9c,a7,00,ff,ff,ff,00,15,af,bc,00,84,84,c3,00,\
        15,af,bc,00,14,9c,a7,00,ff,ff,ff,00,ff,ff,ff,00,00,00,00,00,ff,ff,ff,00,00,\
        00,00,00,ff,ff,ff,00,2e,44,4f,00,00,14,3c,00,00,f0,ff,00,ff,ff,ff,00,c9,e7,\
        e9,00,14,9c,a7,00,ff,ff,ff,00,14,9c,a7,00

    And I realized that each group of four bytes represents a different color (RR,GG,BB,AA -- the AA I'm assuming here, as every color there has the AA byte as 00, which would mean that it's a solid color). What I can't get a fix on is what each index means, as I have 41 different colors in there. Googling for SHGetSysColor gives me only 7 matches: two of them are the KB article from Microsoft (one in English, the other in French), one is from a Russian site (which I don't read), another two are from freepascal.org, and one is from Koders.com describing the commctl.def file. I went to commctl.h trying to see if I could find a reference to this function, and found absolutely nothing. No search on MSDN, whether from Google, Bing, or the default MSDN search, gave me any result. So, does anyone know what indexes we are talking about here?
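
    For what it's worth, here is a small sketch that decodes a dump like the one above into per-index colors, under the same assumption stated there (four bytes per entry: RR,GG,BB plus one padding/alpha byte, indexed in the order they appear); reading the value out of the registry itself is omitted:

        # Sketch only: parse the comma-separated hex dump into (index, color) pairs.
        blob = "ff,00,00,00,00,00,00,00,dd,dd,dd,00,ff,ff,cc,00"  # first four entries only

        values = [int(byte, 16) for byte in blob.split(",")]
        for index in range(len(values) // 4):
            r, g, b, pad = values[index * 4:index * 4 + 4]
            print("index %2d -> #%02x%02x%02x (pad=%02x)" % (index, r, g, b, pad))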

    Read the article

  • Project Management and Scheduling Techniques

    - by Alec Smart
    Hello, I know this is probably the nth project management question, but I am trying to move my team onto a more robust project management technique, and I am wondering what the best technique to use is. I know that probably no technique is best, but which are the most popular techniques? Planning poker? Evidence Based Scheduling? COCOMO? Agile? Scrum? XP? Which one should I use? Also, suppose I use EBS: wouldn't it be too time-consuming to break down every single activity into fine-grained tasks? E.g. "Design" is a goal; what kind of fine-grained tasks would I have under it? Is this a waste of time, i.e. dividing work into so many micro parts? Usually when I give my programmers a task, I follow up every week, and they complete quite a lot of the task assigned to them (the tasks are very broad, e.g. the X module). Is EBS worth it? Are there any white papers on it so that I can implement it on my own (instead of using FogBugz)? Most of my projects are web-based projects. Thank you for your time.

    Read the article

  • When is a good time to start thinking about scaling?

    - by Slokun
    I've been designing a site over the past couple of days, and have been doing some research into different aspects of scaling a site horizontally. If things go as planned, in a few months (years?) I know I'd need to worry about scaling the site up and out, since the resources it would end up consuming would be huge. So, this got me thinking: when is the best time to start thinking about, and designing for, scalability? If you start too early on, you could easily overcomplicate your design and make it impossible to actually build. You could also get too caught up in the details, the architecture, whatever, and wind up getting nothing done. Also, if you do get it working but the site never takes off, you may have wasted a good chunk of extra effort. On the other hand, you could be saving yourself a ton of effort down the road: designing it from the ground up to be big would make it much easier later on to let it grow big, with very little rewriting going on. I know for what I'm working on, I've decided to make at least a few choices now on the side of scaling, but I'm not going to do a complete change of thinking to get it to scale completely. Notably, I've redesigned my database from a conventional relational design to one similar to what was suggested in the Reddit post linked below, and I'm going to give memcache a try. So, the basic question: when is a good time to start thinking or worrying about scaling, and what are some good designs, tips, etc. for when doing so? A couple of things I've been reading, for those who are interested:

        http://www.codinghorror.com/blog/2009/06/scaling-up-vs-scaling-out-hidden-costs.html
        http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html
        http://developer.yahoo.com/performance/rules.html

    Read the article

  • Relative Paths in .htaccess, how to attach to a variable?

    - by devians
    I have a very heavy .htaccess mod_rewrite file that runs my application. As we sometimes take over legacy websites, I sometimes need to support old URLs to old files, where my application processes everything after the .htaccess. My ultimate goal is to have a 'Demilitarized Zone' for old file structures, and use mod_rewrite to check for existence there before pushing to the application. This is pretty easy to do with files, by using:

        RewriteCond %{IS_SUBREQ} true
        RewriteRule .* - [L]
        RewriteCond %{ENV:REDIRECT_STATUS} 200
        RewriteRule .* - [L]
        RewriteCond Public/DMZ/$1 -F [OR]
        RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    This allows pseudo support for relative URLs by not hardcoding my base path anywhere (I can't assume I will ever be deployed in the document root) and by using subrequests to check for file existence. It works fine if you know the file name, i.e. http://domain.com/path/to/app/legacyfolder/index.html. However, my legacy URLs are typically http://domain.com/path/to/app/legacyfolder/. mod_rewrite will allow me to check for this by using -d, but it needs the complete path to the directory, i.e.

        RewriteCond Public/DMZ/$1 -F [OR]
        RewriteCond /var/www/path/to/app/Public/DMZ/$1 -d
        RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    I want to avoid the hardcoded base path. I can see one possible solution here: somehow determining my path, attaching it to a variable [E=name:var], and using it in the condition. Any implementation that allows me to check for the existence of a directory is more than welcome.

    Read the article

  • complex web forms and javascript

    - by Casey
    I need to create a few data-heavy, complicated forms. Currently, the information is being entered into a spreadsheet, but the users will need to enter the information into the online form, where it will be saved to a database. The problem is that the business users currently using the spreadsheet aren't going to want to use the online application if it isn't as easy as entering the information into the spreadsheet. This is further complicated in that the information they are entering into the spreadsheet is represented by three different DB tables, where one "object" is composed of two of the others. I would prefer not to have them go through multiple forms. Some of what I have been thinking:

        - use of auto-complete where possible
        - hiding/removing form fields dynamically
        - possibly a wizard-style page flow?

    I've been googling for other data-heavy web forms but can't seem to really find any good examples. I am familiar with jQuery and Prototype.js and have also tried googling for frameworks designed for data-heavy applications, but didn't come up with anything. Any thoughts? Thanks.

    Read the article

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation: I have a set of utility functions, say 5-10 files. Technically they are a static library, cross-platform: SConscript/SConstruct plus a Visual Studio project (not a solution). Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link into one central place. Sometimes a project uses one file, sometimes two files, some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler).

    The problem arises when, while working on one project, you modify those utility function files. Because each project has its own copy of a file, this introduces a new version, which leads to a mess when you try later (a week later, for example) to figure out which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork).

    How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but it isn't perfect: if you delete the one central storage, it will all go to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where the code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do it, or if it is even possible with git. So, what do you think?

    Read the article

  • "Session is Closed!" - NHibernate

    - by Alexis Abril
    This is in a web application environment: an initial request is able to complete successfully; however, any additional requests return a "Session is Closed" response from the NHibernate framework. I'm using an HttpModule approach with the following code:

        public class MyHttpModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.EndRequest += ApplicationEndRequest;
                context.BeginRequest += ApplicationBeginRequest;
            }

            public void ApplicationBeginRequest(object sender, EventArgs e)
            {
                CurrentSessionContext.Bind(SessionFactory.Instance.OpenSession());
            }

            public void ApplicationEndRequest(object sender, EventArgs e)
            {
                ISession currentSession = CurrentSessionContext.Unbind(SessionFactory.Instance);
                currentSession.Dispose();
            }

            public void Dispose() { }
        }

    SessionFactory.Instance is my singleton implementation, using FluentNHibernate to return an ISessionFactory object. In my repository class, I attempt to use the following syntax:

        public class MyObjectRepository : IMyObjectRepository
        {
            public MyObject GetByID(int id)
            {
                using (ISession session = SessionFactory.Instance.GetCurrentSession())
                    return session.Get<MyObject>(id);
            }
        }

    This allows code in the application to be called as such:

        IMyObjectRepository repo = new MyObjectRepository();
        MyObject obj = repo.GetByID(1);

    I have a suspicion my repository code is to blame, but I'm not 100% sure of the actual implementation I should be using. I found a similar issue on SO here. I too am using WebSessionContext in my implementation; however, no solution was provided other than writing a custom SessionManager. For simple CRUD operations, is a custom session provider required, apart from the built-in tools (i.e. WebSessionContext)?

    Read the article

  • I'm trying to build a query to search against a fulltext index in mysql

    - by Rockinelle
    The table's schema is pretty simple. I have a child table that stores a customer's information, like address and phone number. The columns are user_id, fieldname and fieldvalue, so each row holds one item, like a phone number, address or email. This is to allow an unlimited number of each type of information for each customer. The people on the phones need to look these customers up quickly as they call into our call center. I have experimented with using LIKE with % wildcards, and I'm working with a FULLTEXT index now. My queries work, but I want to make them more useful, because if someone searches for a telephone area code like 805, that will bring up many people; then they add the name Bill to narrow it down, '805 Bill', and it shows EVERY customer that has 805 OR Bill. I want it to do AND searches across multiple rows within each customer. Currently I'm using the query below to grab the user_ids; later I do another query to fetch all the details for each user to build their complete record.

        SELECT DISTINCT `user_id`
        FROM `user_details`
        WHERE MATCH (`fieldvalue`) AGAINST ('805 Bill')

    Again, I want to do the above query against groups of rows that belong to a single user, but those users have to match the search keywords. What should I do?

    Read the article

  • How do I work with constructs in PHPUnit?

    - by Ben Dauphinee
    I am new to PHPUnit and just digging through the manual. I cannot find a decent example of how to build a complete test from end to end, though, and so I am left with questions. One of them is: how can I prepare my environment to properly test my code? I am trying to figure out how to properly pass the various configuration values needed both for the test setup/teardown methods and for the class itself.

        // How can I set these variables on testing start?
        protected $_db = null;
        protected $_config = null;

        // So that this function runs properly?
        public function setUp() {
            $this->_acl = new acl(
                $this->_db,     // The database connection for the class passed
                                // from whatever test construct
                $this->_config  // Config values passed in from construct
            );
        }

        // Can I just drop in a construct like this, and have it work properly?
        // And if so, how can I set the construct call properly?
        public function __construct(
            Zend_Db_Adapter_Abstract $db,
            $config = array(),
            $baselinedatabase = NULL,
            $databaseteardown = NULL
        ) {
            $this->_db = $db;
            $this->_config = $config;
            $this->_baselinedatabase = $baselinedatabase;
            $this->_databaseteardown = $databaseteardown;
        }

        // Or is this the wrong idea to be pursuing?

    Read the article

  • Memory Leak Looping cfmodule inside cffunction

    - by orangepips
    Hoping someone else can confirm this or tell me what I'm doing wrong. I am able to consistently reproduce an OOM by calling the file oom.cfm (shown below). Using jconsole I am able to see that the request consumes memory and never releases it until it completes. The issue appears to be calling <cfmodule> inside of <cffunction>: if I comment out the <cfmodule> call, things are garbage collected while the request is running.

    ColdFusion version: 9,0,1,274733

    JVM arguments:

        java.home=C:/Program Files/Java/jdk1.6.0_18
        java.args=-server -Xms768m -Xmx768m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:+UseParallelGC -Xbatch -Dcoldfusion.rootDir={application.home}/ -Djava.security.policy={application.home}/servers/41ep8/cfusion.ear/cfusion.war/WEB-INF/cfusion/lib/coldfusion.policy -Djava.security.auth.policy={application.home}/servers/41ep8/cfusion.ear/cfusion.war/WEB-INF/cfusion/lib/neo_jaas.policy -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=56033

    Test case, oom.cfm (this calls template.cfm below):

        <cffunction name="fun" output="false" access="public" returntype="any" hint="">
            <cfset var local = structNew()/>
            <!--- comment out cfmodule and no OOM --->
            <cfmodule template="template.cfm">
        </cffunction>

        <cfset size = 1000 * 200>
        <cfloop from="1" to="#size#" index="idx">
            <cfset fun()>
            <cfif NOT idx mod 1000>
                <cflog file="se-err" text="#idx# of #size#">
            </cfif>
        </cfloop>

    template.cfm:

        <!--- I am empty! --->

    Read the article

  • Verify an event was raised by mocked object

    - by joblot
    In my unit test, how can I verify that an event is raised by the mocked object? I have a View (UI) -- ViewModel -- DataProvider -- ServiceProxy. ServiceProxy makes an async call to a service operation. When the async operation is complete, a method on DataProvider is called (the callback method is passed as a method parameter). The callback method then raises an event which the ViewModel is listening to. For the ViewModel test I mock DataProvider and verify that a handler exists for the event raised by DataProvider. When testing DataProvider I mock ServiceProxy, but how can I test that the callback method is called and the event is raised? I am using RhinoMocks 3.5 and the AAA syntax. Thanks.

    -- DataProvider --

        public partial class DataProvider
        {
            public event EventHandler<EntityEventArgs<ProductDefinition>> GetProductDefinitionCompleted;

            public void GetProductDefinition()
            {
                var service = IoC.Resolve<IServiceProxy>();
                service.GetProductDefinitionAsync(GetProductDefinitionAsyncCallback);
            }

            private void GetProductDefinitionAsyncCallback(ProductDefinition productDefinition, ServiceError error)
            {
                OnGetProductDefinitionCompleted(this, new EntityEventArgs<ProductDefinition>(productDefinition, error));
            }

            protected void OnGetProductDefinitionCompleted(object sender, EntityEventArgs<ProductDefinition> e)
            {
                if (GetProductDefinitionCompleted != null)
                    GetProductDefinitionCompleted(sender, e);
            }
        }

    -- ServiceProxy --

        public class ServiceProxy : ClientBase<IService>, IServiceProxy
        {
            public void GetProductDefinitionAsync(Action<ProductDefinition, ServiceError> callback)
            {
                Channel.BeginGetProductDefinition(EndGetProductDefinition, callback);
            }

            private void EndGetProductDefinition(IAsyncResult result)
            {
                Action<ProductDefinition, ServiceError> callback = result.AsyncState as Action<ProductDefinition, ServiceError>;
                ServiceError error;
                ProductDefinition results = Channel.EndGetProductDefinition(out error, result);
                if (callback != null)
                    callback(results, error);
            }
        }

    Read the article

  • Is there such a thing as IMAP for podcasts?

    - by Gerrit
    Is there such a thing as IMAP for podcasts? I own a desktop, laptop, iPod, smartphone and a web client, all downloading Stack Overflow podcasts (among others). They all tell me which episodes are available and which have already been played. Everything is a horrible mess, of course. My iPod is somewhat in sync with my desktop, but everything else is a random jungle. The same problem with e-mail is solved by IMAP: every device gets content and meta-information from one server, and stays in sync with it. Per device, I can set preferences (do or do not download the complete archive, including junk mail). Can we implement the IMAP approach for podcasts? Or is there a better metaphor/standard to solve this problem? What would the adoption strategy look like? (By the way: except for the Windows smartphone, I own a full Apple stack of products. Even then, I run into this problem.)

    UPDATE: The RSS-to-IMAP link to SourceForge looks promising, but very alpha/experimental.

    UPDATE 2: The one thing RSS is missing is the command/method/parameter/attribute to delete items or mark them unread. RSS can only add, not remove. If RSS(N+1) (3?) could add a value for unread="true|false", it would be solved. If I cached all my RSS feeds on my own server and added the attribute myself, I would only have to convince iTunes and every other client to respect it.

    Read the article

  • Data Annotations on ViewModels or Domain Objects

    - by Ahmad
    Where would data annotations be more suitable: ViewModels, Domain Objects, or both? I am struggling to decide where they are better suited. I have not yet fully utilized them, but this question came to mind. From most of the examples I have seen, they are generally placed on models, simply using the required attributes for validation via ModelState.IsValid. I have also seen another question on SO arguing that the use of data annotations alone is not sufficient.

        - Option 1 - I will still need to validate again in my service layer. (I think that my service layer should be complete, and this includes validation, since it is planned to be used elsewhere.)
        - Option 2 - How will I then get the benefits of the built-in validation, both client- and server-side?
        - Option 3 - There will be a repetition of validation logic; however, I was wondering if one could use a MetaData class approach that works for both ViewModels and Domain Objects. (This is completely off the top of my head, so it may be nonsensical.)

    I wonder if this question even makes sense. If not, can someone please help me understand this better? Have I completely misunderstood the use of data annotations?

    Read the article

  • Advantages/Disadvantages of AIR vs Flex/Web

    - by Lizzan
    Hi all, I'm tasked with writing an application for placing and connecting objects (sort of like a room planner where you can place furniture). I've made a demo using Flash Builder 4 and built it for AIR as a desktop app. Now the client wants the full app, but they and I are unsure whether to continue building it as an AIR app or transform it into a web application using Flex. I tried making a simple conversion of the AIR app to a web app, and most things worked, but not all. The things that don't work seem to be simple bugs, though, not a complete lack of capability. The capabilities that I'm going to need (except for the modelling) are:

        - printing of the finished image plus a list of the furniture that has been placed
        - a way to save and retrieve finished plans
        - a way to export the list of furniture to Excel format
        - handling a whole slew of data about the different objects

    Only the printing has been implemented so far, and it seems to work in the web app as well. What advantages/disadvantages are there to the two approaches? Are any of the capabilities I need much worse (or even impossible) to implement in either approach?

    Read the article

  • Why can't I save long text to my MySQL database?

    - by DomingoSL
    I'm trying to save to my database a long text (about 2500 chars) entered by my users through a web form and passed to the server using PHP. When I look in phpMyAdmin, the text is cropped. How can I configure my table in order to store the complete text? This is my table config:

        CREATE TABLE `extra_879` (
          `id` bigint(20) NOT NULL auto_increment,
          `id_user` bigint(20) NOT NULL,
          `title` varchar(300) NOT NULL,
          `content` varchar(3000) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `id_user` (`id_user`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;

    Note the content field, which has a limit of 3000 chars, yet the text always gets cropped at 690 chars. Thanks for any help!

    EDIT: I found the problem but I don't know how to solve it. The query always gets cropped at the same character, a special character: ù

    EDIT 2: This is the cropped query:

        INSERT INTO extra_879 (id,id_user,title,content) VALUES (NULL,'1','Informazione Extra',' Riconoscimenti Laurea di ingegneria presa a le 22 anni e in il terso posto della promozione Diploma analista di sistemi ottenuto il rating massimo 20/20, primo posto della promozione. Borsa di Studio (offerta dal Ministero Esteri Italiano) vinta nel 2010 (Valutazione del territorio attraverso le nueve tecnologie) Pubblicazione di paper; Stima del RCS della nave CCGS radar sulla base dei risultati di H. Leong e H. Wilson. http://www.ing.uc.edu.vek-azozayalarchivospdf/PAPER-Sarmiento.pdf Tesi di laurea: PROGETTAZIONE E REALIZZAZIONE DI UN SIS-TEMA DI TELEMETRIA GSM PER IL CONTROLLO DELLO STATO DI TRANSITO VEICOLARE E CLIMA (ottenuto il punteggio pi')

    It gets cropped right at the (ottenuto il punteggio più alto) phrase, just when ù appears...

    EDIT 3: I am using jQuery + Ajax to send the query:

        $.ajax({type: "POST",
            url: "handler.php",
            data: "e_text="+ $('#e_text').val() + "&e_title="+ $('#extra_title').val(),

    Read the article

  • WebClient Lost my Session

    - by kamiar3001
    Hi folks, I have a problem. First of all, look at my web service method:

        [WebMethod(), ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string GetPageContent(string VirtualPath)
        {
            WebClient client = new WebClient();
            string content = string.Empty;
            client.Encoding = System.Text.Encoding.UTF8;
            try
            {
                if (VirtualPath.IndexOf("______") > 0)
                    content = client.DownloadString(HttpContext.Current.Request.UrlReferrer.AbsoluteUri.Replace("Main.aspx", VirtualPath.Replace("__", ".")));
                else
                    content = client.DownloadString(HttpContext.Current.Request.UrlReferrer.AbsoluteUri.Replace("Main.aspx", VirtualPath));
            }
            catch
            {
                content = "Not Found";
            }
            return content;
        }

    As you can see, my web service method reads and buffers a page from its own localhost. I use it to add some Ajax functionality to my web site, and it works fine, but my problem is that client.DownloadString(...) loses all session state: the sessions related to this page are all null. For more description: at page load, in the page which I want to load through my web service, I set a session value:

        HttpContext.Current.Session[E_ShopData.Constants.SessionKey_ItemList] = result;

    But when I click a button in the page, this session value is null; I mean, it can't carry the session over. How can I solve this problem? My web service is called by some jQuery code like the following:

        $.ajax({
            type: "Post",
            url: "Services/NewE_ShopServices.asmx" + "/" + "GetPageContent",
            data: "{" + "VirtualPath" + ":" + mp + "}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            complete: hideBlocker,
            success: LoadAjaxSucceeded,
            async: true,
            cache: false,
            error: AjaxFailed
        });

    Read the article

  • ob_start() is partially capturing data

    - by AAA
    I am using the following code: PHP: // Generate Guid function NewGuid() { $s = strtoupper(uniqid(rand(),true)); $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4). '-' . substr($s,16,4). '-' . substr($s,20); return $guidText; } // End Generate Guid $Guid = NewGuid(); $alphabet = '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ'; function base_encode($num, $alphabet) { $base_count = strlen($alphabet); $encoded = ''; while ($num >= $base_count) { $div = $num/$base_count; $mod = ($num-($base_count*intval($div))); $encoded = $alphabet[$mod] . $encoded; $num = intval($div); } if ($num) $encoded = $alphabet[$num] . $encoded; return $encoded; } function base_decode($num, $alphabet) { $decoded = 0; $multi = 1; while (strlen($num) > 0) { $digit = $num[strlen($num)-1]; $decoded += $multi * strpos($alphabet, $digit); $multi = $multi * strlen($alphabet); $num = substr($num, 0, -1); } return $decoded; } ob_start(); echo base_encode($Guid, $alphabet); //should output: bUKpk $theid = ob_get_contents(); ob_get_clean(); The problem: When i echo $theid, it shows the complete entry, but as it is being inserted into the database, only the first entry in the sequence gets inserted, for example for the entry buKPK, only 'b' is being inserted not the rest.

    Read the article

  • Dev-C++ and Detours compiling error

    - by Julio
    Hello. As title says I'm trying to compile with Dev-C++ a simple DLL using Detours, but I get this error: syntax error before token '&' on this lines: DetourAttach(&(PVOID &)trueMessageBox, hookedMessageBox) DetourDetach(&(PVOID &)trueMessageBox, hookedMessageBox) The complete code is #include <windows.h> #include <detours.h> #pragma comment( lib, "Ws2_32.lib" ) #pragma comment( lib, "detours.lib" ) #pragma comment( lib, "detoured.lib" ) int (WINAPI * trueMessageBox)(HWND hWnd, LPCSTR lpText, LPCSTR lpCaption, UINT uType) = MessageBox; int WINAPI hookedMessageBox(HWND hWnd, LPCSTR lpText, LPCSTR lpCaption, UINT uType) { LPCSTR lpNewCaption = "You've been hijacked"; int iReturn = trueMessageBox(hWnd, lpText, lpNewCaption, uType); return iReturn; } BOOL WINAPI DllMain( HINSTANCE, DWORD dwReason, LPVOID ) { switch ( dwReason ) { case DLL_PROCESS_ATTACH: DetourTransactionBegin(); DetourUpdateThread( GetCurrentThread() ); DetourAttach(&(PVOID &)trueMessageBox, hookedMessageBox) DetourTransactionCommit(); break; case DLL_PROCESS_DETACH: DetourTransactionBegin(); DetourUpdateThread( GetCurrentThread() ); DetourDetach(&(PVOID &)trueMessageBox, hookedMessageBox) DetourTransactionCommit(); break; } return TRUE; }

    Read the article

  • Bespoke Development or Leverage SharePoint With Web Parts etc?

    - by Asim
    Hi all, we are currently in the process of drawing up a solution for an existing client, creating a number of eServices. The client currently has MOSS 2007. The proposed solution is to use MOSS as the launching pad for the eServices. The requirement involves drawing up several online forms which provide registration facilities as well as facilitating a workflow of some sort. I have been told that the proposed solution requires complex web forms. Most are complex forms with parent-child details that have multiple windows. The proposed solution is to do some bespoke development, building ASP.NET forms. These forms would be deployed under the _layouts folder of the current MOSS portal, inheriting the master page design of the current site. I have been told that this approach makes development and deployment simpler, as well as having 'complete integration' with MOSS. My questions are:

        - Is this the best way to leverage SharePoint? It seems like the proposed solution is not leveraging MOSS at all! I thought perhaps utilizing Web Parts would be better, but I have been told that this is more complex and that developing a smarter, more intuitive UI is more difficult. Is this really the case? If not, what should be the recommended approach?
        - We will be utilizing Ultimus as the workflow engine; however, K2 Workflows has been recommended to me. Has anyone used both, or have any opinions on either?

    Many thanks in advance! Kind regards,

    Read the article

  • Algorithm for optimally choosing actions to perform a task

    - by Jules
    There are two data types: tasks and actions. An action costs a certain amount of time to complete and has a set of tasks it depends on. A task has a set of candidate actions, and our job is to choose one of them. So:

        class Task {
            Set<Action> choices;
        }

        class Action {
            float time;
            Set<Task> dependencies;
        }

    For example, the primary task could be "Get a house". The possible actions for this task: "Buy a house" or "Build a house". The action "Build a house" costs 10 hours and has the dependencies "Get bricks" and "Get cement", etcetera. The total time is the sum of the times of all the actions we perform. We want to choose actions such that the total time is minimal.

    Note that the dependencies can be diamond shaped. For example, "Get bricks" could require "Get a car" (to transport the bricks) and "Get cement" would also require a car. Even if you do both "Get bricks" and "Get cement", you only have to count the time it takes to get a car once.

    Note also that the dependencies can be circular. For example "Money" - "Job" - "Car" - "Money". This is no problem for us; we simply select all of "Money", "Job" and "Car". The total time is simply the sum of the times of these 3 things.

    Mathematical description: let actions be the chosen actions.

        valid(task) = ∃action ∈ task.choices. (action ∈ actions ∧ ∀task' ∈ action.dependencies. valid(task'))
        time = sum { action.time | action ∈ actions }

        minimize time subject to valid(primaryTask)
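
    Since the question asks for an algorithm, here is a brute-force sketch in Python (my own reading of the problem, not code from the post): it enumerates subsets of actions, checks validity with a fixpoint that treats mutually supporting cycles like the Money/Job/Car example as satisfied, and keeps the cheapest valid subset. It is meant only to pin down the semantics; it is exponential and usable only for small instances.

        from itertools import chain, combinations

        class Task(object):
            def __init__(self, name):
                self.name, self.choices = name, set()       # choices: set of Action

        class Action(object):
            def __init__(self, name, time, dependencies=()):
                self.name, self.time = name, time
                self.dependencies = set(dependencies)       # dependencies: set of Task

        def satisfied(chosen, all_tasks):
            # Start optimistic and repeatedly drop tasks for which no chosen
            # action has all of its dependencies still satisfied (handles cycles).
            sat = set(all_tasks)
            changed = True
            while changed:
                changed = False
                for t in list(sat):
                    if not any(a in chosen and a.dependencies <= sat for a in t.choices):
                        sat.discard(t)
                        changed = True
            return sat

        def cheapest_plan(primary, all_tasks, all_actions):
            best, best_time = None, float('inf')
            subsets = chain.from_iterable(
                combinations(list(all_actions), r) for r in range(len(all_actions) + 1))
            for subset in subsets:
                chosen = set(subset)
                if primary in satisfied(chosen, all_tasks):
                    total = sum(a.time for a in chosen)     # shared actions counted once
                    if total < best_time:
                        best, best_time = chosen, total
            return best, best_time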

    Read the article

  • SQL to CodeIgniter Array Missing Data Issue

    - by SamD
        $query = $this->db->query("
            SELECT t1.numberofbets, t1.profit, t2.seven_profit, t3.28profit, user.user_id, username,
                   password, email, balance, user.date_added, activation_code, activated
            FROM user
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS profit, count(tip_id) AS numberofbets
                       FROM tip GROUP BY user_id) as t1 ON user.user_id = t1.user_id
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS seven_profit FROM tip
                       WHERE date_settled > '$seven_daystime' GROUP BY user_id) as t2 ON user.user_id = t2.user_id
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS 28profit FROM tip
                       WHERE date_settled > '$twoeight_daystime' GROUP BY user_id) as t3 ON user.user_id = t3.user_id
            where activated = 1
            GROUP BY user.user_id
            ORDER BY user.date_added DESC");

        return $query->result_array();

    The query works fine when run in phpMyAdmin and returns complete results (in image attached). However, when printing the array in CodeIgniter, it has no value for one field, seven_profit, even though it is there when the SQL query is run in phpMyAdmin; just the discrepancy in this one field, from the SQL to the PHP array... I just can't see why, when printing the array, that one field, which should have a value of 26, contains nothing. I changed the field name so it no longer starts with a number in an attempt to fix it, but it made no difference. I know this is complex and looks horrible; any help, or just hearing from people who have come across something similar, would be great. Thanks, Sam

    Read the article

  • Python C API from C++ app - know when to lock

    - by Alex
    Hi everyone, I am trying to write a C++ class that calls Python methods of a class that does some I/O operations (file, stdout). The problem I have run into is that my class is called from different threads: sometimes the main thread, sometimes different others. Obviously I tried to apply the standard approach for Python calls in multi-threaded native applications. Basically everything starts from PyEval_AcquireLock and PyEval_ReleaseLock, or just global locks. According to the documentation, when a thread is already locked, a deadlock ensues. When my class is called from the main thread, or another one that blocks Python execution, I have a deadlock. Python calls Cfunc1(), a C++ function that creates threads internally which lead to calls into "my class"; it gets stuck on PyEval_AcquireLock, since Python is obviously already locked, i.e. waiting for the C++ Cfunc1 call to complete... It completes fine if I omit those locks. It also completes fine when the Python interpreter is ready for the next user command, i.e. when the thread is calling functions in the background, not inside of a native call. I am looking for a workaround. I need to distinguish whether or not taking the global lock is allowed, i.e. whether Python is not locked and is ready to receive the next command... I tried PyGIL_Ensure; unfortunately I see a hang. Any known API or solution for this? (Python 2.4)

    Read the article
