Search Results

Search found 2263 results on 91 pages for 'qantas 94 heavy'.

Page 76/91

  • Delphi Application using COMMIT and ROLLBACK for Multiple SQL Updates

    - by Matt
    Is it possible to use SQL BEGIN TRANSACTION, COMMIT TRANSACTION and ROLLBACK TRANSACTION when embedding SQL queries in an application that makes multiple calls to the server for table updates? For example, I have the following code:

        Q.SQL.Add(<UPDATE A RECORD>);
        Q.ExecSQL;
        Q.Close;
        Q.SQL.Clear;
        Q.SQL.Add(<Select Some Data>);
        Q.Open;
        // set some variables
        Q.Close;
        Q.SQL.Clear;
        Q.SQL.Add(<UPDATE A RECORD>);
        Q.ExecSQL;

    What I would like is that if the second update fails, the first update is rolled back. If I set a unique notation for the BEGIN, COMMIT and ROLLBACK so as to specify what is being committed or rolled back, is that feasible? I.e. before the first update specify BEGIN TRANSACTION_A, then after the last update specify COMMIT TRANSACTION_A. I hope that makes sense. If I were doing this in a SQL stored procedure I would be able to specify this at the start and end of the procedure, but I have had to break the code down into manageable chunks because of process blocking and deadlocks on a heavily loaded SQL Server.
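
    For illustration, the pattern being asked about is the standard client-side transaction bracket, and it looks much the same in any database API; here is a minimal sketch in Python DB-API terms (the connection string, table and column names are placeholders, not from the question):

        import pyodbc  # any DB-API driver follows the same commit/rollback shape

        CONNECTION_STRING = "DSN=mydb"  # placeholder
        conn = pyodbc.connect(CONNECTION_STRING)  # autocommit is off by default: an implicit BEGIN TRANSACTION
        cur = conn.cursor()
        try:
            cur.execute("UPDATE t SET col = ? WHERE id = ?", (value1, id1))  # first update
            cur.execute("SELECT some_data FROM t WHERE id = ?", (id2,))      # set some variables
            row = cur.fetchone()
            cur.execute("UPDATE t SET col = ? WHERE id = ?", (row[0], id3))  # second update
            conn.commit()    # COMMIT TRANSACTION: both updates land together
        except Exception:
            conn.rollback()  # ROLLBACK TRANSACTION: the first update is undone too
            raise

    The essential point is that all the statements run on one connection inside one open transaction; in Delphi the equivalent is issuing BEGIN/COMMIT/ROLLBACK through the same query component, or using the transaction methods most Delphi connection components expose.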

    Read the article

  • Getting Argument Names In Ruby Reflection

    - by Joe Soul-bringer
    I would like to do some fairly heavy-duty reflection in the Ruby programming language. I would like to create a function which would return the names of the arguments of the calling functions higher up the call stack (just one step up would be enough, but why stop there?). I could use Kernel.caller, go to the file and parse the argument list, but that would be ugly and unreliable. The function that I would like would work in the following way:

        module A
          def method1(tuti, fruity)
            foo
          end
          def method2(bim, bam, boom)
            foo
          end
          def foo
            print caller_args[1].join(",") # the "1" means one step up the call stack
          end
        end

        A.method1 # prints "tuti,fruity"
        A.method2 # prints "bim,bam,boom"

    I would not mind using ParseTree or some similar tool for this task, but looking at ParseTree it is not obvious how to use it for this purpose. Creating a C extension like this is another possibility, but it would be nice if someone had already done it for me.

    Edit 2: I can see that I'll probably need some kind of C extension. I suppose that means my question is which combination of C extension would work most easily. I don't think caller + ParseTree would be enough by themselves.

    As far as why I would like to do this goes, rather than saying "automatic debugging", perhaps I should say that I would like to use this functionality to do automatic checking of the calling and return conditions of functions:

        def add x, y
          check_positive
          return x + y
        end

    where check_positive would throw an exception if x and y weren't positive. (Obviously there would be more to it than that, but hopefully this gives enough motivation.)
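
    As a cross-language aside (not an answer for Ruby itself), the reason this is plausible without parsing source is that argument names are usually stored in the compiled code object; in Python, for instance, a caller_args lookalike is only a few lines (a sketch, Python 3):

        import inspect

        def caller_args(depth):
            """Argument names of the function `depth` steps up the call stack."""
            frame = inspect.stack()[depth + 1].frame  # [0] is caller_args itself
            code = frame.f_code
            return code.co_varnames[:code.co_argcount]

        def method1(tuti, fruity):
            return foo()

        def foo():
            print(",".join(caller_args(1)))  # prints "tuti,fruity" when reached via method1

    Newer Ruby versions keep comparable metadata (Method#parameters in 1.9+), which suggests the problem is tractable without a full C extension.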

    Read the article

  • How to avoid loading a LINQ to SQL object twice when editing it on a website.

    - by emzero
    Hi guys, I know you are all tired of these LINQ to SQL questions, but I'm barely starting to use it (never used an ORM before) and I've already found some "ugly" things. I'm pretty used to old-school ASP.NET WebForms development, but I want to leave that behind and learn the new stuff (I've just started to read an ASP.NET MVC book and a .NET 3.5/4.0 one). So here is one thing I didn't like and couldn't find a good alternative to. In most examples of editing a LINQ object I've seen, the object is loaded (hitting the db) at first to fill the current values on the form page. Then the user modifies some fields, and when the "Save" button is clicked, the object is loaded a second time and then updated. Here is a simplified example from ScottGu's NerdDinner site:

        //
        // GET: /Dinners/Edit/5
        [Authorize]
        public ActionResult Edit(int id) {
            Dinner dinner = dinnerRepository.GetDinner(id);
            return View(new DinnerFormViewModel(dinner));
        }

        //
        // POST: /Dinners/Edit/5
        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection) {
            Dinner dinner = dinnerRepository.GetDinner(id);
            UpdateModel(dinner);
            dinnerRepository.Save();
            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }

    As you can see, the dinner object is loaded twice for every modification. Unless I'm missing something about LINQ to SQL caching the last queried objects, I don't like getting it twice when it should be retrieved only once, modified, and then committed back to the database. So again, am I really missing something? Or is it really hitting the database twice? (In the example above it won't hurt, but there could be cases where getting an object or set of objects is heavy stuff.) If so, what alternative do you think is best for avoiding the double load? Thank you so much, greetings!

    Read the article

  • Speed issue when adding new columns and values to a DataTable

    - by Cine
    I am having some speed issues with my DataTables. In this particular case I am using a DataTable purely as a holder of data; it is never used in a GUI or in any other scenario that actually uses the fancy features. In my speed trace, this particular constructor was showing up as a heavy user of time when my database is ~40k rows. The main consumer was DataTable's set_Item.

        protected myclass(DataTable dataTable, DataColumn idColumn)
        {
            this.dataTable = dataTable;
            IdColumn = idColumn ?? this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            JobIdColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            IsNewColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            int id = 1;
            foreach (DataRow r in this.dataTable.Rows)
            {
                r[JobIdColumn] = id++;
                r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0;
            }
        }

    Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, for which I have no use in my scenario. So my solution was to open editing on all of the rows and never close them again:

        _dt.BeginLoadData();
        foreach (DataRow row in _dt.Rows)
            row.BeginEdit();

    Is there a better solution? This feels too much like a big giant hack that will eventually come back and bite me. You might suggest that I not use DataTable at all, but I have already considered and rejected that due to the amount of effort that would be required to reimplement it with a custom class. The main reason it is a DataTable is that this is ancient code (.NET 1.1 era) and I don't want to spend that much time changing it, and also because the original table comes out of a third-party component.

    Read the article

  • How to trace the connection pool in a Java Web application - DBMS_APPLICATION_INFO

    - by Cleiton Garcia
    Hello, I need to improve traceability in a web application that usually runs under a fixed db user. The DBA should have fast access to information about the heavy users that are degrading the database. Five years ago I implemented a .NET ORM engine which logs the user and the server using the DBMS_APPLICATION_INFO package, via a wrapper above the connection manager with the following code:

        DBMS_APPLICATION_INFO.SET_MODULE('" + User + " - " + appServerMachine + "','');

    Each time a connection is taken from the pool, the package is executed to log the information in V$SESSION. Has anyone discovered or implemented a solution for this problem using TopLink or Hibernate? Is there a default implementation for this? I found a solution here like the one I implemented 5 years ago, but I'd like to know whether anyone has a better solution, integrated with the ORM: http://stackoverflow.com/questions/53379/using-dbmsapplicationinfo-with-jboss

    My application is built on Spring, the DAOs are implemented with JPA (using Hibernate), and it currently runs directly in Tomcat, with plans to migrate next year to SAP NetWeaver Application Server. Thanks.
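
    Whatever the ORM, the underlying mechanism stays a one-line procedure call on the connection just borrowed from the pool. A minimal sketch of that call (shown in Python with cx_Oracle purely for illustration; the wrapper placement, not the language, is the point):

        import cx_Oracle

        def tag_connection(conn, user, app_server):
            # equivalent of the .NET wrapper: record who holds this pooled session
            cur = conn.cursor()
            cur.callproc("DBMS_APPLICATION_INFO.SET_MODULE",
                         ["%s - %s" % (user, app_server), ""])
            cur.close()

    With Hibernate, the natural hook for the same call is typically a custom ConnectionProvider (or an interceptor that runs when a session is opened), so that every checkout from the pool is tagged before the DAO sees the connection.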

    Read the article

  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls, that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long, with 30-80 different controls for data entry. I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms, either by designing a script or using a UI. The other requirement is that I can't use any browser but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automated Windows form testing, since AutoHotkey's API allows you to directly reference controls on a Windows form. However, AutoHotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer to script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test. If you've ever seen Spawner (http://forge.mysql.com/projects/project.php?id=214), it's almost exactly the sort of thing I'm looking for - Spawner generates dummy SQL data, as opposed to dummy web form data - but I won't be picky; I've got a really short deadline to meet and had this thrust in my lap just today. ;)

    Edit 1: One of the challenges of just using AutoHotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have a tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing screwed up). Our application makes heavy use of page reloads to populate fields (select a location and it auto-populates a city, for example).
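
    One free route that fits the IE6-only constraint is driving the browser through its COM automation interface, which addresses form fields through the DOM instead of simulated keystrokes. A rough sketch (Python with pywin32; the URL and element IDs are made up):

        import time
        import win32com.client

        ie = win32com.client.Dispatch("InternetExplorer.Application")
        ie.Visible = True
        ie.Navigate("http://intranet/bigform.aspx")
        while ie.Busy or ie.ReadyState != 4:  # 4 == READYSTATE_COMPLETE
            time.sleep(0.25)

        doc = ie.Document
        doc.getElementById("txtFirstName").value = "Test"  # text input
        doc.getElementById("chkActive").checked = True     # checkbox
        doc.getElementById("btnSubmit").click()

    Because elements are located by ID rather than tab order, the missing tab indexes and the focus lost to page reloads (Edit 1 above) stop mattering; after a reload you simply wait on Busy/ReadyState again and carry on.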

    Read the article

  • Usage of static analysis tools with ClearCase/ClearQuest

    - by boyd4715
    We are in the process of defining our software development process and wanted to get some feedback from the group on this topic. Our team is spread out - US, Canada and India - and I would like to put in place some simple standard rules that all teams will apply to their code. We make use of ClearCase/ClearQuest and RAD. I have been looking at PMD, CPP, Checkstyle and FindBugs as a start. My thought is to just put these into Ant and have the developers run them manually. I realize that doing it this way requires some trust that each developer will actually run them. The other thought is to add some builders into the IDE which would run a subset of the rules (to keep the build process light), and then add another, heavier set when they check in the code. Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools along with the unit tests whenever ClearCase/ClearQuest is idle. I'm wondering whether others have done this, whether it was successful, and what lessons were learned.

    Read the article

  • Vim script to compile TeX source and launch the PDF only if there are no errors

    - by Jeet
    Hi, I am switching to Vim for my LaTeX editing environment. I would like to be able to compile the source file from within Vim and launch an external viewer if the compile was successful. I know about the Vim-LaTeX suite but, if possible, would prefer to avoid using it: it is pretty heavyweight, hijacks a lot of my keys, and clutters up my vimruntime with a lot of files. Here is what I have now:

        if exists('b:tex_build_mapped')
          finish
        endif
        " use maparg or mapcheck to see if key is free
        command! -buffer -nargs=* BuildTex call BuildTex(0, <f-args>)
        command! -buffer -nargs=* BuildAndViewTex call BuildTex(1, <f-args>)
        noremap <buffer> <silent> <F9> <Esc>:call BuildTex(0)<CR>
        noremap <buffer> <silent> <S-F9> <Esc>:call BuildTex(1)<CR>
        let b:tex_build_mapped = 1

        if exists('g:tex_build_loaded')
          finish
        endif
        let g:tex_build_loaded = 1

        function! BuildTex(view_results, ...)
          write
          if filereadable("Makefile")
            " If a Makefile is available in the current working directory, run 'make' with arguments
            echo "(using Makefile)"
            let l:cmd = "!make ".join(a:000, ' ')
            echo l:cmd
            execute l:cmd
            if a:view_results && v:shell_error == 0
              call ViewTexResults()
            endif
          else
            let b:tex_flavor = 'pdflatex'
            compiler tex
            make %
            if a:view_results && v:shell_error == 0
              call ViewTexResults()
            endif
          endif
        endfunction

        function! ViewTexResults(...)
          if a:0 == 0
            let l:target = expand("%:p:r") . ".pdf"
          else
            let l:target = a:1
          endif
          if has('mac')
            execute "! open -a Preview ".l:target
          endif
        endfunction

    The problem is that v:shell_error is not set even when there are compile errors. Any suggestions or insight on how to detect whether a compile was successful would be greatly appreciated! Thanks!

    Read the article

  • Is it just me? I find LINQ to XML to be sort of cumbersome, compared to XPath.

    - by Cheeso
    I am a C# programmer, so I don't get to take advantage of the cool XML syntax in VB:

        Dim itemList1 = From item In rss.<rss>.<channel>.<item> _
                        Where item.<description>.Value.Contains("LINQ") Or _
                              item.<title>.Value.Contains("LINQ")

    Using C#, I find XPath easier to think about, easier to code and easier to understand than performing a multi-nested select using LINQ to XML. Look at this syntax; it looks like Greek swearing:

        var waypoints = from waypoint in gpxDoc.Descendants(gpx + "wpt")
                        select new
                        {
                            Latitude = waypoint.Attribute("lat").Value,
                            Longitude = waypoint.Attribute("lon").Value,
                            Elevation = waypoint.Element(gpx + "ele") != null ? waypoint.Element(gpx + "ele").Value : null,
                            Name = waypoint.Element(gpx + "name") != null ? waypoint.Element(gpx + "name").Value : null,
                            Dt = waypoint.Element(gpx + "cmt") != null ? waypoint.Element(gpx + "cmt").Value : null
                        };

    All the casting, the heavy syntax, the possibility of NullReferenceExceptions - none of this happens with XPath. I like LINQ in general, and I use it on object collections and databases, but my first go-round with querying XML led me right back to XPath. Is it just me? Am I missing something?

    EDIT: someone voted to close this as "not a real question". But it is a real question, stated clearly. The question is: am I misunderstanding something with LINQ to XML?

    Read the article

  • What is your preferred tool stack for PHP development in the Windows Environment?

    - by Tim Visher
    I have been developing basic web sites for a while now, with some PHP thrown in to get dynamic stuff done. However, I recently decided that it was time I got my hands a little dirtier, so I wanted to start to play with the underpinnings of WordPress and other such apps. I work on a Mac at home and have been using Coda for most of my editing needs, and I love it. To manage the services stack I use MAMP at home. However, I've begun to realize that for heavy PHP and web work (AJAX, etc.), more is needed. I'm very interested to hear what kinds of tools more experienced web developers use day to day to manage their entire workflow. I found an article over on Developer Tutorials and decided to go with XAMPP (for the moment) for managing the services. However, IDE, source control, debugger, etc. are all up for grabs. Your thoughts are much appreciated - and, if at all possible, try to describe your entire stack and what each tool fulfills in your development process.

    Read the article

  • Documentation and Build system for Mono/C#

    - by dcolish
    I'm starting out on a new project and a team member has decided to use C# as the implementation language. I don't have a lot of experience in C#, but a brief reading shows that it's very capable, with a complete cross-platform VM. Beyond the language, I've been having trouble selecting tools and workflows for managing the code as the project grows. It should be fairly small (<10K lines), but I would like the ability to generate documentation as the project grows, manage any external dependencies that we decide to use, and automate builds and testing. I am wondering what tools are commonly used or considered best practice for this language. I am mainly concerned with how a build system would work on *nix as well as Windows: are there C#-specific tools, or is Make more common? In addition, I'd like to use a DVCS, but it doesn't look like Visual Studio and MonoDevelop support the same ones. What's the VCS of choice for C#? As for testing, what sort of unit testing is available for C#/Mono? Finally, I know that there are good doc generators, but given the build-system question, I would really like documentation to be a single step in the build, just as testing is a step. Normally I'd automate with Hudson, but I am wondering if there is something more specific to the platform. Overall, I'd love to see a solution that provides a decent workflow on both Windows and *nix without a heavy admin burden. I am pretty sure this is the holy grail of project management, so anything that puts me on that path is awesome.

    Read the article

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple of tables: files and users. The relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table:

        files           - file_id, file_name
        users           - user_id, user_name
        users_files_ref - user_file_ref_id, user_id, file_id

    I'm using CodeIgniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's ID to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added so that I can get the ID, and then use that ID to insert the new users_files_ref record. I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users and users_files_ref tables; I'm just wondering how to manage the adding of file records in this scenario. Thanks for any help provided, it's much appreciated. -Wes
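
    For this particular worry, the usual answer is that you never query for "the last file added": the database hands the new ID back on the same connection that performed the INSERT, and MySQL's LAST_INSERT_ID() is tracked per connection, so it is safe under concurrent traffic. A sketch in Python DB-API terms (using the table and column names from the schema above):

        def add_file_for_user(conn, user_id, file_name):
            cur = conn.cursor()
            cur.execute("INSERT INTO files (file_name) VALUES (%s)", (file_name,))
            file_id = cur.lastrowid  # LAST_INSERT_ID() for this connection only
            cur.execute("INSERT INTO users_files_ref (user_id, file_id) VALUES (%s, %s)",
                        (user_id, file_id))
            conn.commit()  # both rows are committed together, or neither is
            return file_id

    In CodeIgniter the same value is available as $this->db->insert_id() immediately after the insert, and wrapping both inserts in one transaction keeps files and users_files_ref consistent.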

    Read the article

  • A better Python property decorator

    - by leChuck
    I've inherited some Python code that contains a rather cryptic decorator. This decorator sets properties in classes all over the project. The problem is that I have traced my debugging problems to this decorator: it seems to "fubar" every debugger I've tried, and trying to speed up the code with psyco breaks everything (it seems psyco and this decorator don't play nice). I think it would be best to change it.

        def Property(function):
            """Allow readable properties"""
            keys = 'fget', 'fset', 'fdel'
            func_locals = {'doc': function.__doc__}
            def probeFunc(frame, event, arg):
                if event == 'return':
                    locals = frame.f_locals
                    func_locals.update(dict((k, locals.get(k)) for k in keys))
                    sys.settrace(None)
                return probeFunc
            sys.settrace(probeFunc)
            function()
            return property(**func_locals)

    Used like so:

        class A(object):
            @Property
            def prop():
                def fget(self):
                    return self.__prop
                def fset(self, value):
                    self.__prop = value
                ...

    The errors I get say the problems are caused by sys.settrace (perhaps this is an abuse of settrace?). My question: is the same decorator achievable without sys.settrace? If not, I'm in for some heavy rewrites.
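
    For reference, one widely circulated alternative keeps the same readable layout with no tracing at all: the decorated function returns its locals(), which are unpacked straight into property(). A sketch:

        def Property(function):
            """Allow readable properties, without sys.settrace."""
            return property(**function())

        class A(object):
            @Property
            def prop():
                doc = "The prop property."
                def fget(self):
                    return self.__prop  # name mangling still applies inside class A
                def fset(self, value):
                    self.__prop = value
                return locals()

    The only change at each use site is the trailing return locals(); debuggers and psyco then see an ordinary function call instead of a trace hook.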

    Read the article

  • Hadoop Map/Reduce - a simple usage example for the following task

    - by alexeypro
    I have a MySQL database where I store BLOBs (each containing a JSON object) together with an ID for each JSON object. Each JSON object contains a lot of different information - say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing, and each JSON object is quite big. My goal is to do real-time searches in the MySQL database: say, I want to find all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them against the given search criteria, and returns the IDs which qualify. Pros/cons? I understand that one might think I should use plain SQL power for this, but the thing is that the JSON object structure is pretty "heavy": if I expressed it as SQL schemas there would be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it uses the indexes, otherwise a full scan makes it literally painful. With such a structure, the only way "up" is vertical scaling, and I am not sure that's the best option for me, as I see how the JSON objects (the data structure) will grow, and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
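
    If the records are exported (or streamed) as one id<TAB>json line each, the filtering job described above is a natural fit for a Hadoop Streaming mapper; a sketch, with the criteria hard-coded for clarity:

        #!/usr/bin/env python
        # mapper.py - emit the IDs of JSON records matching all criteria
        import json
        import sys

        CRITERIA = {"state": "California", "city": "San Francisco"}

        for line in sys.stdin:
            record_id, _, blob = line.rstrip("\n").partition("\t")
            try:
                obj = json.loads(blob)
            except ValueError:
                continue  # skip malformed rows instead of failing the job
            if all(obj.get(k) == v for k, v in CRITERIA.items()):
                print(record_id)  # qualifying ID; no reducer required

    One caveat: a batch job over 500k rows buys throughput, not "real-time"; the same all(...) test applied to rows fetched straight out of MySQL may already be fast enough at this scale.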

    Read the article

  • Updating Multiple Drop-Down Lists in AjaxTags

    - by Berin
    I'm using AjaxTags, a defined set of JSP tags that facilitate AJAX programming, to save me from some heavy lifting. The project is in trial mode, so I may end up not adopting the technology and writing my own solution instead. Here's what I'm running into (code abbreviated). I have a drop-down list that defines an item that populates numerous other drop-down lists:

        <form>
          ...
          <select id="item1" name="item1">
            <c:forEach items="${list1}" var="item">
              <option value="${item}">${item}</option>
            </c:forEach>
          </select>
          <select id="item2" name="items2"></select>
          <select id="item3" name="items3"></select>
          ...
        </form>

        <ajax:select source="item1" target="item2"
                     baseUrl="${pageContext.request.contextPath}/doAction.view"
                     parameters="action1 = {item}" />
        <ajax:select source="item1" target="item3"
                     baseUrl="${pageContext.request.contextPath}/doAction.view"
                     parameters="action2 = {item}" />

    The code above works with my back end, initiating a call to a servlet class that listens for calls, parses the parameter, and takes action based on the name of the parameter passed. The problem comes from adding the second ajax:select tag above: an action in drop-down "item1" only causes "item3" to be populated with new values. What I want is for an action in "item1" to populate "item2" and "item3" (and 4, 5, 6...). Has anyone else used AjaxTags (ajaxtags.sourceforge.net) and solved a similar problem?

    Environment details: Tomcat 5.5.27, Spring 2.0.8, Struts 1.2.9, Java 6.

    Read the article

  • "Scheduling restart of crashed service", but no call to onStart() follows

    - by kostmo
    In the 1.6 API, is there a way to ensure that the onStart() method of a Service is called after the service is killed due to memory pressure? From the logs, it seems that the process that the service belongs to is restarted, but the service itself is not. I have placed a Log.d() call in the onStart() method, and it is never reached. To test my service under memory pressure, I spawn it from an activity, then launch the web browser and visit some JavaScript-heavy websites like Slashdot until my service is killed. The logcat reads:

        03-07 16:44:13.778: INFO/ActivityManager(52): Process com.kostmo.charbuilder.full (pid 2909) has died.
        03-07 16:44:13.778: WARN/ActivityManager(52): Scheduling restart of crashed service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService in 5000ms
        03-07 16:44:13.778: INFO/ActivityManager(52): Low Memory: No more background processes.
        03-07 16:44:13.778: ERROR/ActivityThread(52): Failed to find provider info for android.server.checkin
        03-07 16:44:13.778: WARN/Checkin(52): Can't log event SYSTEM_SERVICE_LOOPING: java.lang.IllegalArgumentException: Unknown URL content://android.server.checkin/events
        03-07 16:44:18.908: INFO/ActivityManager(52): Start proc com.kostmo.charbuilder.full for service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService: pid=3560 uid=10027 gids={3003, 1015}
        03-07 16:44:19.868: DEBUG/ddm-heap(3560): Got feature list request
        03-07 16:44:20.128: INFO/ActivityThread(3560): Publishing provider com.kostmo.charbuilder.full.provider.character: com.kostmo.charbuilder.provider.ImageFileContentProvider

    Read the article

  • What is the best way to get an audio file duration in Android?

    - by Gilead
    Hi! I'm using a SoundPool [1] to play audio clips in my app. All is fine, but I need to know when clip playback has finished. At the moment I track it in my app by obtaining the duration of each clip using a MediaPlayer [2] instance. That works fine, but it looks wasteful to load each file twice just to get the duration. I could roughly calculate the duration myself knowing the length of the file (available from the AssetFileDescriptor [3]), but I'd still need to know the sample rate and the number of channels. I see two potential solutions to this problem:

    1. Figuring out when a clip has finished playing (which doesn't seem to be possible with SoundPool).
    2. Having a class which could load just the header of an audio file and give me the sample rate / number of channels (and, ideally, the sample count to get the exact duration).

    Any suggestions? Thanks, Max. The code I'm using at the moment (works fine but is rather heavy for the purpose):

        String[] fileNames = ...
        MediaPlayer mp = new MediaPlayer();
        for (String fileName : fileNames) {
            AssetFileDescriptor d = context.getAssets().openFd(fileName);
            mp.reset();
            mp.setDataSource(d.getFileDescriptor(), d.getStartOffset(), d.getLength());
            mp.prepare();
            int duration = mp.getDuration();
            // ...
        }

    On a side note, this question has already been asked [4] but got no answers.
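
    For option 2, the arithmetic genuinely needs only two header fields: duration = frame count / sample rate. As a language-neutral illustration of how little must be read (Python's wave module, which parses just the WAV header):

        import wave

        def wav_duration_seconds(path):
            w = wave.open(path, "rb")    # reads the header, not the sample data
            try:
                frames = w.getnframes()  # total frames (samples per channel)
                rate = w.getframerate()  # frames per second
                return frames / float(rate)
            finally:
                w.close()

    On the Android side, MediaPlayer does offer setOnCompletionListener() for option 1, but only for clips it plays itself; SoundPool at this API level has no completion callback, so header parsing, or computing each duration once up front and caching it, remains the practical route.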

    Read the article

  • Practices for keeping JavaScript and CSS in sync?

    - by Rene Saarsoo
    I'm working on a large JavaScript-heavy app. Several pieces of JavaScript have related CSS rules. Our current practice is for each JavaScript file to have an optional related CSS file, like so:

        MyComponent.js   // adds CSS class "my-comp" to a div
        MyComponent.css  // defines .my-comp { color: green }

    This way I know that all CSS related to MyComponent.js will be in MyComponent.css. But the thing is, I all too often have very little CSS in those files, and all too often it feels like too much effort to create a whole file to contain just a few lines of CSS - it would be easier to hardcode the styles inside the JavaScript. But that would be the path to the dark side... Lately I've been thinking of embedding the CSS directly inside the JavaScript - it could still be extracted in the build process and merged into one large CSS file. This way I wouldn't have to create a new file for every little piece of CSS. Additionally, when I move/rename/delete a JavaScript file I don't have to also move/rename/delete the CSS file. But how to embed CSS inside JavaScript? In most other languages I would just use a string, but JavaScript has some issues with multiline strings. The following looks, IMHO, quite ugly:

        Page.addCSS("\
        .my-comp > p {\
            font-weight: bold;\
            color: green;\
        }\
        ");

    What other practices are there for keeping JavaScript and CSS in sync?
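
    The extract-at-build-time idea is mechanical enough to sketch: scan the JavaScript sources for the embedded blocks, undo the line-continuation backslashes, and concatenate. A rough build-step sketch (Python; it assumes the Page.addCSS("...") convention shown above and is naive about quotes inside the CSS):

        import glob
        import re

        ADD_CSS = re.compile(r'Page\.addCSS\("(.*?)"\);', re.DOTALL)

        chunks = []
        for path in glob.glob("src/*.js"):
            with open(path) as f:
                for block in ADD_CSS.findall(f.read()):
                    chunks.append(block.replace("\\\n", "\n"))  # strip JS line continuations

        with open("build/all.css", "w") as out:
            out.write("\n".join(chunks))

    Whether the ugliness of escaped multiline strings is worth that convenience is exactly the trade-off in question; an array of lines joined with "\n" is a common way to dodge the backslashes on the JavaScript side.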

    Read the article

  • Finding usage of jQuery UI in a big ugly codebase

    - by Daniel Magliola
    I've recently inherited the maintenance of a big, ugly codebase for a production website. Poke-your-eyes-out ugly. And though it's big, it's mostly PHP code; it doesn't have much JS besides a few "ajaxy" things in the UI. Our main current problem is that the site is just too heavy. The homepage weighs in at 1.6 MB currently, so I'm trying to clean some stuff out. One of the main wasters is that every single page includes the jQuery UI library, but I don't think it's used at all. It's definitely not being used on the homepage or on most pages, so I want to include it only where necessary. I'm not really experienced with jQuery - I'm more of a Prototype guy - so I'm wondering: is there anything I could search for that would let me know where jQuery UI is being used? What I'm looking for is "common strings", component names, etc. For example, if this were Scriptaculous, I'd look for things like "Draggable", "Effect", etc. Any suggestions for jQuery UI? (Of course, if you can think of a more robust way of removing the tag from pages that don't use it without breaking everything, I'd love to hear about it.) Thanks!! Daniel
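
    The widget calls all share one recognizable shape - $(selector).widgetName(...) - so grepping for the method names jQuery UI registers catches most real usage. A quick scanning script along those lines (the list covers the standard widgets and interactions; extend as needed):

        import os
        import re

        UI_METHODS = ("draggable", "droppable", "resizable", "selectable", "sortable",
                      "accordion", "autocomplete", "datepicker", "dialog", "progressbar",
                      "slider", "tabs", "effect")
        CALL = re.compile(r"\.(%s)\s*\(" % "|".join(UI_METHODS))

        for root, _, files in os.walk("."):
            for name in files:
                if name.endswith((".js", ".php", ".html", ".tpl")):
                    path = os.path.join(root, name)
                    with open(path, errors="ignore") as f:
                        for lineno, line in enumerate(f, 1):
                            if CALL.search(line):
                                print("%s:%d: %s" % (path, lineno, line.strip()))

    An empty result, double-checked by removing the script tag on a staging copy and clicking around, is good evidence the include can go.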

    Read the article

  • Can this code be further optimized?

    - by kaki
    I understand that the code given below will not be completely understood unless I explain the surrounding lines of code, but this is the part that is causing the delay in my project, and I want to optimize it. I want to know which part is the bottleneck and what it could be replaced with. I guess some will say that the functions used here are heavy and that lighter methods are available to do this work. Please help; thanks in advance.

        for i in range(len(lists)):
            save = database_index[lists[i]]
            #print save
            #if save[1]!='text0194'and save[1]!='text0526':
            using_data[save[0]] = save
            p = os.path.join("c:/begpython/wavnk/", str(str(str(save[1]).replace('phone', 'text')) + '.pm'))
            x1 = open(p, 'r')
            x2 = open(p, 'r')
            for i in range(6):
                x1.readline()
                x2.readline()
            gen = (float(line.partition(' ')[0]) for line in x1)
            r = min(enumerate(gen), key=lambda x: abs(x[1] - float(save[4])))
            #print r[0]
            a1 = linecache.getline(str(str(p).replace('.pm', '.mcep')), (r[0] + 1))
            #print a1
            p1 = str(str(a1).rstrip('\n')).split(' ')
            #print p1
            join_cost_index_end[save[0]] = p1
            #print join_cost_index_end
            gen = (float(line.partition(' ')[0]) for line in x2)
            r = min(enumerate(gen), key=lambda x: abs(x[1] - float(save[3])))
            #print r[0]
            a2 = linecache.getline(str(str(p).replace('.pm', '.mcep')), (r[0] + 1))
            #print a2
            p2 = str(str(a2).rstrip('\n')).split(' ')
            #print p2
            join_cost_index_strt[save[0]] = p2
            #print join_cost_index_strt
            j = j + 1
            #print j

    My database_index has about 250,000 entries.
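
    The two heavy spots are re-reading each .pm file from scratch and scanning every line with min() for every lookup. Caching the parsed time column per file and binary-searching it turns each lookup from repeated file I/O into an in-memory O(log n) search; a sketch (it assumes the time column is ascending, which pitch-mark files normally are):

        import bisect

        _pm_cache = {}

        def pm_times(path):
            """Parse the first column of a .pm file once and memoize it."""
            if path not in _pm_cache:
                with open(path) as f:
                    for _ in range(6):  # skip the six header lines
                        f.readline()
                    _pm_cache[path] = [float(line.partition(' ')[0]) for line in f]
            return _pm_cache[path]

        def nearest_index(times, target):
            """Index of the entry closest to target in a sorted list."""
            i = bisect.bisect_left(times, target)
            return min((j for j in (i - 1, i) if 0 <= j < len(times)),
                       key=lambda j: abs(times[j] - target))

        # inside the loop, replacing both the x1 and x2 passes:
        # r0 = nearest_index(pm_times(p), float(save[4]))

    This also removes the double open() of the same file; linecache already memoizes the .mcep files, so that side is comparatively cheap.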

    Read the article

  • Multi-level shop, XML or SQL: best practice?

    - by danrichardson
    Hello, I have a general "best practice" question about building a multi-level shop, which I hope doesn't get marked down/deleted, as I personally think it's quite a good "subjective" question. I am a developer in charge (for the most part) of maintaining and evolving a CMS system and its associated front-end functionality. Over the past half year I have developed a multi-level shop system in which an arbitrary depth of categories may exist down to the product level, and it all works fine. However, over the last week or so I have questioned my own methods in front-end development and the best way to present the multi-level data structure. I currently use a SQL Server (2000) database and pull out all the shop levels, then process them into an enumerable typed list with child enumerable typed lists, so that all levels are sorted. This seems, in my head, quite process-heavy, but we're not talking about thousands of rows - generally only 1-500 rows, maybe. I have been toying with the idea recently of storing the structure in an XML document (as well as the database) and then sending last-modified headers when serving/requesting the document, which would then be processed as/when necessary with an XSL(T) stylesheet, server side. This is quite a handy, reusable method of storing the data, but does it carry more overhead given that I'm opening and closing files? The XML would also require a bit of processing to pull out blocks of it if, for instance, I wanted to show two levels midway through the tree for a side menu. I use the above method for sitemap purposes, so there is already code I have built which does what I require, but I'm unsure which approach is best. Maybe a hybrid method which pulls out the data, sorts it and then makes an XML document/stream (XDocument/XmlDocument) for XSL processing is a good way? That is the way I currently make the CMS work for the shop. So really (and thanks for sticking with me on this), I am just wondering which methods other people use or recommend as the best/most logical way of doing things. Thanks, Dan
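
    Whichever storage wins, both variants end at the same in-memory step: turning flat parent/child rows into a nested tree once per request (or per cache refresh), then rendering whatever slice a page needs. At 1-500 rows that step is cheap; a language-neutral sketch of it (Python here, with assumed columns id, parent_id, name):

        def build_tree(rows):
            """rows: iterable of (id, parent_id, name); parent_id is None at the top."""
            rows = list(rows)
            nodes = {rid: {"id": rid, "name": name, "children": []}
                     for rid, _, name in rows}
            roots = []
            for rid, parent_id, _ in rows:
                if parent_id in nodes:
                    nodes[parent_id]["children"].append(nodes[rid])
                else:
                    roots.append(nodes[rid])  # top-level category
            return roots

    Serializing that tree to XML for XSLT, or walking it directly for a side menu, is the same cheap traversal either way - which suggests the choice can be made on caching and tooling grounds rather than raw processing cost.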

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things), and had previously been using MySQL with pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database, with several tables to further partition the data (blogs, pages, etc). My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way user 'zach's blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection - basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road. I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the appname_users database mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data that is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles). Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach

    Read the article

  • std::locale breakage on MacOS 10.6 with LANG=en_US.UTF-8

    - by fixermark
    I have a C++ application that I am porting to Mac OS X (specifically, 10.6). The app makes heavy use of the C++ standard library and Boost. I recently observed some breakage in the app that I'm having difficulty understanding. Basically, the Boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program:

        #include <locale>

        int main(int argc, char *argv[])
        {
            std::locale::global(std::locale(""));
            return 0;
        }

    This program fails when I compile it with g++ and run the resulting binary in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues. But I'm surprised to see this breakage in the default configuration. My questions are:

    1. Is this expected behavior for this code on Mac OS 10.6?
    2. What would a proper workaround be? I can't really rewrite the function, because the version of the Boost libraries we are using executes this statement internally as part of the filesystem library.

    For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder), but not when Xcode runs the program in debug mode.

    Edit: the error given by the above code on 10.6.1 is:

        $ ./locale
        terminate called after throwing an instance of 'std::runtime_error'
          what():  locale::facet::_S_create_c_locale name not valid
        Abort trap

    Read the article

  • Generating JavaScript in C# and subsequent testing

    - by Codebrain
    We are currently developing an ASP.NET MVC application which makes heavy use of attribute-based metadata to drive the generation of JavaScript. Below is a sample of the type of method we are writing:

        string GetJavascript<T>(string javascriptPresentationFunctionName, string inputId, T model)
        {
            return @"function updateFormInputs(value){
                $('#" + inputId + @"_SelectedItemState').val(value);
                $('#" + inputId + @"_Presentation').val(value);
            }
            function clearInputs(){
                " + helper.ClearHiddenInputs<T>(model) + @"
                updateFormInputs('');
            }
            function handleJson(json){
                clearInputs();
                " + helper.UpdateHiddenInputsWithJson<T>("json", model) + @"
                updateFormInputs(" + javascriptPresentationFunctionName + @"());
                " + model.GetCallBackFunctionForJavascript("json") + @"
            }";
        }

    This method generates some boilerplate and hands off to various other methods which return strings; the whole lot is then returned as a string and written to the output. The questions I have are:

    1) Is there a nicer way to do this than using large string blocks? We've considered using a StringBuilder or the response stream, but they seem quite 'noisy', and string.Format starts to become difficult to comprehend.
    2) How would you go about unit testing this code? It seems a little amateur to just do a string comparison looking for particular output in the string.
    3) What about actually testing the eventual JavaScript output?

    Thanks for your input!

    Read the article

  • BizTalk vs API for a data broker layer

    - by jdt199
    My company is about to undertake a large project in which our client wants a large customer portal with a CMS and CRM implemented. This will require interaction with data from multiple sources across our customer's business; these sources include XML office back-end systems, SQL databases, web services, etc. Our proposed solution is to write an API in C# to provide a common interface to all these systems, which would be scalable for future and concurrent projects within the company. Our client expressed an interest in using BizTalk rather than a custom API for this integration, as they feel it is an enterprise solution that any of their suppliers could pick up and use, and that it will be better supported. We feel that the configuration work using BizTalk would be rather heavy for all their custom business rules, and that an interface would still need to be written for the new application to get data to and from BizTalk. Are we right to prefer a custom API solution over BizTalk? Would BizTalk be suitable as a data broker layer providing an interface for the new customer portal we are writing? We have no experience with BizTalk, so any input would be appreciated.

    Read the article
