Search Results

Search found 11226 results on 450 pages for 'reverse thinking'.

  • Passing XML data and a user from a ColdFusion page to a .NET page

    - by Mark Rullo
    I'd appreciate some input on this situation; I can't figure out the best way to do this. I have some data that's being prepared for me in a ColdFusion app, and in an IFrame within the CF app we want to display some graphs (not strictly an image, it's an entire page) being generated on the .NET side of things. I'd like to pass XML data from the CF side to .NET, as well as the user. On the .NET side I'm putting the data in a session so the user can sift through it without the need to have it re-queried and re-passed from CF. What I've tried: generating XML with CF, putting it in a hidden form field, and auto-submitting (with JS) the form to the .NET side. The issue I'm having with this approach is the encoding being done on the form post. The data has entries like <entry data="hello &amp; goodbye">. It's an issue because it's being URL-encoded, POSTed, and when I get it on the .NET side I get <entry data="hello & goodbye">, which isn't well-formed XML. What I'd like to avoid: an intermediary DB approach (dropping the data in a DB on CF, picking it up with .NET) - I'd like to only display what is passed to the page, and I have security concerns with the data, it's very sensitive. Also passing the data to a web service, returning a GUID, and forwarding the user with a URL parameter to access the passed-in data - I think that'd be risky if someone happened on a link to that data, and I can't take that risk. I was thinking of passing the data with JSON, but I'm very unfamiliar with it. Thoughts? Thanks for your time, folks.
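
    One way to sidestep the form-post encoding entirely is to Base64-encode the XML on the CF side (ToBase64() in ColdFusion) before writing it into the hidden field; Base64 text survives URL-encoding untouched, so no entities get rewritten. A minimal sketch of the receiving .NET side, assuming the hidden field is named xmlData:

        string encoded = Request.Form["xmlData"];
        byte[] raw = Convert.FromBase64String(encoded);
        string xml = System.Text.Encoding.UTF8.GetString(raw);
        // XDocument.Parse throws if the payload is malformed, so bad posts fail fast
        var doc = System.Xml.Linq.XDocument.Parse(xml);
        Session["reportData"] = doc; // keep it server-side for later sifting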

  • Developers: How does BitLocker affect performance?

    - by Chris
    I'm an ASP.NET / C# developer. I use VS2010 all the time. I am thinking of enabling BitLocker on my laptop to protect the contents, but I am concerned about performance degradation. Developers who use IDEs like Visual Studio are working on lots and lots of files at once. More than the usual office worker, I would think. So I was curious if there are other developers out there who develop with BitLocker enabled. How has the performance been? Is it noticeable? If so, is it bad? My laptop is a 2.53GHz Core 2 Duo with 4GB RAM and an Intel X25-M G2 SSD. It's pretty snappy but I want it to stay that way. If I hear some bad stories about BitLocker, I'll keep doing what I am doing now, which is keeping stuff RAR'ed with a password when I am not actively working on it, and then SDeleting it when I am done (but it's such a pain).

  • Why is my rspec test failing?

    - by Justin Meltzer
    Here's the test: describe "admin attribute" do before(:each) do @user = User.create!(@attr) end it "should respond to admin" do @user.should respond_to(:admin) end it "should not be an admin by default" do @user.should_not be_admin end it "should be convertible to an admin" do @user.toggle!(:admin) @user.should be_admin end end Here's the error: 1) User password encryption admin attribute should respond to admin Failure/Error: @user = User.create!(@attr) ActiveRecord::RecordInvalid: Validation failed: Email has already been taken # ./spec/models/user_spec.rb:128 I'm thinking the error might be somewhere in my data populator code: require 'faker' namespace :db do desc "Fill database with sample data" task :populate => :environment do Rake::Task['db:reset'].invoke admin = User.create!(:name => "Example User", :email => "[email protected]", :password => "foobar", :password_confirmation => "foobar") admin.toggle!(:admin) 99.times do |n| name = Faker::Name.name email = "example-#{n+1}@railstutorial.org" password = "password" User.create!(:name => name, :email => email, :password => password, :password_confirmation => password) end end end Please let me know if I should reproduce any more of my code. UPDATE: Here's where @attr is defined, at the top of the user_spec.rb file: require 'spec_helper' describe User do before(:each) do @attr = { :name => "Example User", :email => "[email protected]", :password => "foobar", :password_confirmation => "foobar" } end
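
    One clue is in the failure header itself: "User password encryption admin attribute" suggests this describe block is nested inside a "password encryption" block whose own before(:each) already ran User.create!(@attr), so the second create! trips the email-uniqueness validation (the populate task only matters if the test database was never reset). A hedged sketch of one fix, reusing the existing record instead of recreating it:

        describe "admin attribute" do
          before(:each) do
            # guess at the cause: @user may already exist from an outer before(:each)
            @user = User.find_by_email(@attr[:email]) || User.create!(@attr)
          end
          # ...examples unchanged...
        end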

  • Best way to implement some type of ITaggable interface

    - by Jack
    I've got a program I'm creating that reports on another program's backup XML files. I've gotten to the point where I need to implement some type of ITaggable interface, but am unsure how to go about it code-wise. My idea is that each item (BackupClient, BackupVersion, and BackupFile) should implement an ITaggable interface for highlighting old, out-of-date, or non-existent files in their HTML or Excel report. The user will be able to specify tags in the settings. My question is this: how can a user dynamically specify a "tag" such as file date 3 days old? - background color = red. Actually, I guess my question is more: how can I, the programmer, implement this dynamically? I was thinking expression trees, but am unsure this is the way to go as I haven't studied them much. I know my ITaggable interface would have methods such as AddTag(T tag) and RemoveTag(T tag), but what exactly specifies the criteria for the tag to be added? I realize this may be subjective, and can be marked as wiki if need be, but I truly am stuck. Any input would be greatly helpful!
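
    Expression trees are probably overkill here; a plain Func<T, bool> predicate built from the user's setting values can carry the criteria. A sketch (the FileDate property on BackupFile and the setting names are assumptions):

        public class Tag
        {
            public string Name { get; set; }
            public string BackgroundColor { get; set; }
            public Func<BackupFile, bool> Criteria { get; set; }
        }

        public interface ITaggable
        {
            IList<Tag> Tags { get; }
        }

        // a setting like "file date older than 3 days -> red" becomes:
        int days = settings.MaxAgeDays;            // e.g. 3, read from user settings
        var stale = new Tag
        {
            Name = "Stale",
            BackgroundColor = settings.StaleColor, // e.g. "Red"
            Criteria = f => f.FileDate < DateTime.Now.AddDays(-days)
        };
        // at report time, each file tags itself against every configured rule:
        if (stale.Criteria(file)) file.Tags.Add(stale);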

  • Counting characters in an Access database column in SQL

    - by jzr
    Good evening. My problem is possibly very easy; I've spent some time researching it and probably have brain lock, so help would be much appreciated. Database structure: col1 col2 col3 col4 ==================== 1233+4566+ABCD+CDEF 1233+4566+ACD1+CDEF 1233+4566+D1AF+CDEF I need to count the characters in col3; the wanted result from the previous table would be: char count =========== A 3 B 1 C 2 D 3 F 1 1 2 Is this possible to achieve using SQL only? At the moment I am thinking of passing a parameter into the SQL query, counting the characters one by one, and then summing; however, I have not started the VBA part yet and frankly wouldn't want to. This is my query at the moment: PARAMETERS X Long; SELECT First(Mid(TABLE.col3,X,1)) AS [col3 Field], Count(Mid(TABLE.col3,X,1)) AS Dcount FROM TEST GROUP BY Mid(TABLE.col3,X,1) HAVING (((Count(Mid([TABLE].[col3],[X],1)))>=1)); Ideas and help are much appreciated; as said, this is probably very easy for some of you - I don't usually work with Access and SQL. Thanks.
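
    One SQL-only way is to cross join against a small helper table of character positions, which avoids both the parameter and any VBA. A sketch, assuming a table Numbers with a single column num holding 1 through 4 (one row per character position in col3):

        SELECT Mid(TEST.col3, Numbers.num, 1) AS ch, Count(*) AS char_count
        FROM TEST, Numbers
        WHERE Numbers.num <= Len(TEST.col3)
        GROUP BY Mid(TEST.col3, Numbers.num, 1)
        ORDER BY Mid(TEST.col3, Numbers.num, 1);

    Against the sample rows this yields A 3, B 1, C 2, D 3, F 1 and 1 2, matching the wanted result.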

  • Learning Java and logic using a debugger. Did I cheat?

    - by centr0
    After a break from coding in general, my way of thinking logically faded (as if it was there to begin with...). I'm no master programmer - intermediate at best. I decided to see if I could write an algorithm to print out the Fibonacci sequence in Java. I got really frustrated because it was something so simple, and used the debugger to see what was going on with my variables. I solved it in less than a minute with the help of the debugger. Is this cheating? When I read code, either from a book or someone else's, I now find that it takes me a little more time to understand. If the algorithm is complex (to me) I end up writing notes as to what's going on in the loop - a primitive debugger, if you will. When you other programmers read code, do you also need to write things down as to what the code is doing? Or are you a genius and just retain it?

  • Setting up a NAS with Citrix XenServer

    - by JasonBrown
    Just a quick query for anyone who's worked with XenServer. I want to set up a NAS at home, but with virtualization (I've looked into VMware Server and KVM - I quite like KVM!), and I was told about XenServer 5.5. I have commodity hardware (ASUS board, dual-core 2.66GHz CPU with 8GB RAM), and I need to set up a file server to house about 2-3TB worth of data (big chunky video - not porn!). I need to run Linux (preferably CentOS) but also run Windows virtualised for testing. I was thinking of going the XenServer route; however, I want to be able to offer a VM access to the 2-3TB of HDDs (5 HDD drives) directly so it can do its thing (maybe using FreeNAS). Would this be possible with XenServer? Or will I have to do more work - and another box - to offer this? My goals are to use FreeNAS (ZFS!) for the file server, CentOS for SVN and the other bits we need (LAMP stack), and Windows for our Win32 testing, all on one box. I see the iSCSI target bits and get scared.

  • Turn-based synchronization between threads

    - by Amarus
    I'm trying to find a way to synchronize multiple threads having the following conditions: * There are two types of threads: 1. A single "cyclic" thread executing an infinite loop to do cyclic calculations 2. Multiple short-lived threads not started by the main thread * The cyclic thread has a sleep duration between each cycle/loop iteration * The other threads are allowed execute during the inter-cycle sleep of the cyclic thread: - Any other thread that attempts to execute during an active cycle should be blocked - The cyclic thread will wait until all other threads that are already executing to be finished Here's a basic example of what I was thinking of doing: // Somewhere in the code: ManualResetEvent manualResetEvent = new ManualResetEvent(true); // Allow Externally call CountdownEvent countdownEvent = new CountdownEvent(1); // Can't AddCount a CountdownEvent with CurrentCount = 0 void ExternallyCalled() { manualResetEvent.WaitOne(); // Wait until CyclicCalculations is having its beauty sleep countdownEvent.AddCount(); // Notify CyclicCalculations that it should wait for this method call to finish before starting the next cycle Thread.Sleep(1000); // TODO: Replace with actual method logic countdownEvent.Signal(); // Notify CyclicCalculations that this call is finished } void CyclicCalculations() { while (!stopCyclicCalculations) { manualResetEvent.Reset(); // Block all incoming calls to ExternallyCalled from this point forward countdownEvent.Signal(); // Dirty workaround for the issue with AddCount and CurrentCount = 0 countdownEvent.Wait(); // Wait until all of the already executing calls to ExternallyCalled are finished countdownEvent.Reset(); // Reset the CountdownEvent for next cycle. Thread.Sleep(2000); // TODO: Replace with actual method logic manualResetEvent.Set(); // Unblock all threads executing ExternallyCalled Thread.Sleep(1000); // Inter-cycles delay } } Obviously, this doesn't work. There's no guarantee that there won't be any threads executing ExternallyCalled that are in between manualResetEvent.WaitOne(); and countdownEvent.AddCount(); at the time the main thread gets released by the CountdownEvent. I can't figure out a simple way of doing what I'm after, and almost everything that I've found after a lengthy search is related to producer/consumer synchronization which I can't apply here.
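
    The pattern described - many concurrent short jobs that must drain before an exclusive cycle runs, and are blocked while it runs - is exactly what a reader/writer lock provides, without the ManualResetEvent/CountdownEvent race. A sketch with ReaderWriterLockSlim:

        using System.Threading;

        class CyclicGate
        {
            private readonly ReaderWriterLockSlim gate = new ReaderWriterLockSlim();
            private volatile bool stopCyclicCalculations;

            public void ExternallyCalled()
            {
                gate.EnterReadLock();            // blocks only while a cycle is active
                try { Thread.Sleep(1000); }      // TODO: actual method logic
                finally { gate.ExitReadLock(); }
            }

            public void CyclicCalculations()
            {
                while (!stopCyclicCalculations)
                {
                    gate.EnterWriteLock();       // waits for in-flight callers, then blocks new ones
                    try { Thread.Sleep(2000); }  // TODO: actual cycle logic
                    finally { gate.ExitWriteLock(); }
                    Thread.Sleep(1000);          // inter-cycle delay; callers run freely here
                }
            }
        }

    There is no window where a caller has passed the gate but isn't counted, because entering the read lock is a single atomic step.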

  • SharePoint 2007 - Content deployment and swapping content database

    - by Mel Lota
    Hi all, I'm currently working on a SharePoint 2007 site which is set up to allow clients to author content on a staging server; this is then automatically pushed up to the live environment via content deployment. The content deployment is set up in 'Content deployment jobs and paths' in central admin. Now the problem I've got is that it seems that, historically, a mixture of full and incremental deployments has been done to the live site collection, which according to Stefan Goßner's best practices post (http://blogs.technet.com/stefan_gossner/pages/content-deployment-best-practices.aspx) is a bad idea, because things soon become out of sync. It's gotten to the point where the content deployment has just stopped working, and incremental or full deployments are throwing errors in the logs. What I'm thinking is that I probably need to perform a full content deployment to an empty site collection and then somehow switch the new, clean site collection with the current live one. I was wondering if anybody has any experience with this and could provide any pointers. I'm currently investigating the feasibility of performing the clean content deployment and then switching the live content database with the new one; however, in my tests I've found that as soon as I switch content databases, the incremental deployment still fails. Any help much appreciated. (Note: I did post this on SharePoint Overflow as well, but thought I'd put it on here in case anybody else has any ideas.) Cheers

  • Context path routing in Tomcat ( service swapping )

    - by jojovilco
    Here is what I would like to achieve: I have a web service A which I want to be able to deploy side by side with other web services of type A - different version(s). For now I assume two instances side by side. I need this because the service has a warm-up stage, which takes some time to build up stuff from the DB, and only after it is ready can it start serving requests. I was thinking of deploying to Tomcat context paths like "/ServiceA-1.0" and "/ServiceA-2.0", and then having a "virtual" context like "/ServiceA" which would point to the desired physical service, e.g. "/ServiceA-1.0". So the external world would know about ServiceA, but internally, my ServiceA-related stack would know about the versioned ServiceA URL (there are more components involved, but only ServiceA serves the outer world). When the new service is ready, I would just reconfigure the "virtual" context to point to the new service. So far, I have not been able to find out how to do this with Tomcat and am starting to think it is not possible. I found suggestions to place Apache Server in front of Tomcat and do the routing there, but I do not want to bring in another piece of software unless necessary. My questions are: is this kind of "virtual" context and routing possible with Tomcat? Any other options, wisdom, or lessons learned on how to achieve this kind of service-swapping scenario? Best, Jozef
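
    Tomcat has no built-in context aliasing, but newer releases (8 and up) ship a mod_rewrite-style valve that can do this routing in-process - a sketch, assuming that valve is an option. In conf/server.xml, inside the Host element:

        <Valve className="org.apache.catalina.valves.rewrite.RewriteValve" />

    and in conf/Catalina/localhost/rewrite.config:

        RewriteRule ^/ServiceA/(.*)$ /ServiceA-1.0/$1

    Swapping versions then means editing one rewrite rule rather than redeploying; on Tomcat 5.5/6 the usual answers really are a fronting proxy or a thin forwarding webapp mounted at /ServiceA.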

  • alternative in .NET to building Tables/TR/TD manually?

    - by egrunin
    I've got a pile of code here that looks like this: if (stateTax > 0) { tr = new TableRow(); tbl.Rows.Add(tr); td = new TableCell(); td.CssClass = "stdLabel"; td.Text = "NY State Tax"; tr.Cells.Add(td); td = new TableCell(); td.Text = string.Format("{0:C}", stateTax); td.CssClass = "justRight"; tr.Cells.Add(td); } This is a horrible hash of data and layout, but creating a special class or control every time something like this comes up seems almost as bad. What I usually do is write a helper function, which is concise but ugly: if (stateTax > 0) MakeRow(tbl, "NY State Tax", "stdLabel", string.Format("{0:C}", stateTax), "justRight"); And now I'm thinking: haven't I done this 100 times before? Isn't there a more modern way? Just to be explicit: it's a whole list of miscellaneous labels and values. They don't come from any particular data source, and the rules for when rows should be suppressed vary. This example has only two columns, but I have other places with more.
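
    One route that keeps the markup out of C# without writing a custom control per table: treat the rows as data and let a Repeater render them. A sketch, assuming a Repeater named taxRepeater whose ItemTemplate holds the <tr>/<td> markup with the stdLabel/justRight classes and Eval("Label")/Eval("Value") bindings (cityTax is illustrative):

        // requires System.Linq
        var rows = new[]
        {
            new { Label = "NY State Tax", Value = stateTax },
            new { Label = "NYC Tax",      Value = cityTax },
        }
        .Where(r => r.Value > 0)  // per-row suppression rule
        .Select(r => new { r.Label, Value = string.Format("{0:C}", r.Value) })
        .ToList();

        taxRepeater.DataSource = rows;
        taxRepeater.DataBind();

    Varying suppression rules become the Where clause, and extra columns are just extra properties plus one more <td> in the template.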

  • Generic Func<> as parameter to base method

    - by WestDiscGolf
    I might be losing the plot, but I hope someone can point me in the right direction. What am I trying to do? I'm trying to write some base methods which take Func<> and Action delegates, so that these methods handle all of the exception handling etc. and it's not repeated all over the place, while allowing the derived classes to specify what actions they want to execute. So far this is the base class: public abstract class ServiceBase<T> { protected T Settings { get; set; } protected ServiceBase(T setting) { Settings = setting; } public void ExecAction(Action action) { try { action(); } catch (Exception exception) { throw new Exception(exception.Message); } } public TResult ExecFunc<T1, T2, T3, TResult>(Func<T1, T2, T3, TResult> function) { try { /* what goes here?! */ } catch (Exception exception) { throw new Exception(exception.Message); } } } I want to execute an Action in the following way in the derived class (this seems to work): public void Delete(string application, string key) { ExecAction(() => Settings.Delete(application, key)); } And I want to execute a Func in a similar way in the derived class, but for the life of me I can't seem to work out what to put in the base class. I want to be able to call it in the following way (if possible): public object Get(string application, string key, int? expiration) { return ExecFunc(() => Settings.Get(application, key, expiration)); } Am I thinking too crazy or is this possible? Thanks in advance for all the help.
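
    The lambda in the Get example takes no parameters - application, key and expiration are captured by the closure - so the base method only needs the return type. A sketch:

        public TResult ExecFunc<TResult>(Func<TResult> function)
        {
            try
            {
                return function(); // the closure already carries application/key/expiration
            }
            catch (Exception exception)
            {
                // passing the original as InnerException preserves the stack trace
                throw new Exception(exception.Message, exception);
            }
        }

    The call site return ExecFunc(() => Settings.Get(application, key, expiration)); then compiles as-is, with TResult inferred by the compiler.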

  • How would I sort files to directories based on filenames?

    - by gnomed
    I have a huge number of files to sort, all named in some terrible convention. Here are some examples: (4)_mr__mcloughlin____.txt 12__sir_john_farr____.txt (b)mr__chope____.txt dame_elaine_kellett-bowman____.txt dr__blackburn______.txt Each of these names is supposed to be a different person (speaker). Someone in another IT department produced these from a ton of XML files using some script, but the naming is unfathomably stupid, as you can see. I need to sort literally tens of thousands of these files, with multiple files of text for each person, each with something stupid making the filename different, be it more underscores or some random number. They need to be sorted by speaker. This would be easier with a script to do most of the work; then I could just go back and merge folders that should be under the same name, or whatever. There are a couple of ways I was thinking about doing this: parse the names from each file and sort them into folders for each unique name; or get a list of all the unique names from the filenames, then look through this simplified list for similar ones, ask me whether they are the same, and once that's determined, sort them all accordingly. I plan on using Perl, but I can try a new language if it's worth it. I'm not sure how to go about reading each filename in a directory, one at a time, into a string for parsing into an actual name. I'm not completely sure how to parse with regexes in Perl either, but that might be googleable. For the sorting, I was just going to use the shell command: `cp filename.txt /example/destination/filename.txt` - because that's all I know, so it's easiest. I don't even have a pseudocode idea of what I'm going to do, so if someone knows the best sequence of actions, I'm all ears. I guess I am looking for a lot of help; I am open to any suggestions. Many many many thanks to anyone who can help. B.
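
    Since a new language is on the table, here is a sketch in Python; the cleanup rules are guesses from the five sample names and will need tuning against the real set (directory names are hypothetical):

        import os
        import re
        import shutil

        SRC, DEST = 'files', 'sorted'

        def speaker_of(filename):
            name = os.path.splitext(filename)[0]
            name = re.sub(r'^\(\w+\)|^\d+', '', name)  # strip leading "(4)", "(b)", "12"
            name = re.sub(r'_+', ' ', name).strip()    # collapse runs of underscores
            return name.lower()                        # "(4)_mr__mcloughlin____" -> "mr mcloughlin"

        for fn in os.listdir(SRC):
            folder = os.path.join(DEST, speaker_of(fn))
            os.makedirs(folder, exist_ok=True)         # one folder per speaker
            shutil.copy2(os.path.join(SRC, fn), folder)

    After a run, near-duplicate folders ("mr mcloughlin" vs "mcloughlin") can be merged by hand, which matches the plan above.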

  • PHP include taking too long

    - by wxiiir
    I have a PHP file of around 100MB which is full of arrays (only arrays). I've made a script that includes this file (for processing); first it exhausted the default XAMPP 128MB memory limit, and after I raised the limit to 1024MB it just takes forever and doesn't do anything. I'm sure the problem is caused by the sheer size of the file, because I've tried removing all lines of code and leaving just the include plus an echo (so I know when it finishes executing), and it does the same thing - takes forever. I've also tried to run the 100MB file on its own: same thing again. A 10MB file takes forever as well, but a similar 1MB file is almost instantly read and executed, so the problem must be more than just the file size. I was avoiding using C++ for a simple project like this and would rather not, as PHP is easier for me and the task doesn't need the added speed it would gain from C++ - but if I have no luck solving this problem, I guess I'll have to. EDIT - Reasons for not using a database: 1. Whoever made it didn't use a database, and it will be pretty hard to store this in an organized database if I'm not able to do something with it first - like just reading it, copying parts from it, or putting it in memory. 2. I don't have experience working with databases, as pretty much all the PHP I've ever done didn't need large amounts of stored data - 50KB at best. If I were planning a big project or huge chunks of data like this one, I definitely would, but I didn't make this mess to start with and now I have to undo it. 3. The logic of having to store a small portion of data like 10MB on the hard drive, when every computer now has enough RAM to fit the whole OS in it, is pretty much incomprehensible to me unless someone gives a good explanation. If I had to access a lot of such files simultaneously I would understand, but as I said, this is a simple project: this is the only file that will be accessed at a given time, and it isn't even for a website - it's to run a few times and be done with.
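
    Including a 100MB PHP source file forces the engine to tokenize and compile all of it, which is likely where the time goes. A sketch of one workaround (file names are hypothetical): pay that cost once to convert the arrays to JSON, then load the JSON on every later run, since json_decode is a plain data parse rather than a compile:

        <?php
        // one-off conversion (this run is still slow):
        include 'huge_arrays.php';                 // suppose it defines $data
        file_put_contents('huge_arrays.json', json_encode($data));

        // every later run:
        $data = json_decode(file_get_contents('huge_arrays.json'), true);

    If even that is too heavy, splitting the data into smaller per-section JSON files and loading only what a given run touches keeps memory in check.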

  • Python "callable" attribute (pseudo-property)

    - by mgilson
    In Python, I can alter the state of an instance by directly assigning to attributes, or by making method calls which alter the state of the attributes: foo.thing = 'baz' or: foo.thing('baz') Is there a nice way to create a class that accepts both of the above forms and scales to large numbers of attributes that behave this way? (Shortly, I'll show an example of an implementation that I don't particularly like.) If you're thinking that this is a stupid API, let me know, but perhaps a more concrete example is in order. Say I have a Document class. Document could have an attribute title. However, title may want to have some state as well (font, fontsize, justification, ...), but the average user might be happy enough just setting the title to a string and being done with it ... One way to accomplish this would be to: class Title(object): def __init__(self,text,font='times',size=12): self.text = text self.font = font self.size = size def __call__(self,*text,**kwargs): if(text): self.text = text[0] for k,v in kwargs.items(): setattr(self,k,v) def __str__(self): return '<title font={font}, size={size}>{text}</title>'.format(text=self.text,size=self.size,font=self.font) class Document(object): _special_attr = set(['title']) def __setattr__(self,k,v): if k in self._special_attr and hasattr(self,k): getattr(self,k)(v) else: object.__setattr__(self,k,v) def __init__(self,text="",title=""): self.title = Title(title) self.text = text def __str__(self): return str(self.title)+'<body>'+self.text+'</body>' Now I can use this as follows: doc = Document() doc.title = "Hello World" print (str(doc)) doc.title("Goodbye World",font="Helvetica") print (str(doc)) This implementation seems a little messy though (with _special_attr). Maybe that's because this is a messed-up API; I'm not sure. Is there a better way to do this? Or did I stray from the beaten path a little too far on this one? I realize I could use @property for this as well, but that wouldn't scale well at all if I had more than just one attribute which is to behave this way -- I'd need to write a getter and setter for each, yuck.
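
    A descriptor generalizes the _special_attr trick so each assign-or-call attribute is declared once at class level - no per-attribute properties and no __setattr__ override. A sketch, assuming Python 3.6+ for __set_name__:

        class Delegated:
            """Attribute that accepts plain assignment or call-style updates."""
            def __init__(self, wrapper_cls):
                self.wrapper_cls = wrapper_cls      # e.g. Title
            def __set_name__(self, owner, name):
                self.slot = '_' + name              # backing attribute, e.g. _title
            def __get__(self, obj, objtype=None):
                if obj is None:
                    return self
                return getattr(obj, self.slot)
            def __set__(self, obj, value):
                current = getattr(obj, self.slot, None)
                if current is None:
                    setattr(obj, self.slot, self.wrapper_cls(value))
                else:
                    current(value)                  # delegate to the wrapper's __call__

        class Document:
            title = Delegated(Title)
            def __init__(self, text="", title=""):
                self.title = title                  # routed through Delegated.__set__
                self.text = text

    Both doc.title = "Hello World" and doc.title("Goodbye World", font="Helvetica") keep working, and a second such attribute is just another one-line Delegated declaration.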

  • Finding 'free' times in MySQL

    - by James Inman
    Hi, I've got a table as follows: mysql> DESCRIBE student_lectures; +------------------+----------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------+----------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | course_module_id | int(11) | YES | MUL | NULL | | | day | int(11) | YES | | NULL | | | start | datetime | YES | | NULL | | | end | datetime | YES | | NULL | | | cancelled_at | datetime | YES | | NULL | | | lecture_type_id | int(11) | YES | | NULL | | | lecture_id | int(11) | YES | | NULL | | | student_id | int(11) | YES | | NULL | | | created_at | datetime | YES | | NULL | | | updated_at | datetime | YES | | NULL | | +------------------+----------+------+-----+---------+----------------+ I'm essentially wanting to find times when a lecture doesn't happen - so to do this I'm thinking a query to group overlapping lectures together (so, for example, 9am-10am and 10am-11am lectures will be shown as a single 9am-11am lecture). There may be more than two lectures back-to-back. I've currently got this: SELECT l.start, l2.end FROM student_lectures l LEFT JOIN student_lectures l2 ON ( l2.start = l.end ) WHERE l.student_id = 1 AND l.start >= '2010-04-26 09:00:00' AND l.end <= '2010-04-30 19:00:00' AND l2.end IS NOT NULL AND l2.end != l.start GROUP BY l.start, l2.end ORDER BY l.start, l2.start Which returns: +---------------------+---------------------+ | start | end | +---------------------+---------------------+ | 2010-04-26 09:00:00 | 2010-04-26 11:00:00 | | 2010-04-26 10:00:00 | 2010-04-26 12:00:00 | | 2010-04-26 10:00:00 | 2010-04-26 13:00:00 | | 2010-04-26 13:15:00 | 2010-04-26 16:15:00 | | 2010-04-26 14:15:00 | 2010-04-26 16:15:00 | | 2010-04-26 15:15:00 | 2010-04-26 17:15:00 | | 2010-04-26 16:15:00 | 2010-04-26 18:15:00 | ...etc... The output I'm looking for from this would be: +---------------------+---------------------+ | start | end | +---------------------+---------------------+ | 2010-04-26 09:00:00 | 2010-04-26 13:00:00 | | 2010-04-26 13:15:00 | 2010-04-26 18:15:00 | Any help appreciated, thanks!
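
    Rather than merging overlapping rows first, the gaps can be queried directly: a free slot starts at any lecture end that no other lecture overlaps, and runs to the next lecture start. A sketch (the student_id/date-range filters are omitted for brevity and belong on all three table references):

        SELECT a.end AS free_start,
               (SELECT MIN(b.start) FROM student_lectures b
                 WHERE b.start >= a.end) AS free_end
          FROM student_lectures a
         WHERE NOT EXISTS (SELECT 1 FROM student_lectures c
                            WHERE c.start <= a.end AND c.end > a.end);

    The row with free_end NULL marks the time after the last lecture and can be filtered out in an outer query.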

  • Array Assignment

    - by Mahesh
    Let me explain with an example - #include <iostream> void foo( int a[2], int b[2] ) // I understand that the compiler doesn't care about the array bound and converts these to int *a, int *b { a = b ; // However, the assignment operation is valid at this point. } int main() { int a[] = { 1,2 }; int b[] = { 3,4 }; foo( a, b ); a = b; // Why is this invalid here? return 0; } Is it because an array decays to a pointer when passed to the function foo() that the assignment operation is possible there? And in main, is it because a and b are of type int[] that the assignment operation is invalid? Don't a and b mean the same thing in both cases? Thanks. Edit 1: When I do it in the function foo, it assigns b's starting element location to a. So, thinking in those terms, what made the language designers not do the same in main()? I want to know the reason.
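
    In main, a and b are arrays, and a built-in array is a non-modifiable lvalue, so the language simply provides no assignment for it; inside foo the parameters are really pointers, so a = b copies an address, not elements. A sketch of getting genuine array value semantics, assuming C++11's std::array is available:

        #include <array>
        #include <iostream>

        int main() {
            std::array<int, 2> a{ {1, 2} };
            std::array<int, 2> b{ {3, 4} };
            a = b;  // fine: std::array is a class type with copy assignment
            std::cout << a[0] << ',' << a[1] << '\n';  // prints 3,4
        }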

  • Specifying a DLL reference

    - by Jesse
    I'm having trouble setting the path to a DLL that is not in the same directory as the executable. I have a reference to dllA.dll. At present, everything is just copied into the same directory and all is well; however, I need to move the executable to another directory while still referencing the DLL in the original directory. So, it's set up like: C:\Original\Dir program.exe dllA.dll dllB.dll dllC.dll But I need to have it set up like: C:\New\Dir program.exe dllB.dll dllC.dll Such that it is still able to reference dllA.dll in C:\Original\Dir. I tried the following, but to no avail: Set the "Copy Local" value to false for dllA.dll, because I want it to be referenced in its original location. Under "Tools > Options > Projects and Solutions > VC++ Directories" I have added the path to "C:\Original\Dir". Added "C:\Original\Dir" to both the PATH and LIB environment variables. At runtime, it informs me that it cannot locate dllA.dll. Maybe the above steps I took only matter at compile time? I was able to find this http://stackoverflow.com/questions/1382704/c-specifying-a-location-for-dll-reference But I was thinking that my above method should've worked. Any ideas?
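
    Assuming dllA.dll is a managed reference: VC++ Directories, PATH and LIB affect build-time lookup and native DLL loading, while the CLR by default only probes under the application base directory at runtime - which would explain why each attempt failed. A sketch of a managed-side workaround, hooked up at the top of Main:

        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // called only when normal probing has already failed
            string file = new System.Reflection.AssemblyName(args.Name).Name + ".dll";
            string path = System.IO.Path.Combine(@"C:\Original\Dir", file);
            return System.IO.File.Exists(path)
                ? System.Reflection.Assembly.LoadFrom(path)
                : null;
        };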

  • How to check if a thread is busy in C#?

    - by Sam
    I have a Windows Forms UI running on a thread, Thread1. I have another thread, Thread2, that gets tons of data via external events and needs to update the Windows UI. (It actually updates multiple UI threads.) I have a third thread, Thread3, that I use as a buffer thread between Thread1 and Thread2 so that Thread2 can continue to update other threads (via the same method). My buffer thread, Thread3, looks like this: public class ThreadBuffer { public ThreadBuffer(frmUI form, CustomArgs e) { form.Invoke((MethodInvoker)delegate { form.UpdateUI(e); }); } } What I would like is for my ThreadBuffer to check whether my form is currently busy doing previous updates. If it is, I'd like it to wait until the form frees up and then invoke UpdateUI(e). I was thinking about either: a) // Pseudocode while(form==busy) { // Do nothing; } form.Invoke((MethodInvoker)delegate { form.UpdateUI(e); }); How would I check form==busy? Also, I am not sure that this is a good approach. b) Create an event in the form that will notify the ThreadBuffer that it is ready to process: // Pseudocode List<CustomArgs> elist = new List<CustomArgs>(); public ThreadBuffer(frmUI form, CustomArgs e) { form.OnFreedUp += form_OnFreedUp; elist.Add(e); } private void form_OnFreedUp() { if (elist.Count == 0) return; form.Invoke((MethodInvoker)delegate { form.UpdateUI(elist[0]); }); elist.Remove(elist[0]); } In this case, how would I write an event that notifies when the form is free? Or c) any other ideas?
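
    For option c: rather than polling a busy flag, post the update and let the UI thread serialize it. Control.BeginInvoke queues the delegate on the form's message loop and returns immediately, so Thread2/Thread3 never block and the updates run in order as the form gets to them - a sketch:

        public class ThreadBuffer
        {
            public ThreadBuffer(frmUI form, CustomArgs e)
            {
                // queued on the UI message loop; executes when the form is free
                form.BeginInvoke((MethodInvoker)delegate { form.UpdateUI(e); });
            }
        }

    If events can arrive faster than the form can repaint, keep only the latest CustomArgs in a field guarded by a lock and have UpdateUI drain that instead of queuing every intermediate value.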

  • Saving a MySQL row checkpoint in a table?

    - by Keet
    Hello, I am having a wee problem, and I am sure there is a more convenient/simpler way to achieve the solution, but all searches are throwing up blanks at the moment! I have a MySQL DB that is regularly updated by a PHP page [via a cron job] which adds or deletes entries as appropriate. My issue is that I also need to check whether any details [i.e. the phone number or similar] for an entry have changed, but doing this on every call is not possible [not only does it seem to me to be overkill, but I am restricted by a third-party API call limit]. Plus, this is not critical info. So I was thinking it might be best to just check one entry per page call, and iterate through the rows/entries with each successive page call. What would be the best way of doing this, i.e. keeping track of which entry/row in the table should be checked next? I have two ideas of how to implement this: 1) The id of the current row could be saved to a file on the server [surely not the best way]. 2) An extra boolean field [check] is added to the table, set to TRUE on the first entry and FALSE on all others. Then on each page call it: finds 'where check = TRUE', runs the update check on this row, 'set check = FALSE', 'set [the next row] check = TRUE'. Is this the best way to do this, or does anyone have any better suggestions? Thanks in advance! .k PS sorry about the title
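
    A variation on idea 2 that avoids touching two data rows per call: keep the checkpoint in a one-row bookkeeping table and walk the ids. A sketch (table and column names are hypothetical):

        -- one-off setup
        CREATE TABLE check_state (last_id INT NOT NULL DEFAULT 0);
        INSERT INTO check_state VALUES (0);

        -- each page call: the next entry after the checkpoint
        SELECT id FROM entries
         WHERE id > (SELECT last_id FROM check_state)
         ORDER BY id LIMIT 1;

        -- after checking it (wrap back to 0 when no row was returned)
        UPDATE check_state SET last_id = 42;  -- the id just checked

    This also survives rows being added or deleted between calls, since it only relies on id ordering.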

  • Starting a personal reusable code repository.

    - by Rob Stevenson-Leggett
    Hi, I've been meaning to start a library of reusable code snippets for a while and never seem to get round to it. At the moment I just tend to have some transient classes/files that I drag out of old projects. I think my main problems are: Where to start - what structure should my repository take? Should it be a compiled library (where appropriate) or just classes/files I can drop into any project? Or a library project that can be included? What are the licensing implications of that? In my experience, a built/minified library will quickly become out of date and the source will get lost, so I'm leaning towards source that I can export from SVN and include in any project. Intellectual property - I am employed, so a lot of the code I write is not my IP. How can I ensure that I don't give my own IP away by using it on projects at work and at home? I'm thinking the best way would be to licence my library with an open source licence and make sure I only add to it in my own time using my own equipment, therefore making sure that if I use it in a work project the same rules apply as if I were using a third-party library. I write in many different languages and often require two or more parts of this library, so should I look at implementing a few template projects and a core project for each of my chosen reusable components and languages? Has anyone else got this sort of library, and how do you organise and update it?

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "INSERT ... ON DUPLICATE KEY UPDATE" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query. CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$ DECLARE member_content_type_id INTEGER; BEGIN member_content_type_id := (SELECT id FROM django_content_type WHERE app_label='web' AND model='member'); DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id; INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id, object_id_int, title, description, content, url, meta_encoded) SELECT 'default', member_content_type_id, web_member.id, web_member.id, web_member.name, '', web_user.email||' '||web_member.normalized_name||' '||web_country.name, '', '{}' FROM web_member INNER JOIN web_user ON (web_member.user_id = web_user.id) INNER JOIN web_country ON (web_member.country_id = web_country.id) WHERE web_user.is_active=TRUE; END; $$ LANGUAGE plpgsql; EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) in watson_searchentry is a unique pair in the table, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index.
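
    Since this is PostgreSQL (plpgsql), the MySQL-style ON DUPLICATE KEY UPDATE isn't available, but 9.5+ has the equivalent INSERT ... ON CONFLICT, and it composes with INSERT INTO ... SELECT directly. A sketch, assuming a unique index is added on (content_type_id, object_id_int):

        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
               object_id_int, title, description, content, url, meta_encoded)
        SELECT 'default', member_content_type_id, web_member.id, web_member.id,
               web_member.name, '',
               web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
               '', '{}'
          FROM web_member
          JOIN web_user    ON web_member.user_id    = web_user.id
          JOIN web_country ON web_member.country_id = web_country.id
         WHERE web_user.is_active
        ON CONFLICT (content_type_id, object_id_int) DO UPDATE
           SET title = EXCLUDED.title, content = EXCLUDED.content;

    Unchanged rows are still rewritten by the UPDATE arm, but rows for members that no longer qualify need only a separate, much smaller DELETE instead of the wholesale delete-and-reinsert.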

  • Is there a way to identify that a file has been modified and moved?

    - by Eric
    I'm writing an application that catalogs files and attributes them with extra metadata through separate "side-car" files. If changes to the files are made through my program, then it is able to keep everything in sync between them and their corresponding metadata files. However, I'm trying to figure out a way to deal with someone modifying the files manually while my program is not running. When my program starts up, it scans the file system and compares the files it finds to its previous record of what files it remembers being there. It's fairly straightforward to update after a file has been deleted or added. However, if a file was moved or renamed, my program sees that as the old file being deleted and a new file being added, yet I don't want to lose the association between the file and its metadata. I was thinking I could store a hash of each file so I could check whether newly found files were really previously known files that had been moved or renamed. However, if the file is both moved/renamed and modified, then the hash would not match either. So is there some other unique identifier of a file that I can track which stays with it even after it is renamed, moved, or modified?
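
    There is: on POSIX filesystems the inode number plays exactly this role, and NTFS keeps an analogous file index; both survive renames, moves within a volume, and content edits (but not copies to another volume). A sketch of reading it, assuming Python 3.5+ where os.stat exposes the NTFS file index as st_ino on Windows:

        import os

        def file_identity(path):
            st = os.stat(path)
            # (device, inode/file-index): stable across rename, move and edit
            return (st.st_dev, st.st_ino)

    Recording this pair alongside the hash covers the moved-and-modified case: match on identity first, and fall back to the hash for files restored from backup or copied across volumes.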

  • what's a good technique for building and running many similar unit tests?

    - by jcollum
    I have a test setup where I have many very similar unit tests that I need to run. For example, there are about 40 stored procedures that need to be checked for existence in the target environment. However I'd like all the tests to be grouped by their business unit. So there'd be 40 instances of a very similar TestMethod in 40 separate classes. Kinda lame. One other thing: each group of tests need to be in their own solution. So Business Unit A will have a solution called Tests.BusinessUnitA. I'm thinking that I can set this all up by passing a configuration object (with the name of the stored proc to check, among other things) to a TestRunner class. The problem is that I'm losing the atomicity of my unit tests. I wouldn't be able to run just one of the tests, I'd have to run all the tests in the TestRunner class. This is what the code looks like at this time. Sure, it's nice and compact, but if Test 8 fails, I have no way of running just Test 8. TestRunner runner = new TestRunner(config, this.TestContext); var runnerType = typeof(TestRunner); var methods = runnerType.GetMethods() .Where(x => x.GetCustomAttributes(typeof(TestMethodAttribute), false) .Count() > 0).ToArray(); foreach (var method in methods) { method.Invoke(runner, null); } So I'm looking for suggestions for making a group of unit tests that take in a configuration object but won't require me to generate many many TestMethods. This looks like it might require code-generation, but I'd like to solve it without that.
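
    Most test frameworks have a data-driven feature for exactly this shape of problem: one parameterized test method, one reported result per case. A sketch with MSTest v2's DataTestMethod (the proc names and the ProcedureExists helper are illustrative):

        [DataTestMethod]
        [DataRow("usp_GetCustomer")]
        [DataRow("usp_SaveOrder")]
        // ... one DataRow per stored procedure
        public void StoredProcedureExists(string procName)
        {
            Assert.IsTrue(TestDb.ProcedureExists(procName),
                          procName + " is missing from the target environment");
        }

    Each DataRow surfaces as its own test in the runner, so "Test 8" can fail, be rerun, and be reported individually, while the per-business-unit grouping stays at the solution/class level as before.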

  • What are the benefits of a classical structure over a prototypal one?

    - by Rixius
    I have only recently started programming significantly, and being completely self-taught, I unfortunately don't have the benefits of a detailed computer science course. I've been reading a lot about JavaScript lately, and I'm trying to find the benefit of classes over the prototypal nature of JavaScript. The question seems to be drawn down the middle of which one is better, and I want to see the classical side of it. When I look at the prototype example: var inst_a = { "X": 123, "Y": 321, add: function () { return this.X+this.Y; } }; document.write(inst_a.add()); And then the classical version: function A(x,y){ this.X = x; this.Y = y; this.add = function(){ return this.X+this.Y; }; }; var inst_a = new A(123,321); document.write(inst_a.add()); I began thinking about this because I'm looking at the new ECMAScript revision 5, and a lot of people seem up in arms that it didn't add a class system.
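
    Worth noting that the "classical version" above still pays a prototype-unfriendly cost: the add closure is rebuilt for every instance. Putting it on the constructor's prototype shares one function object across all instances - a sketch:

        // methods on the prototype are shared, not recreated per instance
        function A(x, y) { this.X = x; this.Y = y; }
        A.prototype.add = function () { return this.X + this.Y; };

        var inst_a = new A(123, 321);
        document.write(inst_a.add()); // 444

    Much of what a class system buys (shared methods, a named type to check with instanceof) is already available this way, which is part of why ES5 added Object.create and property descriptors rather than classes.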
