Search Results

Search found 20475 results on 819 pages for 'multiple repositories'.


  • DB2Command ExecuteNonQuery Insert multiple rows problem

    - by DB2 Nubie
    I'm attempting to insert multiple rows into a DB2 database using C# code like this:

        string query = "INSERT INTO TESTDB2.RG_Table (V,E,L,N,Q,B,S,P) values" +
            "('lkjlkj', 'iouoiu', '2009-03-27 12:01:19', 'nnne', 'sdfdf', NULL, NULL, NULL)," +
            "('lkjlk2', 'iuoiu2', '2009-03-27 12:01:19', 'nnne2', 'sddf2', NULL, NULL, NULL)";
        DB2Command cmd = new DB2Command(query, this.transactionConnection, this.transaction);
        cmd.ExecuteNonQuery();

    If I stop building the query string after the first set of values is included, it executes without an error. Attempting to load multiple values using this method results in the following error:

        Upload error : ERROR [42601] [IBM][DB2] SQL0104N An unexpected token "," was found following "". Expected tokens may include: "". SQLSTATE=42601

    The SQL syntax matches that which I have read elsewhere, such as http://stackoverflow.com/questions/452859/inserting-multiple-rows-in-a-single-sql-query, and IBM's documentation gives this example:

        cmd = conn.CreateCommand();
        cmd.Transaction = trans;
        cmd.CommandText = "INSERT INTO company_a VALUES(5275, 'Sanders', 20, 'Mgr', 15, 18357.50), " +
            "(5265, 'Pernal', 20, 'Sales', NULL, 18171.25), " +
            "(5791, 'O''Brien', 38, 'Sales', 9, 18006.00)";
        cmd.ExecuteNonQuery();

    Can anyone explain what could account for this?
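
    One possible fallback, if the multi-row VALUES form is simply not accepted by the target server (some DB2 versions and platforms reject it even though it is documented for others): issue one single-row INSERT per ExecuteNonQuery inside the same transaction. This is a hedged sketch that reuses the connection and transaction fields from the snippet above and keeps the literal values only for brevity; parameter markers would be preferable in real code.

        // Fallback sketch: one row per statement, all within the existing transaction.
        string[] rowValues =
        {
            "('lkjlkj', 'iouoiu', '2009-03-27 12:01:19', 'nnne', 'sdfdf', NULL, NULL, NULL)",
            "('lkjlk2', 'iuoiu2', '2009-03-27 12:01:19', 'nnne2', 'sddf2', NULL, NULL, NULL)"
        };

        foreach (string values in rowValues)
        {
            string sql = "INSERT INTO TESTDB2.RG_Table (V,E,L,N,Q,B,S,P) VALUES " + values;
            DB2Command rowCmd = new DB2Command(sql, this.transactionConnection, this.transaction);
            rowCmd.ExecuteNonQuery();   // any failure can still roll back the whole transaction
        }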

    Read the article

  • Call Multiple Stored Procedures with the Zend Framework

    - by Brian Fisher
    I'm using Zend Framework 1.7.2, MySQL and the MySQLi PDO adapter. I would like to call multiple stored procedures during a given action. I've found that on Windows there is a problem calling multiple stored procedures. If you try it you get the following error message: SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute. I found that to work around this issue I could just close the connection to the database after each call to a stored procedure: if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') { //If on windows close the connection $db->closeConnection(); } This has worked well for me, however, now I want to call multiple stored procedures wrapped in a transaction. Of course, closing the connection isn't an option in this situation, since it causes a rollback of the open transaction. Any ideas, how to fix this problem and/or work around the issue. More info about the work around Bug report about the problem

    Read the article

  • Is there any way to optimize this LINQ where clause that searches for multiple keywords on multiple columns?

    - by Daniel T.
    I have a LINQ query that searches for multiple keywords on multiple columns. The intention is that the user can search for multiple keywords and it will search for the keywords on every property in my Media entity. Here is a simplified example:

        var result = repository.GetAll<Media>().Where(x =>
            x.Title.Contains("Apples") || x.Description.Contains("Apples") || x.Tags.Contains("Apples") ||
            x.Title.Contains("Oranges") || x.Description.Contains("Oranges") || x.Tags.Contains("Oranges") ||
            x.Title.Contains("Pears") || x.Description.Contains("Pears") || x.Tags.Contains("Pears")
        );

    In other words, I want to search for the keywords Apples, Oranges, and Pears on the columns Title, Description, and Tags. The outputted SQL looks like this:

        SELECT * FROM Media this_
        WHERE (((((((( this_.Title like '%Apples%'
            or this_.Description like '%Apples%')
            or this_.Tags like '%Apples%')
            or this_.Title like '%Oranges%')
            or this_.Description like '%Oranges%')
            or this_.Tags like '%Oranges%')
            or this_.Title like '%Pears%')
            or this_.Description like '%Pears%')
            or this_.Tags like '%Pears%')

    Is this the most optimal SQL in this case? If not, how do I rewrite the LINQ query to create the most optimal SQL statement? I'm using SQLite for testing and SQL Server for actual deployment.
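
    A hedged sketch of one way to avoid hand-writing every keyword/column combination: keep the keywords in a collection and let the predicate loop over them (this assumes using System.Linq; whether a given LINQ provider translates it to the same OR-chained LIKEs, to an EXISTS subquery, or fails to translate at all varies, so the generated SQL is worth checking):

        var keywords = new[] { "Apples", "Oranges", "Pears" };   // hypothetical user input

        var result = repository.GetAll<Media>()
            .Where(x => keywords.Any(k =>
                x.Title.Contains(k) ||
                x.Description.Contains(k) ||
                x.Tags.Contains(k)));

    Note also that leading-wildcard patterns like '%Apples%' cannot use ordinary B-tree indexes, so on large tables the bigger optimization is usually a full-text index rather than a different shape of OR clause.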

    Read the article

  • Multiple Application Support Directories for iPhone Simulator?

    - by Alex G
    I am developing an iPhone app with someone else. The app works fine for me, but he is running into a bug. We think this bug is related to the fact that he is getting multiple Application directories for this same app. In my ~/Library/Application Support/iPhone Simulator/User/Applications, I only have one folder at all times. He says that he will get 3 or 4 directories when he is only working on this one app. We think this is our problem because our bug has to do with displaying images that are stored in the app's Documents folder. Does anyone know why he is ending up with multiple directories or how to stop it? Edit: Here is the code for writing the image to a file: NSData *image = [NSData dataWithContentsOfURL:[NSURL URLWithString:[currentArticle articleImage]]]; NSArray *array = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *imagePath = [array objectAtIndex:0]; NSFileManager *NSFM = [NSFileManager defaultManager]; BOOL isDir = YES; if(![NSFM fileExistsAtPath:imagePath isDirectory:&isDir]) if(![NSFM createDirectoryAtPath:imagePath attributes:nil]) NSLog(@"error"); imagePath = [imagePath stringByAppendingFormat:@"/images"]; if(![NSFM fileExistsAtPath:imagePath isDirectory:&isDir]) if(![NSFM createDirectoryAtPath:imagePath attributes:nil]) NSLog(@"error"); imagePath = [imagePath stringByAppendingFormat:@"/%@.jpg", [currentArticle uniqueID]]; [image writeToFile:imagePath atomically:NO]; And here is the code for getting the path when I need the image: - (NSString *)imagePath { NSArray *array = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *imagePath = [array objectAtIndex:0]; return [imagePath stringByAppendingFormat:@"/images/%@.jpg", [self uniqueID]]; } The app works great for me, but my partner says that the images don't show up intermittently, and he notices that he gets multiple directories in his Applications folder.

    Read the article

  • Android Multiple Handlers Design Question

    - by Soumya Simanta
    This question is related to an existing question I asked. I thought I'd ask a new question instead of replying to the other one, since I cannot comment on my previous question because of a word limit. Marc wrote:

        "I've more than one Handlers in an Activity." Why? If you do not want a complicated handleMessage() method, then use post() (on Handler or View) to break the logic up into individual Runnables. Multiple Handlers makes me nervous.

    I'm new to Android, so my question is: is having multiple handlers in a single activity a bad design?

    Here is a sketch of my current implementation. I have a MapActivity that creates a data thread (a UDP socket that listens for data). My first handler is responsible for sending data from the data thread to the activity. On the map I have a bunch of "dynamic" markers that are refreshed frequently. Some of these markers are video markers, i.e., if the user clicks a video marker, I add a video view that extends android.opengl.GLSurfaceView to my map activity and display video on this new view. I use my second handler to send information about the marker the user tapped, from the ItemizedOverlay onTap(int index) method. The user can close the video view by tapping on it; I use my third handler for this. I would appreciate it if people can tell me what's wrong with this approach and suggest better ways to implement this. Thanks.

    Read the article

  • Storing an object to use in multiple classes

    - by Aaron Sanders
    I am wondering the best way to store an object in memory that is used in a lot of classes throughout an application. Let me set up my problem for you: We have multiple databases, 1 per customer. We also have a master table and each row is detailed information about the databases such as database name, server IP it's located and a few config settings. I have an application that loops through those multiple databases and runs some updates on them. The settings I mentioned above are updated each loop iteration into memory. The application then runs through series of processes that include multiple classes using this data. The data never changes during the processes, only during the loop iteration. The variables are related to a customer, so I have them stored in a customer class. I suppose I could make all of the members shared or should I use a singleton for the customer class? I've never actually used a singleton, only read they are good in this type of situation. Are there better solutions to this type of scenario? Also, I could have plans for this application to be multithreaded later. Sorry if this is confusing. If you have questions, let me know and I will answer them. Thanks for your help.
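
    A hedged sketch of the non-singleton route, with hypothetical names (shown in C#; it translates directly to VB.NET): build one immutable settings object per loop iteration and hand it to the classes that need it. Because nothing mutates it after construction, it stays safe to share across threads later without locking.

        // One immutable snapshot of the customer settings per loop iteration.
        public sealed class CustomerSettings
        {
            public CustomerSettings(string databaseName, string serverIp)
            {
                DatabaseName = databaseName;
                ServerIp = serverIp;
            }

            public string DatabaseName { get; private set; }
            public string ServerIp { get; private set; }
        }

        // Each process in the iteration receives the same instance through its constructor
        // instead of reaching for a global or shared member.
        public class UpdateRunner
        {
            private readonly CustomerSettings _settings;

            public UpdateRunner(CustomerSettings settings)
            {
                _settings = settings;
            }

            // ... work here reads _settings.DatabaseName, _settings.ServerIp, etc.
        }

    A singleton would also work, but it reintroduces exactly the shared mutable state that makes a later multithreaded version harder; passing the object keeps each customer's run independent.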

    Read the article

  • How do I combine multiple changelog / commit entries in mercurial

    - by Kimvais
    I currently have a repository, where development work has been committed directly into the 'default' branch. I'd like to find a way to combine multiple small changesets into a single changeset, for merging the changes to child repositories. E.g in the default/main repo there are changesets something like: 12839: Fooed the bar 12838: Fixed blonking 12837: Trumped slamdunks ... 323: Started development for thingamajiks and I would like this to look like (in the child repos): 323: Added thingamajik functionality

    Read the article

  • Capistrano fails for multiple host deployments

    - by morris082
    I be at a loss here, and after scouring the seas (read: internet) for solutions I am left with none other than to hit up the stack. any help appreciated. I have capistrano running locally for deployments onto several different environments. (I'm on windows 7, fwiw). All was well until I needed to deploy to multiple :app servers during a single deployment. Usually I'm prompted for my ssh passphrase once when I call 'cap deploy'. I have ssh-agent running (git never pesters for my pass) but despite this Capistrano has always bugged me once each deployment. Regardless, it always worked when deploying to ONE host. Now, when I attempt to deploy to multiple servers at once, it asks for my passphrase what appears to be multiple times: (ips removed by ME) servers: ["redacted", "redacted"]<br /> Enter passphrase for ~/.ssh/id_rsa: Enter passphrase for ~/.ssh/id_rsa: So with the above I enter my passphrase but this doesn't work. It waits as little while, then spits out this error: connection failed for: <one of the server ips> (NoMethodError: undefined method `overwrite' for nil:NilClass) And that's the end of that. I can "passwordless" ssh into the servers I'm deploying on just fine. I'm pretty certain the ssh-agent is running since I can hit Git w/out entering my passphrase every time Using 'forward_agent' setting in cap deploy did not work. This is my role: role :app, "ip 1 removed", "ip 2 removed" If i set default_run_options[:max_hosts] = 1, it works OK but it asks for my passphrase for every single connection to each host I'm deploying to.. which ends up being a lot. Essentially I'm looking for any of the below (but not limited to): - "You're never going to fix that on windows" - "This is how you get REAL passwordless deployment in capistrano" - "Have you overlooked this setting/feature?" - "I have a rock that can fix anything, you may borrow it" Thanks!

    Read the article

  • Multiple Concurrent Postbacks when using UpdatePanels

    - by d4nt
    Here's an example app that I built to demonstrate my problem. A single aspx page with the following on it:

        <form id="form1" runat="server">
            <asp:ScriptManager runat="server" />
            <asp:Button runat="server" ID="btnGo" Text="Go" OnClick="btnGo_Click" />
            <asp:UpdatePanel runat="server">
                <ContentTemplate>
                    <asp:TextBox runat="server" ID="txtVal1" />
                </ContentTemplate>
            </asp:UpdatePanel>
        </form>

    Then, in the code-behind, we have the following:

        protected void btnGo_Click(object sender, EventArgs e)
        {
            Thread.Sleep(5000);
            Debug.WriteLine(string.Format("{0}: {1}", DateTime.Now.ToString("HH:MM:ss.fffffff"), txtVal1.Text));
            txtVal1.Text = "";
        }

    If you run this and click the "Go" button multiple times, you will see multiple debug statements in the Output window showing that multiple requests have been processed. This appears to contradict the documented behaviour of update panels (i.e. if you make a request while one is processing, the first request gets terminated and the current one is processed). Anyway, the point is I want to fix it. The obvious option would be to use JavaScript to disable the button after the first press, but that strikes me as hard to maintain; we potentially have the same issue on a lot of screens, and it could easily be broken if someone renames a button. Do you have any suggestions? Perhaps there is something I could do in BeginRequest in Global.asax to detect a duplicate request? Is there some setting or feature on the UpdatePanel to stop it doing this, or maybe something in the AjaxControlToolkit that will prevent it?
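
    One server-side option (a hedged sketch, not a built-in UpdatePanel setting): a synchronizer token kept in both Session and ViewState, assuming session state and ViewState are enabled and with SubmitToken being a name invented here. A duplicate postback still carries the previous token, so it can be detected and skipped without any client-side script.

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // Issue a fresh token on the first render; it travels back with the form's ViewState.
                Session["SubmitToken"] = ViewState["SubmitToken"] = Guid.NewGuid().ToString();
            }
        }

        protected void btnGo_Click(object sender, EventArgs e)
        {
            // A second click fired before the first response arrives re-posts the old token,
            // while Session already holds the rotated one, so the duplicate is dropped.
            if (!Equals(ViewState["SubmitToken"], Session["SubmitToken"]))
            {
                ViewState["SubmitToken"] = Session["SubmitToken"];   // resync so later clicks still work
                return;
            }

            Session["SubmitToken"] = ViewState["SubmitToken"] = Guid.NewGuid().ToString();

            // ... original btnGo_Click work goes here ...
        }

    Because ASP.NET serializes concurrent requests that use read-write session state, the two postbacks are handled one after the other, which is what makes the token comparison reliable.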

    Read the article

  • Avoiding Multiple Dialog Calls with htaccess

    - by Jeffrey J Weimer
    OK, I'm new to this, so pardon if the question is already a FAQ. Searching multiple places still leaves me dumbfounded. I have a Web site generated with iWeb09/Mac hosting on an ISP. To secure certain pages, I am trying to set up .htaccess + .htpasswd files. The basic directory structure is ... Main index.html Images.html Images (some css, js stuff) Media Image01 Image01.jpeg ... Image02 Image02.jpeg ... I want to password protect access to the Images directory and all the files therein. The index.html file has a link to the Images.html file that contains the layout for the files in the Images directory. I have put a basic .htaccess file at the Main level that restricts access via ... <Files "Images.html"> AuthType Basic AuthName "Images" AuthUserFile /Main/.htpasswd AuthGroupFile /dev/null Require valid-user </Files> I have then created a valid .htpasswd file. All works at the start, however after the first call to set up the Images.html page, the secure login prompt is displayed multiple times, presumably once for every sub-sub-directory Images/Media/ImageXX (with multiple sub-directories, I just give up after two or three times). I have also tried placing the .htaccess file inside the Images directory with the same problem. Recommendations I have seen suggest a better convention is needed in the basic .htaccess file itself. Alternatively, perhaps a companion .htaccess is needed in the Images directory. So, how do I fix this problem? -- JJW

    Read the article

  • How does static code run with multiple threads?

    - by Krisc
    I was reading http://stackoverflow.com/questions/1511798/threading-from-within-a-class-with-static-and-non-static-methods and I am in a similar situation. I have a static method that pulls data from a resource and creates some runtime objects based on the data.

        static class Worker
        {
            public static MyObject DoWork(string filename)
            {
                MyObject mo = new MyObject();
                // ... does some work
                return mo;
            }
        }

    The method takes a while (in this case it is reading 5-10 MB files) and returns an object. I want to take this method and use it in a multithreaded situation so I can read multiple files at once. Design issues / guidelines aside, how would multiple threads access this code? Let's say I have something like this:

        class ThreadedWorker
        {
            public void Run()
            {
                Thread t = new Thread(OnRun);
                t.Start();
            }

            void OnRun()
            {
                MyObject mo = Worker.DoWork("somefilename");
                mo.WriteToConsole();
            }
        }

    Does the static method run for each thread, allowing for parallel execution?
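
    A hedged sketch, reusing the Worker and MyObject types above: each call to the static method gets its own stack frame, so the local mo in every thread is independent, and parallel calls are safe as long as DoWork touches only its parameter and locals rather than shared static state.

        using System.Collections.Generic;
        using System.Threading;

        class ThreadedWorkers
        {
            public static void RunAll(IEnumerable<string> filenames)
            {
                var threads = new List<Thread>();

                foreach (string filename in filenames)
                {
                    string file = filename;        // copy so each closure captures its own value
                    var t = new Thread(() =>
                    {
                        // Each thread has its own 'mo'; nothing here is shared between threads.
                        MyObject mo = Worker.DoWork(file);
                        mo.WriteToConsole();
                    });
                    threads.Add(t);
                    t.Start();
                }

                foreach (Thread t in threads)
                    t.Join();                      // wait until every file has been processed
            }
        }

    The only thing to watch is anything static inside DoWork (a shared cache, a static counter, a single shared stream); those accesses would need their own synchronization.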

    Read the article

  • Multiple ajax request and progress bar

    - by hunt
    Hi, in the following piece of code I create a progress bar and show its progress as the ajax request gets processed. I am faking the progress by adding 5 to the cnt counter variable, and I check when the counter reaches 90; at that point, if the request has not yet completed, I pause/disable the progress bar, and whenever the response comes I complete the whole progress bar to 100. The problem is that I want to add multiple progress bars, since I am firing multiple ajax requests. The code below implements only one request and one progress bar, but I want it for more than one. Since global variables are used here for checking the response and the timer id, I don't know how well I can handle it for multiple requests.

        var cnt = 0;
        var res = null;

        function getProgress(data) {
            res = data;
        }

        var i = 0;

        $('#start').click(function () {
            i = setInterval(function () {
                if (res != null) {
                    clearInterval(i);
                    $("#pb1").progressbar("option", "value", cnt = cnt + 100);
                }
                var value = $("#pb1").progressbar("option", "value");
                if (value >= 90 && res == null) {
                    $("#pb1").progressbar("option", "disable");
                } else {
                    $("#pb1").progressbar("option", "value", cnt = cnt + 5);
                }
            }, 2500);

            $.ajax({
                url: 'http://localhost/beta/demo.php',
                success: getProgress
            });
        });

        $("#pb1").progressbar({
            value: 0,
            change: function (event, ui) {
                if (res != null)
                    clearInterval(i);
            }
        });

    Read the article

  • Beginner Question: When extracting a large subset of a table from MySQL, how do indexing and the order of tables matter?

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs.

    tblA has 4 columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC.

    tblB has 3 columns: colA, colB, B_id. It has about 10^4 records.

    I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA
            AND a.colB = b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions:

    1) What indexes should I create to optimize this query? I think I just need a multi-column index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the single multi-column index.

    2) Is INNER JOIN correct? I just want results where a match is found.

    3) Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that.

    4) Does the order of the conditions after ON matter? This previous answer says that the optimizer also takes care of the execution order.

    Read the article

  • Problems selecting a multiple select value from the database in Rails

    - by Ramy
    From inside of a form_for in rails, I'm inserting multiple select values into the database, like this: <div class="new-partner-form"> <%= form_for [:admin, matching_profile.partner, matching_profile], :html => {:id => "edit_profile", :multipart => true} do |f| %> <%= f.submit "Submit", :class => "hidden" %> <div class="rounded-block quarter-wide radio-group"> <h4>Exclude customers from source:</h4> <%= f.select :source, User.select(:source).group(:source).order(:source).map {|u| [u.source,u.source]}, {:include_blank => false}, {:multiple => true} %> <%= f.error_message_on :source %> </div> I'm then trying to pull the value from the database like this: def does_not_contain_source(matching_profiles) Expression.select(matching_profiles, :source) do |keyword| Rails.logger.info("Keyword is : " + keyword) @customer_source_tokenizer ||= Tokenizer.new(User.select(:source).where("id = ?", self.owner_id).map {|u| u.source}[0]) #User.select("source").where("id = ?", self.owner_id).to_s) @customer_source_tokenizer.divergent?(keyword) end end but getting this: ExpressionErrors: Bad syntax: --- - "" - B - "" this is what the value is in the database but it seems to choke when i access it this way. What's the right way to do this?

    Read the article

  • Parse multiple named command line parameters

    - by scholzr
    I need to add the ability for a program to accept multiple named parameters when it is opened via the command line, i.e.

        program.exe /param1=value /param2=value

    and then be able to use these parameters as variables in the program. I have found a couple of ways to accomplish pieces of this, but can't seem to figure out how to put it all together. I have been able to pass one named parameter and recover it using the code below, and while I could duplicate it for every possible named parameter, I know that can't be the preferred way to do this.

        Dim inputArgument As String = "/input="
        Dim inputName As String = ""

        For Each s As String In My.Application.CommandLineArgs
            If s.ToLower.StartsWith(inputArgument) Then
                inputName = s.Remove(0, inputArgument.Length)
            End If
        Next

    Alternatively, I can get multiple unnamed parameters from the command line using My.Application.CommandLineArgs, but this requires that the parameters all be passed in the same order/format each time. I need to be able to pass a random subset of parameters each time. Ultimately, what I would like to do is separate each argument and value and load them into a multidimensional array for later use. I know that I could find a way to do this by splitting the string at the "=" and stripping the "/", but as I am somewhat new to this, I wanted to see if there is a preferred way of dealing with multiple named parameters.
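
    A hedged sketch of the usual shape for this (shown in C# for brevity, since the sketches on this page share one language; it translates directly to VB.NET, and all names here are invented): parse every /name=value argument into a dictionary keyed by name, so any subset of parameters can arrive in any order, and look values up by key instead of building a multidimensional array.

        using System;
        using System.Collections.Generic;

        static class NamedArgs
        {
            // Turns { "/input=foo.txt", "/mode=fast" } into { "input" -> "foo.txt", "mode" -> "fast" }.
            public static Dictionary<string, string> Parse(IEnumerable<string> args)
            {
                var values = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

                foreach (string arg in args)
                {
                    if (!arg.StartsWith("/")) continue;          // ignore anything not in /name=value form

                    int eq = arg.IndexOf('=');
                    if (eq < 0)
                        values[arg.Substring(1)] = "";           // bare switch such as /verbose
                    else
                        values[arg.Substring(1, eq - 1)] = arg.Substring(eq + 1);
                }
                return values;
            }
        }

    Calling code passes in My.Application.CommandLineArgs (or Environment.GetCommandLineArgs()), then checks the dictionary for each key it cares about and ignores the rest.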

    Read the article

  • How do common web frameworks (Django, Rails, Symfony, etc.) handle multiple instances of the same plugin?

    - by Steven Wei
    Do any of the popular web frameworks solve this problem well? Here's an example: suppose you're running one of these web frameworks and you want to install a blog plugin. Except instead of a single blog, you need to run two separate instances of the blog plugin, and you want to keep them segregated. Or say you want to install multiple instances of a user authentication plugin, because you want to segregate your administrative users from your customer user accounts. Or say you want to install multiple instances of a wiki plugin for different parts of your site, or multiple instances of a comments plugin, or whatever else. It seems to me that at the basic level, each instance of plugin would need to be able to configured with a different set of database tables, and would need to be 'installed' at a different URL path. My experience is mostly with Django and Symfony, and I haven't seen a clean solution to this problem in either of them. They both tend to assume that each plugin (or app, in Django's case) is only ever going to be installed once. I'm curious if the Rails folks have figured out a clean solution to this problem, or any other framework authors (in any language). And if you were going to design a solution to this problem, what would it look like?

    Read the article

  • iPhone toolbar shared by multiple views

    - by codemonkey
    Another iPhone noob question. The app I'm building needs to show a shared custom UIToolbar for multiple views (and their subviews) within a UITabBarController framework. The contents of the custom toolbar are the same across all the views. I'd like to be able to design the custom toolbar as a xib and handle UI events from its own controller class (I'm assuming I can subclass UIToolbar to do so?). That way I could define IBOutlet & IBAction items, etc. Then I could associate this custom toolbar with eachs of the UITabBarController views (and their subviews). But I'm having trouble finding out whether that's possible - and if so, how to do it. In particular, I want to be able to push new views onto UINavigationControllers that are each associated with parent UITabBarController tabs. So, to summarize, I want a: custom toolbar shared by multiple views which are managed by multiple navigation controllers and the navigation controllers are associated with different tabs of a parent tab bar controller The tab bar controller itself is launched modally, though I don't believe that's relevant. Anyway, the tab bar controller is working, as are its child navigation controllers. I'm just having a little trouble figuring out how to persist the shared toolbar to the various subviews. I'd settle for a good clean way of implementing programmatically... though I'd prefer the flexibility of keeping the toolbar's visual design in a xib. Anyone have any suggestions?

    Read the article

  • Issue with Multiple Text Fields and SharedObject Storing

    - by user1662660
    I'm currently working on an AIR for iOS application in Flash CS6. I'm trying to store multiple pieces of data from various text inputs, i.e. "name_txt", "number_txt", etc. I have the following code working for a local save file:

        import flash.events.Event;
        import flash.desktop.NativeApplication;
        import flash.events.Event;

        var n1:String = so.data.Number1;
        var so:SharedObject = SharedObject.getLocal("TravelPal");

        emerg1.text = n1;
        emerg1.addEventListener(Event.CHANGE, updateEmerg1);

        function updateEmerg1(e:Event):void {
            so.data.Number1 = emerg1.text;
            so.flush();
        }

        NativeApplication.nativeApplication.addEventListener(Event.EXITING, onExit);

        function onExit(e:Event):void {
            so.flush();
        }

    Now as soon as I create multiple text inputs and attempt to store them in my SharedObject, the whole system just falls apart. None of the text gets saved, not even the previously working ones. I'm pretty new to SharedObject usage. What am I missing here? Is this a good way to go about storing multiple text inputs?

    Read the article

  • Multiple configurations in Qt

    - by user360607
    Hi all! I'm new to Qt Creator and I have several questions regarding multiple build configurations. A side note: I have Qt Creator 1.3.1 installed on my Linux machine.

    I need to have two configurations in my Qt Creator project. The thing is that these aren't simply debug and release but are based on the target architecture - x86 or x64. I came across http://stackoverflow.com/questions/2259192/building-multiple-targets-in-qt-qmake and from that went on to try something like:

        Conf_x86 {
            TARGET = MyApp_x86
        }
        Conf_x64 {
            TARGET = MyApp_x64
        }

    This way, however, I don't seem to be able to use the Qt Creator IDE to build each of these separately (Build All, Rebuild All, etc. options from the IDE menu). Is there a way to achieve this - maybe even show Conf_x86 and Conf_x64 as new build configurations in Qt Creator?

    One other thing: the Qt I have is 64-bit, so by default the target built using the Qt Creator IDE will also be 64-bit. I noticed that the effective qmake call in the build step includes the option '-spec linux-g++-64'. I also noticed that should I add '-spec linux-g++-32' in 'Additional arguments', it would override '-spec linux-g++-64' and the resulting target would be 32-bit. How can I achieve this by simply editing the contents of the .pro file? I saw that all these changes are initially saved in the .pro.user file, but that doesn't suit me at all. I need to be able to make these configurations from the .pro file if possible. Any help will be appreciated. 10x in advance!

    Read the article

  • Doctrine: Unable to execute either CROSS JOIN or SELECT FROM Table1, Table2?

    - by ropstah
    Using Doctrine I'm trying to execute either a 1. CROSS JOIN statement or 2. a SELECT FROM Table1, Table2 statement. Both seem to fail. The CROSS JOIN does execute, however the results are just wrong compared to executing in Navicat. The multiple table SELECT doesn't event execute because Doctrine automatically tries to LEFT JOIN the second table. The cross join statement (this runs, however it doesn't include the joined records where the refClass User_Setting doesn't have a value): $q = new Doctrine_RawSql(); $q->select('{s.*}, {us.*}') ->from('User u CROSS JOIN Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key') ->addComponent('u', 'User u') ->addComponent('s', 'Setting s') ->addComponent('us', 'u.User_Setting us') ->where('s.sct_auto_key = ? AND u.usr_auto_key = ?',array(1, $this->usr_auto_key)); And the select from multiple tables (this doesn't event run. It does not spot the many-many relationship between User and Setting in the first ->from() part and throws an exception: "User_Setting" with an alias of "us" in your query does not reference the parent component it is related to.): $q = new Doctrine_RawSql(); $q->select('{s.*}, {us.*}') ->from('User u, Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key') ->addComponent('u', 'User u') ->addComponent('s', 'Setting s') ->addComponent('us', 'u.User_Setting us') ->where('s.sct_auto_key = ? AND u.usr_auto_key = ?',array(1, $this->usr_auto_key));

    Read the article

  • mysql - multiple where and search

    - by Shamil
    I'm trying to write a SQL query that satisfies multiple criteria. Of these, most are connected via a column, so joins are possible, however, some queries are such that I'd have to search additional tables for the information. What would be the least expensive and best way to do this? Let's say that we have a few tables. One table contains information such as sales information for a server: the salesperson, client id, service lease term, timestamps etc. It is possible that a client has multiple sales but with a different "service". I'd need to pick up all of the different ones. Another table has the quotes for the services, I'd need to pick some information out about this, whilst another, which could be joined to this one has some more information. Those tables are linked by a common client ID, so joins are possible, but I'd also need to search the first table for multiple instances of the client ID. Of course, I'd want to restrict the search to certain timestamps, which I can easily do as the timestamps are stored in MySQL format.

    Read the article

  • Need Multiple Sudoku Solutions

    - by user1567909
    I'm trying to output multiple sudoku solutions in my program. For example, when you enter this as input:

        8..6..9.5.............2.31...7318.6.24.....73...........279.1..5...8..36..3......

    .'s denote blank spaces. Numbers represent already-filled spaces. The output should be a sudoku solution like so:

        814637925325149687796825314957318462241956873638274591462793158579481236183562749

    However, I want to output multiple solutions. This would be all the solutions that should be printed:

        814637925325149687796825314957318462241956873638274591462793158579481236183562749
        814637925325941687796825314957318462241569873638472591462793158579184236183256749
        834671925125839647796425318957318462241956873368247591682793154579184236413562789
        834671925125839647796524318957318462241956873368247591682793154519482736473165289
        834671925125839647796524318957318462241965873368247591682793154519482736473156289

    But my program only prints out one solution. Below is my recursive routine for solving a sudoku puzzle:

        bool sodoku::testTheNumber(sodoku *arr[9][9], int row, int column)
        {
            if(column == 9)
            {
                column = 0;
                row++;
                if(row == 9)
                    return true;
            }
            if(arr[row][column]->number != 0)
            {
                return testTheNumber(arr, row, column+1);
            }
            for(int k = 1; k < 10; k++)
            {
                if(k == 10)
                {
                    arr[row][column]->number = 0;
                    return false;
                }
                if(rowIsValid(arr, k, row) && columnIsValid(arr, k, column) && boxIsValid(arr, k, row, column))
                {
                    arr[row][column]->number = k;
                    if(testTheNumber(arr, row, column+1)==true)
                    {
                        return true;
                    }
                    arr[row][column]->number = 0;
                }
            }
            return false;
        }

    Could anyone help me come up with a way to print out multiple solutions? Thanks.
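
    The usual change is to stop treating the first success as terminal: when the recursion reaches a full grid, record the solution and then return as if that branch had failed, so the search backtracks and keeps enumerating. A minimal sketch of that pattern (written in C#, like the other sketches on this page, rather than as a patch to the C++ above; all names are invented here):

        using System.Collections.Generic;

        static class SudokuEnumerator
        {
            // 0 means blank; returns every completed grid reachable from the input.
            public static List<int[,]> SolveAll(int[,] grid)
            {
                var solutions = new List<int[,]>();
                Search(grid, 0, 0, solutions);
                return solutions;
            }

            static void Search(int[,] g, int row, int col, List<int[,]> solutions)
            {
                if (col == 9) { col = 0; row++; }
                if (row == 9)
                {
                    solutions.Add((int[,])g.Clone());   // record the solution, then fall through to backtrack
                    return;
                }
                if (g[row, col] != 0)
                {
                    Search(g, row, col + 1, solutions);
                    return;
                }

                for (int k = 1; k <= 9; k++)
                {
                    if (IsValid(g, row, col, k))
                    {
                        g[row, col] = k;
                        Search(g, row, col + 1, solutions);   // do NOT stop at the first success
                        g[row, col] = 0;                      // undo and try the next digit
                    }
                }
            }

            static bool IsValid(int[,] g, int row, int col, int k)
            {
                for (int i = 0; i < 9; i++)
                    if (g[row, i] == k || g[i, col] == k) return false;

                int br = row - row % 3, bc = col - col % 3;
                for (int r = br; r < br + 3; r++)
                    for (int c = bc; c < bc + 3; c++)
                        if (g[r, c] == k) return false;

                return true;
            }
        }

    Printing each returned grid row by row reproduces the multi-solution output shown above; a counter and an early cut-off can be added if only the first N solutions are wanted.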

    Read the article

  • Authenticating Apache HTTPd against multiple LDAP servers with expired accounts

    - by Brian Bassett
    We're using mod_authnz_ldap and mod_authn_alias in Apache 2.2.9 (as shipped in Debian 5.0, 2.2.9-10+lenny7) to authenticate against multiple Active Directory domains for hosting a Subversion repository. Our current configuration is: # Turn up logging LogLevel debug # Define authentication providers <AuthnProviderAlias ldap alpha> AuthLDAPBindDN "CN=Subversion,OU=Service Accounts,O=Alpha" AuthLDAPBindPassword [[REDACTED]] AuthLDAPURL ldap://dc01.alpha:3268/?sAMAccountName?sub? </AuthnProviderAlias> <AuthnProviderAlias ldap beta> AuthLDAPBindDN "CN=LDAPAuth,OU=Service Accounts,O=Beta" AuthLDAPBindPassword [[REDACTED]] AuthLDAPURL ldap://ldap.beta:3268/?sAMAccountName?sub? </AuthnProviderAlias> # Subversion Repository <Location /svn> DAV svn SVNPath /opt/svn/repo AuthName "Subversion" AuthType Basic AuthBasicProvider alpha beta AuthzLDAPAuthoritative off AuthzSVNAccessFile /opt/svn/authz require valid-user </Location> We're encountering issues with users that have accounts in both Alpha and Beta, especially when their accounts in Alpha are expired (but still present; company policy is that the accounts live on for at a minimum of 1 year). For example, when the user x (which has en expired account in Alpha, and a valid account in Beta), the Apache error log reports the following: [Tue May 11 13:42:07 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14817] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub? [Tue May 11 13:42:08 2010] [warn] [client 10.1.1.104] [14817] auth_ldap authenticate: user x authentication failed; URI /svn/ [ldap_simple_bind_s() to check user credentials failed][Invalid credentials] [Tue May 11 13:42:08 2010] [error] [client 10.1.1.104] user x: authentication failure for "/svn/": Password Mismatch [Tue May 11 13:42:08 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/ Attempting to authenticate as a non-existant user (nobodycool) results in the correct behavior of querying both LDAP servers: [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://dc01.alpha:3268/?sAMAccountName?sub? [Tue May 11 13:42:40 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object] [Tue May 11 13:42:40 2010] [debug] mod_authnz_ldap.c(377): [client 10.1.1.104] [14815] auth_ldap authenticate: using URL ldap://ldap.beta:3268/?sAMAccountName?sub? [Tue May 11 13:42:44 2010] [warn] [client 10.1.1.104] [14815] auth_ldap authenticate: user nobodycool authentication failed; URI /svn/ [User not found][No such object] [Tue May 11 13:42:44 2010] [error] [client 10.1.1.104] user nobodycool not found: /svn/ [Tue May 11 13:42:44 2010] [debug] mod_deflate.c(615): [client 10.1.1.104] Zlib: Compressed 527 to 359 : URL /svn/ How do I configure Apache to correctly query Beta if it encounters an expired account in Alpha?

    Read the article

  • Splitting an internet connection between multiple separate subnetworks

    - by pythonian4000
    Problem I have an internet connection that I want to split between four separate networks. My requirements are: I need to be able to monitor the amount of bandwidth and data being used by each network, and notify or control as necessary. The four networks should only be able to connect to the internet, not each other. My parents need to be able to operate it, so it needs a simple, preferably Windows-based GUI. Progress so far Server I have a mini-ITX server with six Gigabit ethernet ports - one for the ethernet internet connection, one for each of the four networks, and one for remote access to the server for administration. Bandwidth control I spent a long time researching solutions here. The majority of the control systems/software I found could control bandwidth usage via QOS, but could not monitor or control the amount of data being used. Eventually I found the SoftPerfect Bandwidth Manager, which has everything I need in terms of monitoring and control - per-interface quota management, usage statistics, a web interface for checking usage, and email notifications when quotas are exceeded. It is also Windows-based and has a simple GUI. Internet sharing This is where I am having issues. I am currently using Windows XP Pro SP2 for the server (yes, I know this is far from ideal, but it's the only spare Windows OS I currently have). I can't use the built-in Internet Connection Sharing for several reasons: The upstream internet router has an IP of 192.168.0.1 which ICS clashes with, and I cannot change the router settings. ICS can only share an internet connection with a single interface, but I have four. I have tried bridging the four network cards, but then the Bandwidth Manager cannot see the four individual interfaces - it only sees the bridge. I have tried setting up Dual DHCP DNS server (and am having issues getting DHCP offers to be received by clients), but that would still require gateway software of some sort, which I have been unable to find. My current attempt is to use OpenVPN, with a server for the internet NIC and a separate client for each of the four networks. My thought is that I could bridge the OpenVPN TAP devices to each NIC, meaning that the Bandwidth Manager would control traffic from the bridge instead of the interface. I have not made much progress here though - I've never used OpenVPN before. Questions Is there a Windows software package that does everything I need? (Unlikely, I know) Is there a Windows software package that will share internet between multiple NICs without bridging? Are either of my about attempts feasible? Would it help to have a newer/server version of Windows? Is there a non-Windows alternative that is easy to use?

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time and like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure due to us not knowing what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known mac address directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to potentially push up the files. Would I need to manage hundreds of access keys and have each server customized to include this (do-able, but key management becomes nightmarish?) Could I restrict inbound connections somehow (eg by mac address), or just allow write-only to any client that was running the script? ( i could deal with a flood of data if someone got into a system? ) having a bucket per remote machine does not seem feasible due to bucket limits? I don't think I want to use a single common key as if one machine is breached then potentially, a malicious hack could get access to the filestore key and start deleting for ll clients, correct? I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my google terminology is wrong... I've written more than I should here, perhaps it can be summarised thus: In a perfect world I just want to have one of our techs install a new remote server into a location and it automagically starts sending files home with little or no intervention, and minimises risk? Pipedream or feasible? TIA, Aitch

    Read the article
