Search Results

Search found 6030 results on 242 pages for 'exists' (page 176 of 242).

  • how to group data in a list c#

    - by prince23
    Hi, I need to group the data of a list in C#. I have a class called Information.cs with the properties Name, School, and Parent. Example data:

        Name     School  Parent
        kumar    fes     All
        manju    fes     kumar
        anu      frank   kumar
        anitha   jss     All
        rohit    frank   manju
        anill    vijaya  manju
        vani     jss     kumar
        soumya   jss     kumar
        madhu    jss     rohit
        shiva    jss     rohit
        vanitha  jss     anitha
        anu      jss     anitha

    Taking this as input, I want the output formatted as hierarchical data. When Parent is "All", the row is a topmost level, e.g. kumar fes All. So I need to create obj[0] for that row, then check whether kumar exists as a parent elsewhere in the list; if it does, I add those rows as children under obj[0], creating one more object under each of them:

        kumar  fes    All    --> obj[0]
        manju  fes    kumar  --> obj1[0]
        anu    frank  kumar  --> obj1[1]

    For the obj1 items, obj[0] will be the parent. In the same way:

        anitha   jss  All     --> obj[1]
        vanitha  jss  anitha  --> obj1[0]
        anu      jss  vanitha --> obj2[0]

    Here obj2[0] --> obj1[0] --> obj[1] is the parent chain. From this I need to generate a list or an observation class. I hope my question is clear; I wanted to know how I can create such an observation class. Any help would be really great. Thanks, prince
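
    A minimal sketch of one way to build such a hierarchy, assuming an Information class with Name, School, and Parent string properties; the Node wrapper type is illustrative, not part of the original question:

        using System.Collections.Generic;
        using System.Linq;

        public class Node
        {
            public Information Data;
            public List<Node> Children = new List<Node>();
        }

        public static class TreeBuilder
        {
            public static List<Node> BuildTree(IEnumerable<Information> rows)
            {
                // One node per row; group the nodes by their Parent value
                // so each parent can find its children in a single lookup.
                var nodes = rows.Select(r => new Node { Data = r }).ToList();
                var byParent = nodes.ToLookup(n => n.Data.Parent);

                // Attach every node to the node whose Name matches its Parent.
                foreach (var node in nodes)
                    foreach (var child in byParent[node.Data.Name])
                        node.Children.Add(child);

                // Rows whose Parent is "All" are the top level.
                return nodes.Where(n => n.Data.Parent == "All").ToList();
            }
        }

    The returned top-level list can be wrapped in an ObservableCollection if it needs to be bound to a hierarchical control.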

  • PHP session_write_close() causes empty response

    - by Xeoncross
    When using session_write_close() in a shutdown function at the end of my script, PHP just dies. There is no error logged, no response headers (checked with Firebug), and no data returned (not even whitespace!). I have full PHP error reporting on with STRICT enabled, on PHP 5.2.1. My guess is that since session_write_close() is being called after shutdown, some fatal error is encountered that crashes PHP before it has a chance to send the output or log anything. This only happens on the logout page, where I first run:

        ...
        // If there is no session to delete (not started)
        if ( ! session_id()) { return; }

        // Get the session name
        $name = session_name();

        // Delete the session cookie (if it exists)
        if ( ! empty($_COOKIE[$name])) {
            // Get the current cookie config
            $params = session_get_cookie_params();

            // Delete the cookie from globals
            unset($_COOKIE[$name], $_SESSION);

            // Delete the cookie on the user agent
            setcookie($name, '', time() - 43200, $params['path'],
                      $params['domain'], $params['secure']);
        }

        // Destroy the session
        session_destroy();
        ...

    Then I 2) do some more stuff, 3) issue a redirect, and 4) finally, after the whole page is done, the register_shutdown_function() callback I registered earlier runs and calls session_write_close(), which saves the session to the database. The end. Since this blank response only occurs on logout, I'm guessing that I'm not restarting the session properly, which causes session_write_close() to die fatally at the end of the script.
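
    A minimal sketch of one way to make such a shutdown handler defensive, assuming the handler is your own code: skip the write when logout has already destroyed the session, since session_id() returns an empty string once no session is active.

        // Hypothetical shutdown handler: only flush a session that exists.
        function save_session_on_shutdown()
        {
            if (session_id() === '') {
                // session_destroy() already ran (logout); nothing to write.
                return;
            }
            session_write_close();
        }
        register_shutdown_function('save_session_on_shutdown');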

  • Bulk Copy from one server to another

    - by Joseph
    Hi all, I have a situation where I need to copy part of the data from one server to another. The table schemas are exactly the same. I need to move partial data from the source, which may or may not already exist in the destination table. The solution I'm considering is to use bcp to export the data to a text (or .dat) file, carry that file to the destination (the two servers are on different networks and not accessible at the same time), then import the data there. There are some conditions I need to satisfy:

    1. I need to export only a list of rows from the table, not the whole thing. My client is going to give me the IDs that need to be moved from source to destination. I have around 3000 records in the master table, and the same in the child tables; I expect only about 300 records to be moved.
    2. If a record already exists in the destination, the client will instruct, case by case, whether to ignore or overwrite it. 90% of the time we need to ignore the record without overwriting, but log it to a log file.

    Please help me with the best approach. I thought of using bcp with the queryout option to filter the data, but while importing, how do I bypass inserting the existing records? And if I do need to overwrite, how do I do that? Thanks a lot in advance. ~Joseph
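
    A minimal sketch of the import side, assuming the bcp file is first loaded into a staging table; all table, column, and path names here are illustrative:

        -- Load the exported file into a staging table with the same schema.
        BULK INSERT dbo.MasterStaging
        FROM 'C:\transfer\master.dat'
        WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

        -- Log the rows that already exist (the 90% "ignore" case).
        INSERT INTO dbo.ImportLog (ID, Note)
        SELECT s.ID, 'skipped: already exists'
        FROM dbo.MasterStaging s
        WHERE EXISTS (SELECT 1 FROM dbo.Master m WHERE m.ID = s.ID);

        -- Insert only the rows that are not in the destination yet.
        INSERT INTO dbo.Master (ID, Col1, Col2)
        SELECT s.ID, s.Col1, s.Col2
        FROM dbo.MasterStaging s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.Master m WHERE m.ID = s.ID);

        -- For the rare "overwrite" case, update in place instead.
        UPDATE m
        SET m.Col1 = s.Col1, m.Col2 = s.Col2
        FROM dbo.Master m
        JOIN dbo.MasterStaging s ON s.ID = m.ID;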

  • ant deploy to oc4j problem

    - by senzacionale
    BUILD FAILED

        C:\Projekti\Projekt ANT\build.xml:337: Problem: failed to create task or type antlib:oracle:deploy
        Cause: The name is undefined.
        Action: Check the spelling.
        Action: Check that any custom tasks/types have been declared.
        Action: Check that any <presetdef>/<macrodef> declarations have taken place.
        No types or tasks have been defined in this namespace yet
        This appears to be an antlib declaration.
        Action: Check that the implementing library exists in one of:
        - C:\Projekti\Apache ANT\apache-ant-1.8.1\bin\..\lib
        - C:\Documents and Settings\MitjaG\.ant\lib
        - a directory added on the command line with the -lib argument

    CODE:

        <target name="deploy" depends="init, ear">
          <oracle:deploy moduletype="ear"
                         host="${oc4j.host}"
                         port="${oc4j.admin.port}"
                         userid="${oc4j.admin.user}"
                         password="${oc4j.admin.password}"
                         file="${dest.dir}/${package.name}/${view.dir}/${deploy.dir}/${ear.file}"
                         deploymentname="${app.name}"
                         logfile="${log.dir}/deploy-ear.log"/>
          <oracle:bindWebApp host="${oc4j.host}"
                             port="${oc4j.admin.port}"
                             userid="${oc4j.admin.user}"
                             password="${oc4j.admin.password}"
                             deploymentname="${app.name}"
                             webmodule="${web.name}"
                             websitename="${oc4j.binding.module}"
                             contextroot="/${app.name}"/>
        </target>

    I googled and searched for a whole day, but I cannot find a solution. There are no good docs for OC4J and Ant except this one from 2005: http://www.it-eye.nl/weblog/2005/08/15/how-to-using-ant-to-deploy-to-oc4j-dp4/
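
    A minimal sketch of what has to be in place for the antlib:oracle prefix to resolve, assuming the stock Ant antlib mechanism: the namespace declared on the project element, and the OC4J Ant task jars (shipped with OC4J; exact jar names vary by release) on Ant's library path.

        <!-- The URI antlib:oracle makes Ant load the resource oracle/antlib.xml
             from its classpath, so the OC4J jars must sit in ANT_HOME/lib,
             in ~/.ant/lib, or be passed on the command line with -lib. -->
        <project name="myapp" xmlns:oracle="antlib:oracle">
            ...
        </project>

    Invoking it as, for example, ant -lib /path/to/oc4j/ant/lib deploy avoids copying jars into the Ant installation itself.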

  • Java / Groovy Socket - Detecting the socket being closed in a non-blocking way

    - by John Arrowwood
    I'm trying to create a small HTTP proxy that can rewrite the request/headers as needed to suit my requirements. If one already exists, please point me to it. Otherwise... I've written something that ALMOST works. It can do the proxy function, but not the rewriting (yet). The problem is, I can't detect when the remote socket has been closed without doing a blocking read, and it is CRITICAL for the functionality of this thing that it detect the socket being closed without blocking. I have SCOURED the Java API documentation, and I can't find ANY indication that it is even possible. Here's what I have:

        while ( this.inbound.isConnected() && this.outbound.isConnected() ) {
            try {
                while ( ( available = readFromClient.available() ) != 0 ) {
                    if ( available > 1024 )
                        available = 1024
                    bytesRead = readFromClient.read( buffer, 0, available )
                    writeToServer.write( buffer, 0, bytesRead )
                }
                while ( ( available = readFromServer.available() ) != 0 ) {
                    if ( available > 1024 )
                        available = 1024
                    bytesRead = readFromServer.read( buffer, 0, available )
                    writeToClient.write( buffer, 0, bytesRead )
                }
            } catch (e) {
                print e
            }

            println "Connected: " + this.inbound.isConnected()
            println "Bound: " + this.inbound.isBound()
            println "InputShutdown: " + this.inbound.isInputShutdown()
            println "OutputShutdown: " + this.inbound.isOutputShutdown()
            print "\n"

            Thread.sleep( 10 )
        }

    The tests for the socket being closed never indicate that it was closed. And, as I mentioned, I can't find ANY examples of how to detect the 'END OF FILE' condition on the stream without doing a blocking read. There HAS to be a way. Does anyone here know what it is?
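
    A minimal sketch of one common workaround, assuming plain java.net sockets: available() never reports end-of-stream, but read() returns -1 at EOF, and a socket timeout bounds how long the blocking read can wait.

        // Make blocking reads return within 10 ms instead of forever.
        socket.setSoTimeout( 10 )
        try {
            int n = readFromServer.read( buffer, 0, buffer.length )
            if ( n == -1 ) {
                // The peer closed the connection: end-of-stream reached.
                socket.close()
            } else if ( n > 0 ) {
                writeToClient.write( buffer, 0, n )
            }
        } catch ( SocketTimeoutException ignored ) {
            // No data arrived within the timeout; the socket is still open.
        }

    An alternative with the same non-blocking goal is java.nio's SocketChannel with a Selector, where a channel whose read() returns -1 has been closed by the peer.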

  • Github file size limit changed 6/18/13. Can't push now

    - by slindsey3000
    How does this change as of June 18, 2013 (https://github.com/blog/1533-new-file-size-limits) affect my existing repository with a file that exceeds the limit? I last pushed 2 months ago with a large file. I have since removed the large file locally, but I cannot push anything now; I get a "remote error":

        remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB

    I added the file to .gitignore after the original push, but it still exists on the remote (origin). Removing it locally should get rid of it at origin (GitHub), right? But it is not letting me push, because there is a file on GitHub that exceeds the limit. These are the commands I issued, plus the error messages:

        git add .
        git commit -m "delete cron_log.log"
        git push origin master

        remote: Error code: 40bef1f6653fd2410fb2ab40242bc879
        remote: warning: Error GH413: Large files detected.
        remote: warning: See http://git.io/iEPt8g for more information.
        remote: error: File cron_log.log is 141.41 MB; this exceeds GitHub's file size limit of 100 MB
        remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB
        To https://github.com/slinds(omited_here)/linexxxx(omited_here).git
         ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to 'https://github.com/slinds(omited_here)

    I then tried things like:

        git rm cron_log.log
        git rm --cached cron_log.log

    Same error.
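
    A minimal sketch of the usual fix, assuming the file should disappear from the repository's history entirely: the push is rejected because earlier commits still contain cron_log.log, so deleting it in a new commit is not enough; the history itself has to be rewritten.

        # Rewrite every commit so cron_log.log was never tracked.
        git filter-branch --force --index-filter \
            "git rm --cached --ignore-unmatch cron_log.log" \
            --prune-empty --tag-name-filter cat -- --all

        # The rewritten history diverges from origin, so force-push it.
        git push origin master --force

    Anyone else with a clone of the repository will need to re-clone or rebase onto the rewritten history afterwards.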

  • php foreach question

    - by user295189
    I have the following code:

        $oldID = -1;
        $column = 0;
        foreach ($pv->rawData as $data) {
            if ($oldID != $data->relativeTypeID) {
                $oldID = $data->relativeTypeID;
                $column++;
                $row = 1;
            }
            echo "Row: " . $row . ": Column: " . $column . ": ID" . $data->relativeTypeID . "<br>";
            // if a description exists
            if ($data->description) {
                // insert here in the array
                $pv->results[$data->relativeTypeID][$row][0] = $data->relation;
                $pv->results[$data->relativeTypeID][$row][1] = '';
                $pv->results[$data->relativeTypeID][$row][2] = '';
                $pv->results[$data->relativeTypeID][$row][3] = '';
                $row++;
            }
        }

    This generates this output:

        Row: 1: Column: 1: ID1
        Row: 2: Column: 1: ID1
        Row: 1: Column: 2: ID2
        Row: 2: Column: 2: ID2
        Row: 3: Column: 2: ID2
        Row: 4: Column: 2: ID2
        Row: 5: Column: 2: ID2
        Row: 6: Column: 2: ID2
        Row: 7: Column: 2: ID2
        Row: 8: Column: 2: ID2
        Row: 9: Column: 2: ID2
        Row: 10: Column: 2: ID2
        Row: 11: Column: 2: ID2
        Row: 1: Column: 3: ID3
        Row: 1: Column: 4: ID4
        Row: 1: Column: 5: ID8
        Row: 2: Column: 5: ID8
        Row: 3: Column: 5: ID8
        Row: 1: Column: 6: ID10
        Row: 2: Column: 6: ID10
        Row: 3: Column: 6: ID10
        Row: 4: Column: 6: ID10
        ...

    What I want it to do is stop at the top 4 columns, so I want output like this:

        Row: 1: Column: 1: ID1
        Row: 2: Column: 1: ID1
        Row: 1: Column: 2: ID2
        Row: 2: Column: 2: ID2
        Row: 3: Column: 2: ID2
        Row: 4: Column: 2: ID2
        Row: 5: Column: 2: ID2
        Row: 6: Column: 2: ID2
        Row: 7: Column: 2: ID2
        Row: 8: Column: 2: ID2
        Row: 9: Column: 2: ID2
        Row: 10: Column: 2: ID2
        Row: 11: Column: 2: ID2
        Row: 1: Column: 3: ID3
        Row: 1: Column: 4: ID4

    As you can see, it stopped at column 4. Thanks
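
    A minimal sketch of the usual way to cut the loop off, assuming the columns are assigned in ascending order as above: break out as soon as a fifth column would start.

        foreach ($pv->rawData as $data) {
            if ($oldID != $data->relativeTypeID) {
                $oldID = $data->relativeTypeID;
                $column++;
                $row = 1;
                // Stop once we move past the fourth column.
                if ($column > 4) {
                    break;
                }
            }
            // ... rest of the loop body unchanged ...
        }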

  • ASP.Net: Writing chunks of file..HTTP File upload with resume..

    - by Manish
    Please refer to this question: http://stackoverflow.com/questions/2062852/resume-in-upload-file-control

    To find a solution for the above question, I want to develop a user control that can resume an HTTP file upload. For this, I'm going to create a temporary file on the server until the upload is complete; once the upload is completely done, I will save it to the desired location. The procedure I have thought of is:

    1. Create a temporary file with a unique name (maybe a GUID) on the server.
    2. Read a chunk of the file and append it to this temp file on the server.
    3. Repeat step 2 until EOF is reached.
    4. If the connection is lost or the user clicks the pause button, writing stops and the temporary file is not deleted.
    5. Clicking resume, or uploading the same file again, checks on the server whether the file exists and asks the user whether to resume or to overwrite. (Not sure how to check that it's the same file. Also not sure how to stop sending the chunks from client to server.)
    6. Clicking resume starts from where the upload left off and appends to the file on the server. (Again, not sure how to do this.)

    Questions: Are these steps correct to achieve the goal, or do they need some modifications? I'm not sure how to implement all these steps. :-( Need ideas, links... Any help appreciated...
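
    A minimal server-side sketch of steps 1-3 and 6, assuming an ASP.NET HTTP handler that receives one chunk per request together with the byte offset to write it at; uploadId, offset, and the temp directory are illustrative names, not part of the original question:

        using System;
        using System.IO;
        using System.Web;

        // Hypothetical chunk receiver: appends each chunk at the offset the
        // client reports, so a resumed upload simply continues the file.
        public class ChunkUploadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string uploadId = context.Request["uploadId"]; // e.g. a GUID
                long offset = long.Parse(context.Request["offset"]);
                string tempPath = Path.Combine(@"C:\uploads\tmp", uploadId + ".part");

                using (var fs = new FileStream(tempPath, FileMode.OpenOrCreate, FileAccess.Write))
                {
                    fs.Seek(offset, SeekOrigin.Begin);
                    context.Request.InputStream.CopyTo(fs);
                }

                // Returning the file's current length tells the client where
                // to resume after a pause or a dropped connection.
                context.Response.Write(new FileInfo(tempPath).Length.ToString());
            }
        }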

  • Why are floating point values so prolific?

    - by Kibbee
    So, the title says it all: why are floating point values so prolific in computer programming? Given problems like rounding errors, and not being able to accurately represent numbers such as 0.1, I really can't see how they got as far as they did. I understand that computation is faster with floating point numbers; however, I can think of only a few cases where they are actually the right data type to use. If you sit back and think about every time you used a floating point value, how many times did you say: well, some error would be OK, as long as the result is a few microseconds faster? It really makes me think, because Jeff was talking about NP-completeness and how heuristics give an answer that is only kind of right. And, well, computers shouldn't do that; they should give you the answer that is correct. Yet we see floating point values used in many applications where they are completely invalid. What really bugs me isn't that floating point exists, but that in many languages there isn't even a viable non-floating-point decimal alternative. A lot of programmers, when writing financial applications, have to fall back on storing the number of cents in an integer field, which brings with it all kinds of other problems. Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate?

    [EDIT] Just to clarify: I was talking about base-2 floating point, not base-10 floating point. .NET offers the Decimal data type, a base-10 floating point value which offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base-10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.
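
    A two-line illustration of the representation problem the post leans on, as it plays out in C# (double is base-2, decimal is base-10):

        using System;

        class FloatDemo
        {
            static void Main()
            {
                // Base-2 double cannot represent 0.1 exactly, so the sum drifts.
                Console.WriteLine(0.1 + 0.2 == 0.3);     // False
                // decimal is a base-10 type, so the same sum is exact.
                Console.WriteLine(0.1m + 0.2m == 0.3m);  // True
            }
        }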

  • JPA/JDO entity to XML XSD generator.

    - by h2g2java
    I am using JDO or JPA on the GAE plugin in Eclipse, and I am using a SmartGWT DataSource, which accepts an XSD. I would like to be educated on how to generate an XSD from my JDO/JPA entity, and vice versa. Is there a tool to do it? While DataNucleus does all its magic enhancing in the Eclipse background, would I be able to somehow operate in a mode that generates XSDs for me? Can Hibernate operate in an offline mode, solely to help me generate XSDs, which I could then use in GWT without deploying Hibernate with my web app? Is Hibernate even capable of generating XSDs from entities, and vice versa? Currently, I am about to write a utility that generates an XSD given an entity class, but I am hoping I don't have to reinvent the wheel if something already exists. I am hoping people here can point me to any available tools to ease the XSD generation. By the way, I am very wary of anything that uses Maven, because most people (like Spring) who write the Maven scripts and POMs don't have the expertise to write them in a way that spews out messages and verbosity appropriately, making it easy for me to locate model errors.
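
    One option that already exists, assuming the entity classes can carry JAXB annotations alongside the JDO/JPA ones: JAXBContext can emit an XSD directly. Entity.class below is a placeholder for your own entity.

        import java.io.File;
        import java.io.IOException;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.SchemaOutputResolver;
        import javax.xml.transform.Result;
        import javax.xml.transform.stream.StreamResult;

        public class XsdGenerator {
            public static void main(String[] args) throws Exception {
                JAXBContext ctx = JAXBContext.newInstance(Entity.class);
                // JAXB calls back once per namespace with a suggested file name.
                ctx.generateSchema(new SchemaOutputResolver() {
                    @Override
                    public Result createOutput(String namespaceUri,
                                               String suggestedFileName)
                            throws IOException {
                        File file = new File(suggestedFileName);
                        StreamResult result = new StreamResult(file);
                        result.setSystemId(file.toURI().toURL().toString());
                        return result;
                    }
                });
            }
        }

    For the reverse direction (XSD to classes), the JAXB xjc tool does the same job offline, without deploying anything extra with the web app.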

  • OJB Reference Descriptor 1:0 relationship? Should I set auto-retrieve to false?

    - by godzillasdm
    Hi, I am having an issue using Apache OJB with Spring 2 inside my web app. I'm using an OJB reference-descriptor with two foreign-key properties. I have an object A (the parent) and an object B (the referenced object). The thing is, for a given object A there may or may not be an object B. In the case where there is no object B to go with object A, object B seems to get instantiated (through Spring?) anyway, but I am unable to access object B's members. Whenever I test whether object B == null, it always returns false, even though there is no matching row in the database. Since the object is never null, I figured I could test one of its members instead, like so:

        if (objectb.getDocumentNumber() == null) {
            return false;
        }

    However, I get an exception in the JSP:

        javax.servlet.jsp.el.ELException: An error occurred while getting property "documentNumber" from an instance of class org.sample.pojo.Objectb$$EnhancerByCGLIB$$78022a2

    and this exception in the debugger when it creates object B:

        com.sun.jdi.InvocationException occurred invoking method.

    I am guessing that the reference-descriptor must assume a 1:1+ relationship rather than a 1:0+ relationship. I was wondering if I should set the auto-retrieve attribute to false and then use the PersistenceBroker.retrieveAllReferences(Object obj) method as directed. However, that method's return type is void, so I am guessing Spring somehow creates and sets the reference class for me (bringing me back to the same issue), and I would still need a way to test whether the referenced object exists before calling retrieveAllReferences. Am I going about this all wrong? Does reference-descriptor not allow 1:0 relations? Any workaround for my problem? Your suggestions are greatly appreciated!
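
    A minimal defensive sketch, under the assumption that the two foreign-key values live on object A: test them before touching the possibly-absent reference, so the proxy for a 1:0 row is never dereferenced. The accessor names are hypothetical.

        // Hypothetical guard on the parent: the reference can only be real
        // when both FK columns are populated.
        public boolean hasObjectB() {
            return getFkPart1() != null && getFkPart2() != null;
        }

        // Caller side: guard every use of the reference.
        if (a.hasObjectB()) {
            String doc = a.getObjectB().getDocumentNumber();
        }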

  • process csv in scala

    - by portoalet
    I am using Scala 2.7.7 and wanted to parse a CSV file and store the data in an SQLite database. I ended up using the OpenCSV Java library to parse the CSV file, and the sqlitejdbc library. Using these Java libraries makes my Scala code look almost identical to Java code (sans semicolons, and with val/var). As I am dealing with Java objects, I can't use Scala lists, maps, etc., unless I do a scala2java conversion or upgrade to Scala 2.8. Is there a way I can simplify my code further using Scala bits that I don't know?

        val filename = "file.csv"
        val reader = new CSVReader(new FileReader(filename))
        var aLine = new Array[String](10)
        var lastSymbol = ""
        while( (aLine = reader.readNext()) != null ) {
          if( aLine != null ) {
            val symbol = aLine(0)
            if( !symbol.equals(lastSymbol)) {
              try {
                val rs = stat.executeQuery("select name from sqlite_master where name='" + symbol + "';" )
                if( !rs.next() ) {
                  stat.executeUpdate("drop table if exists '" + symbol + "';")
                  stat.executeUpdate("create table '" + symbol + "' (symbol,data,open,high,low,close,vol);")
                }
              } catch {
                case sqle : java.sql.SQLException => println(sqle)
              }
              lastSymbol = symbol
            }
            val prep = conn.prepareStatement("insert into '" + symbol + "' values (?,?,?,?,?,?,?);")
            prep.setString(1, aLine(0)) // symbol
            prep.setString(2, aLine(1)) // date
            prep.setString(3, aLine(2)) // open
            prep.setString(4, aLine(3)) // high
            prep.setString(5, aLine(4)) // low
            prep.setString(6, aLine(5)) // close
            prep.setString(7, aLine(6)) // vol
            prep.addBatch()
            prep.executeBatch()
          }
        }
        conn.close()
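
    A minimal sketch of a more Scala-flavoured read loop that works on 2.7, assuming OpenCSV's CSVReader: wrapping the reader in a Stream makes the null handling explicit and lets the body become a plain for-comprehension.

        // Lazily turn the reader into a stream of rows, ending at the
        // first null returned by readNext().
        def rows(reader: CSVReader): Stream[Array[String]] =
          reader.readNext() match {
            case null => Stream.empty
            case line => Stream.cons(line, rows(reader))
          }

        val reader = new CSVReader(new FileReader("file.csv"))
        for (line <- rows(reader)) {
          val symbol = line(0)
          // ... table bookkeeping and insert as before ...
        }

    Two smaller cleanups worth considering regardless of style: prepare the insert statement once per symbol instead of once per row, and call executeBatch() once per batch rather than after every addBatch().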

  • issue in ObservableCollection

    - by prince23
    Hi, I have a list with this data. I have a class called Information.cs with the properties Name, School, and Parent. Example data:

        Name     School  Parent
        kumar    fes     All
        manju    fes     kumar
        anu      frank   kumar
        anitha   jss     All
        rohit    frank   manju
        anill    vijaya  manju
        vani     jss     kumar
        soumya   jss     kumar
        madhu    jss     rohit
        shiva    jss     rohit
        vanitha  jss     anitha
        anu      jss     anitha

    Taking this as input, I want the output formatted as hierarchical data: when Parent is "All", the row is a topmost level (kumar fes All). What I need to do is create an object[0], then check in the list whether kumar exists as a parent; if it does, add those items under object[0] as their parent, and create one more object under each of them (manju fes kumar, anu frank kumar). This class file shows how the data is formatted:

        public class SampleProjectData
        {
            public static ObservableCollection<Product> GetSampleData()
            {
                DateTime dtS = DateTime.Now;

                ObservableCollection<Product> teams = new ObservableCollection<Product>();
                teams.Add(new Product()
                {
                    PDName = "Product1",
                    OverallStartTime = dtS,
                    OverallEndTime = dtS + TimeSpan.FromDays(3),
                });

                Project emp = new Project()
                {
                    PName = "Project1",
                    OverallStartTime = dtS + TimeSpan.FromDays(1),
                    OverallEndTime = dtS + TimeSpan.FromDays(6)
                };
                emp.Tasks.Add(new Task() { StartTime = dtS, EndTime = dtS + TimeSpan.FromDays(2), TaskName = "John's Task 3" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(3), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "John's Task 2" });
                teams[0].Projects.Add(emp);

                return teams;
            }
        }
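
    A minimal sketch of wiring the same parent/child data into an ObservableCollection hierarchy; the Item class and HierarchyBuilder name are illustrative, not part of the original question:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        // Hypothetical node type: hierarchical data templates can bind
        // to the Children collection.
        public class Item
        {
            public string Name { get; set; }
            public string School { get; set; }
            public string Parent { get; set; }

            private readonly ObservableCollection<Item> children =
                new ObservableCollection<Item>();
            public ObservableCollection<Item> Children { get { return children; } }
        }

        public static class HierarchyBuilder
        {
            public static ObservableCollection<Item> Build(IList<Item> flat)
            {
                // Index the first occurrence of each name for parent lookups.
                var byName = new Dictionary<string, Item>();
                foreach (Item i in flat)
                    if (!byName.ContainsKey(i.Name))
                        byName.Add(i.Name, i);

                var roots = new ObservableCollection<Item>();
                foreach (Item i in flat)
                {
                    if (i.Parent == "All")
                        roots.Add(i);                     // topmost level
                    else if (byName.ContainsKey(i.Parent))
                        byName[i.Parent].Children.Add(i); // nested level
                }
                return roots;
            }
        }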

  • Getting a new session key after Facebook offline_access permission

    - by Richard
    I have a mobile application that I'm using with Facebook Connect. I'm having trouble getting an offline_access session key after a user has granted extended permissions. Here's the user flow:

    1. The user goes to my site for the first time.
    2. I send them to m.facebook.com/tos.php? and pass my API key and secret.
    3. The user logs in using Facebook Connect.
    4. Facebook returns them to a page on my site, mysite/login-success.php, with an auth_token in the query string.
    5. On mysite/login-success.php I instantiate the FB API client and check whether I already have an offline_access session key for them: $facebook = new Facebook($appapikey, $appsecret);
    6. If they haven't already provided offline_access, FB gives me a temporary session key.
    7. I need offline_access permission from the user, so I forward them on to www.facebook.com/connect/prompt_permissions.php? and pass offline_access in the query string.
    8. The user authorizes offline_access and gets forwarded to mysite/permissions-success.php.

    The problem I'm having is that after instantiating the API client on permissions-success.php, the session key I have is still the temporary session key, not a new offline_access session key. The only way I've found to get the offline_access key is to delete all of the user's cookies and then have them log in again using Facebook Connect: a fairly poor user experience. Can anyone shed some light on how to use the Facebook API to generate a new session key even if one already exists (in my case, a temporary session key)?

  • Creating custom IP-STS for sharepoint foundation 2010 without ADFS

    - by user252229
    I plan to create a very simple custom IP-STS for SharePoint Foundation 2010 without an ADFS server, so anyone can integrate Windows Live ID into SharePoint Foundation 2010 simply, without ADFS. I can't use an ADFS server because it cannot be installed on Windows Web Server 2008 (Web Edition). I also found many articles that use the LDAP provider, but that doesn't exist in SharePoint Foundation either (it requires the Server edition). After much searching I found the following articles and have every technique in place except for one problem:

    1. Creating a custom claims provider: blogs.technet.com/b/speschka/archive/2010/03/13/writing-a-custom-claims-provider-for-sharepoint-2010-part-1.aspx
    2. Creating a custom STS provider: http://blogs.msdn.com/b/chunliu/archive/2010/04/02/how-to-make-use-of-a-custom-ip-sts-with-sharepoint-2010-part-1.aspx

    Only one step remains: I get the following error after entering a username on the STS site and being redirected to localhost/_trust/default.aspx (I leave EncryptingCertificateName empty):

        Operation is not valid due to the current state of the object

    I expected to get an access denied error instead. 1. Is it possible anyway? 2. Can anyone point me to a working article on creating a custom IP-STS without an ADFS server? Any idea will help me. Thanks

  • How to Determine The Module a Particular Exception Class is Defined In

    - by doug
    Note: I edited my question (in the title) so that it better reflects what I actually want to know. In the original title and text, I referred to the source of the thrown exception; what I meant, and should have referred to, as pointed out in one of the high-strung but otherwise helpful responses below, is the module that the exception class is defined in. This is evidenced by the fact that, again as pointed out in one of the answers below, the answer to the original question is that the exceptions were thrown from calls to cursor.execute and cursor.next, respectively, which of course isn't the information you need to write the try/except block. For instance (the question has nothing specifically to do with SQLite or the pysqlite module):

        from pysqlite2 import dbapi2 as SQ

        try:
            cursor.execute('CREATE TABLE pname (id INTEGER PRIMARY KEY, name VARCHARS(50)')
        except SQ.OperationalError:
            print("{0}, {1}".format("table already exists", "... 'CREATE' ignored"))

        cursor.execute('SELECT * FROM pname')
        while 1:
            try:
                print(cursor.next())
            except StopIteration:
                break

    I let both snippets error out to see the exception thrown, then coded the try/except blocks, but that didn't tell me anything about which module each exception class is defined in. In my example there's only a single imported module, but where there are many more, I am interested to know how an experienced Pythonista identifies the module an exception class comes from (search-the-docs-till-I-happen-to-find-it is my current method). [And yes, I am aware there's a nearly identical question on SO, but for C# rather than Python; plus, if you read the author's edited version, you'll see he's got a different problem in mind.]
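
    A minimal sketch of asking the exception itself where its class lives, which avoids the search-the-docs loop entirely:

        try:
            cursor.execute('SELECT * FROM pname')
        except Exception as exc:
            cls = type(exc)
            # Prints e.g. "pysqlite2.dbapi2.OperationalError"; built-in
            # exceptions like StopIteration report the builtins module.
            print("{0}.{1}".format(cls.__module__, cls.__name__))
            raise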

  • Simulating aspects of static-typing in a duck-typed language

    - by Mike
    In my current job I'm building a suite of Perl scripts that depend heavily on objects (using Perl's bless() on a hash to get as close to OO as possible). Now, for lack of a better way of putting this, most programmers at my company aren't very smart. Worse, they don't like reading documentation and seem to have a problem understanding other people's code. Cowboy coding is the game here. Whenever they encounter a problem and try to fix it, they come up with a horrendous solution that actually solves nothing and usually makes it worse. This results in me, frankly, not trusting them with code written in a duck-typed language. As an example, I see too many problems with them not getting an explicit error for misusing objects. For instance, if type A has a member foo and they do something like instance->goo, they aren't going to see the problem immediately: it will return a null/undefined value, and they will probably waste an hour finding the cause, then end up changing something else because they didn't properly identify the original problem. So I'm brainstorming for a way to keep my scripting language (its rapid development is an advantage) but get an explicit error message when an object isn't used properly. I realize that since there isn't a compile stage or static typing, the error will have to come at run time. I'm fine with this, so long as the user gets a very explicit notice saying "this object doesn't have X". As part of my solution, I don't want it to be required that they check whether a method/variable exists before trying to use it. Even though my work is in Perl, I think this can be language-agnostic.
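
    A minimal Perl sketch of one way to get that explicit runtime error, assuming the objects stay hash-based: lock the hash keys at construction time, so a typo like $obj->{goo} dies immediately instead of silently returning undef. (Calls to genuinely undefined methods already die in Perl with "Can't locate object method"; the silent failure is specific to hash element access.)

        package MyClass;
        use strict;
        use warnings;
        use Hash::Util qw(lock_keys);

        sub new {
            my ($class) = @_;
            my $self = { foo => undef };
            bless $self, $class;
            # After this, $self->{goo} is a fatal "disallowed key" error,
            # not a quiet undef.
            lock_keys(%$self);
            return $self;
        }

        1;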

  • T-SQL Right Joins to ALL Entries inc Selected Column

    - by Pace
    Hi experts, I have the following query, which produces the output below:

        SELECT TBLUSERS.USERID, TBLUSERS.ADusername,
               TBLACCESSLEVELS.ACCESSLEVELID, TBLACCESSLEVELS.AccessLevelName
        FROM TBLACCESSLEVELS
        INNER JOIN TBLACCESSRIGHTS
            ON TBLACCESSLEVELS.ACCESSLEVELID = TBLACCESSRIGHTS.ACCESSLEVELID
        INNER JOIN TBLUSERS
            ON TBLACCESSRIGHTS.USERID = TBLUSERS.USERID

    The output is this:

        29 administrator 1 AllUsers
        29 administrator 2 JobQueue
        29 administrator 3 Telephone Directory Admin
        29 administrator 4 Jobqueueadmin
        29 administrator 5 UserAdmin
        29 administrator 6 Product System
        27 alan          1 AllUsers
        97 andy          1 AllUsers
        26 barry         1 AllUsers
        26 barry         2 JobQueue
        26 barry         3 Telephone Directory Admin
        26 barry         4 Jobqueueadmin
        26 barry         5 UserAdmin
        26 barry         6 Product System
        26 barry         7 Newseditor
        26 barry         8 GreetingBoard

    What I would like to do is modify the query so I get all access levels, regardless of whether there is an entry for that user. I would also like some sort of EXISTS case so that I get output like the following:

        29 administrator 1 AllUsers                  True
        29 administrator 2 JobQueue                  True
        29 administrator 3 Telephone Directory Admin True
        29 administrator 4 Jobqueueadmin             True
        29 administrator 5 UserAdmin                 True
        29 administrator 6 Product System            True
        29 administrator 7 Newseditor                False
        29 administrator 8 GreetingBoard             False
        27 alan          1 AllUsers                  True
        27 alan          2 JobQueue                  False
        27 alan          3 Telephone Directory Admin False
        27 alan          4 Jobqueueadmin             False
        27 alan          5 UserAdmin                 False
        27 alan          6 Product System            False
        27 alan          7 Newseditor                False
        27 alan          8 GreetingBoard             False
        97 andy          1 AllUsers                  True
        97 andy          2 JobQueue                  False
        97 andy          3 Telephone Directory Admin False
        97 andy          4 Jobqueueadmin             False
        97 andy          5 UserAdmin                 False
        97 andy          6 Product System            False
        97 andy          7 Newseditor                False
        97 andy          8 GreetingBoard             False
        26 Barry         1 AllUsers                  True
        26 Barry         2 JobQueue                  True
        26 Barry         3 Telephone Directory Admin True
        26 Barry         4 Jobqueueadmin             True
        26 Barry         5 UserAdmin                 True
        26 Barry         6 Product System            True
        26 Barry         7 Newseditor                True
        26 Barry         8 GreetingBoard             True
        .........................................

    So the rules are: ALWAYS show ALL entries from TBLACCESSLEVELS, and where a matching row EXISTS in TBLACCESSRIGHTS, produce a True/False to show this. I hope this makes sense, and hopefully you don't need the table definitions, as everything I need to work with is in the original query. I just need a way of manipulating it slightly and getting the join in the right place. Thank you. Pace
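
    A minimal sketch of the usual shape for this, on the assumption that every user should first be paired with every access level (CROSS JOIN) and the rights table then decides True or False via a LEFT JOIN:

        SELECT u.USERID,
               u.ADusername,
               al.ACCESSLEVELID,
               al.AccessLevelName,
               CASE WHEN ar.USERID IS NULL THEN 'False' ELSE 'True' END AS HasRight
        FROM TBLUSERS u
        CROSS JOIN TBLACCESSLEVELS al
        LEFT JOIN TBLACCESSRIGHTS ar
            ON ar.USERID = u.USERID
           AND ar.ACCESSLEVELID = al.ACCESSLEVELID
        ORDER BY u.ADusername, al.ACCESSLEVELID;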

  • Looking for a lock-free RT-safe single-reader single-writer structure

    - by moala
    Hi, I'm looking for a lock-free design conforming to these requirements:

    1. A single writer writes into a structure and a single reader reads from it (the structure exists already and is safe for simultaneous read/write).
    2. At some point, the structure needs to be changed by the writer, which then initialises, switches to, and writes into a new structure (of the same type but with new content).
    3. At the next read, the reader switches to this new structure (if the writer has switched to several new structures in the meantime, the reader discards the intermediate ones, ignoring their data).
    4. The structures must be reused, i.e. no heap allocation/free is allowed during write/read/switch operations, for real-time purposes.

    I have currently implemented a ring buffer containing multiple instances of these structures, but this implementation suffers from the fact that when the writer has used all the structures in the ring buffer, there is no structure left to switch to, yet the rest of the ring buffer contains data that doesn't have to be read by the reader but can't be reused by the writer. As a consequence, the ring buffer does not fit this purpose. Any idea (a name or a pseudo-implementation) for a suitable lock-free design? Thanks for having considered this problem.
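
    A minimal C++ sketch of the switch itself, in the spirit of a triple buffer: a small fixed pool of reusable structures plus one atomic "latest" slot that the writer publishes into. Pool bookkeeping is elided; the point is that exchange() hands unclaimed structures straight back for reuse, which is exactly what the ring buffer could not do.

        #include <atomic>

        struct Data { /* the existing SPSC-safe structure */ };

        // Slot holding the newest fully initialised structure, or nullptr.
        std::atomic<Data*> latest{nullptr};

        // Writer: publish a fresh structure. If the reader never claimed
        // the previous one, it comes back here and can be reused at once.
        Data* publish(Data* fresh) {
            return latest.exchange(fresh, std::memory_order_acq_rel);
        }

        // Reader: claim the newest structure, or nullptr if nothing new.
        // Intermediate versions are never seen; the writer already
        // reclaimed them, which implements the "discard" rule.
        Data* take() {
            return latest.exchange(nullptr, std::memory_order_acq_rel);
        }

    With one structure being written, one being read, and one in flight in the slot, a pool of three instances suffices; no allocation happens after start-up.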

  • KeepAlive packets over a Soap request

    - by Nycto
    I've been debugging some SOAP requests we are making between two servers on the same VLAN. The app on one server is written in PHP, the app on the other in Java. I can control and make changes to the PHP code, but I can't affect the Java server. The PHP app forms the XML using DOMDocument objects, then sends the request using the cURL extension. When a SOAP request took longer than 5 minutes to complete, it would always wait until the max timeout limit and exit with a message like this:

        Operation timed out after 900000 milliseconds with 0 bytes received

    After sniffing the packets being sent, it turns out the problem was caused by a 5-minute timeout in the network that was closing what it thought was a stale connection. There were two ways to fix it: bump up the timeout in iptables, or start sending keepalive packets over the request. To be thorough, I would like to implement both solutions. Bumping up the timeout was easy for ops to do, but sending keepalive packets is turning out to be difficult. The cURL library itself supports this (see the --keepalive-time flag of the CLI app), but it doesn't appear that this has been implemented in the PHP cURL extension. I even checked the source to make sure it isn't an undocumented feature. So my question is this: how the heck can I get these packets sent? I see a few clear options, but I don't like any of them:

    1. Write a wrapper that kicks off the request by shell_exec'ing the CLI app. This is a hack that I just don't like.
    2. Update the cURL extension to support this. This is a non-option according to ops.
    3. Open the socket myself. I know just enough to be dangerous. I also haven't seen a way to do this with fsockopen, but I could be missing something.
    4. Switch to another library. What exists that supports this?

    Thanks for any help you can offer.
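
    A minimal sketch of option 3, assuming the PHP sockets extension is available: unlike fsockopen, socket_create exposes socket options, so SO_KEEPALIVE can be enabled before the HTTP request carrying the SOAP envelope is written. The host name and $soapXml variable are illustrative.

        <?php
        // Raw TCP socket with OS-level keepalive probes enabled.
        $sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
        socket_set_option($sock, SOL_SOCKET, SO_KEEPALIVE, 1);
        socket_connect($sock, 'java-server.internal', 8080);

        // Hand-rolled HTTP POST carrying the SOAP envelope.
        $request = "POST /service HTTP/1.1\r\n"
                 . "Host: java-server.internal\r\n"
                 . "Content-Type: text/xml\r\n"
                 . "Content-Length: " . strlen($soapXml) . "\r\n"
                 . "Connection: close\r\n\r\n"
                 . $soapXml;
        socket_write($sock, $request);

        // Read until the server closes the connection.
        $response = '';
        while (($chunk = socket_read($sock, 8192)) !== false && $chunk !== '') {
            $response .= $chunk;
        }
        socket_close($sock);

    One caveat worth checking: the kernel's default keepalive idle time (often two hours on Linux) is far longer than the five-minute network cutoff, so the host's tcp_keepalive_time would also need lowering for the probes to arrive in time.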

  • Using inotify-tools and ruby to push uploads to Cloud Files

    - by Christian
    Hi guys, I wrote a few scripts to monitor an uploads directory for changes, then capture the file uploaded/changed and push it to Cloud Files using a Ruby script. This all works well 95% of the time; the only exception is that occasionally Ruby fails with a 'file does not exist' exception. I am assuming that the Ruby push script is being called before the file is 100% in its new location, so the script fires a little prematurely. I tried adding a little function to my script that checks whether the file exists and, if it doesn't, sleeps 5 seconds and tries again, but this seems to snowball and eventually dies. I then just added a sleep 2 to all calls, but it hasn't helped, as I now get the 'file does not exist' error again.

        #!/bin/sh

        function checkExists {
            if [ ! -e "$1" ]; then
                sleep 5
                checkExists $1
            fi
        }

        inotifywait -mr --timefmt '%d/%m/%y-%H:%M' --format '%T %w %f' \
            -e modify,moved_to,create,delete \
            /home/skylines/html/forums/uploads | while read date dir file; do
            cloudpath=${dir:20}${file}
            localpath=${dir}${file}
            #checkExists $localpath
            sleep 2
            ruby /home/cbiggins/bin/pushToCloud.rb skylinesaustralia.com $cloudpath $localpath
            echo "${date} ruby /home/cbiggins/bin/pushToCloud.rb skylinesaustralia.com $cloudpath $localpath" >> /var/log/pushToCloud.log
        done

    I am looking for any suggestions to help me make this 100% stable (eventually I'll serve the uploaded files from Cloud Files, so I need to make sure it's perfect). Thanks in advance!
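
    A minimal variant worth trying, on the assumption that the premature trigger comes from reacting to modify/create while the upload is still streaming in: inotify's close_write event fires only once the writer has closed the file, so no sleep or existence polling is needed.

        #!/bin/bash
        # Fire only when a file has been fully written and closed.
        inotifywait -mr --format '%w%f' -e close_write,moved_to \
            /home/skylines/html/forums/uploads | while read path; do
            ruby /home/cbiggins/bin/pushToCloud.rb skylinesaustralia.com \
                "${path:20}" "$path"
        done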

  • Database structure and source control - best practice

    - by Paddy
    Background

    I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects, maintained as new items were added (to allow us to run scripts in order and handle dependencies), and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists', and all the SPs etc. were drop-and-recreate. Up to the present: I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Red Gate's tools for updating our production database (SQL Compare), which is very handy and requires little work.

    Question

    How do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts rather than in the DB), but I'm going to be pressed to get past the ease of using SQL Compare to update the database. I don't really want us updating scripts in Git and then using SQL Compare to update the production database from our dev DB, as I'd rather have 'one version of the truth', but I don't really want to get into rewriting a custom bit of software to bundle the whole lot of scripts together. I think Visual Studio Database Edition may do something similar to this, but I'm not sure if we will have the budget for it. I'm sure this has been asked to death, but I can't find anything that quite has the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control
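
    A minimal sketch of the idempotent script style described in the background, SQL Server flavour assumed, so the one big generated script can be run repeatedly against any environment:

        -- Tables: create only when absent, so re-running is harmless.
        IF NOT EXISTS (SELECT * FROM sys.objects
                       WHERE name = 'Customer' AND type = 'U')
            CREATE TABLE dbo.Customer (Id INT PRIMARY KEY, Name NVARCHAR(100));
        GO

        -- Procedures: drop and recreate so the latest version always wins.
        IF OBJECT_ID('dbo.GetCustomer', 'P') IS NOT NULL
            DROP PROCEDURE dbo.GetCustomer;
        GO
        CREATE PROCEDURE dbo.GetCustomer @Id INT AS
            SELECT Id, Name FROM dbo.Customer WHERE Id = @Id;
        GO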

  • PHP XML Validation

    - by efritz
    What's the best way to validate an XML file (or a portion of it) against multiple XSD files? For example, I have the following schema for a configuration loader:

        <xsd:schema xmlns="http://www.kauriproject.org/schema/configuration"
                    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    targetNamespace="http://www.kauriproject.org/schema/configuration"
                    elementFormDefault="qualified">

          <xsd:element name="configuration" type="configuration" />

          <xsd:complexType name="configuration">
            <xsd:choice maxOccurs="unbounded">
              <xsd:element name="import" type="import" minOccurs="0" maxOccurs="unbounded" />
              <xsd:element name="section" type="section" />
            </xsd:choice>
          </xsd:complexType>

          <xsd:complexType name="section">
            <xsd:sequence>
              <xsd:any minOccurs="0" maxOccurs="unbounded" processContents="lax" />
            </xsd:sequence>
            <xsd:attribute name="name" type="xsd:string" use="required" />
            <xsd:attribute name="type" type="xsd:string" use="required" />
          </xsd:complexType>

          <xsd:complexType name="import" mixed="true">
            <xsd:attribute name="resource" type="xsd:string" />
          </xsd:complexType>

        </xsd:schema>

    As the Configuration class exists now, it lets one add a <section> tag with a defined concrete parser class (much like custom configuration sections in ASP.NET). However, I'm unsure how to validate the section being parsed. Is it possible to validate just this section of the document with an XSD file/string without writing it back to a file?
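
    A minimal sketch of validating an in-memory fragment, assuming the <section> node is lifted into its own document first; DOMDocument::schemaValidateSource takes the XSD as a string, so nothing is written back to disk. $sectionNode and $xsdString are illustrative names.

        <?php
        // $sectionNode is the <section> element pulled from the config DOM.
        $fragmentDoc = new DOMDocument();
        $fragmentDoc->appendChild($fragmentDoc->importNode($sectionNode, true));

        // $xsdString holds the schema for this particular section type.
        libxml_use_internal_errors(true);
        if (!$fragmentDoc->schemaValidateSource($xsdString)) {
            foreach (libxml_get_errors() as $error) {
                echo trim($error->message), "\n";
            }
            libxml_clear_errors();
        }

    Repeating this per section, each with its own XSD string, gives the multiple-schema validation the question asks about.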

  • AS3 Working With Arbitrarily Large Files

    - by Kekoa
    I am trying to read a very large file in AS3 and am having problems with the runtime just crashing on me. I'm currently using a FileStream to open the file asynchronously. This does not work (it crashes without an exception) for files bigger than about 300 MB:

        _fileStream = new FileStream();
        _fileStream.addEventListener(IOErrorEvent.IO_ERROR, loadError);
        _fileStream.addEventListener(Event.COMPLETE, loadComplete);
        _fileStream.openAsync(myFile, FileMode.READ);

    Looking at the documentation, it sounds like the FileStream class still tries to read the entire file into memory (which is bad for large files). Is there a more suitable class to use for reading large files? I would really like something like a buffered FileStream class that only loads the bytes from the file that are going to be read next. I expect that I may need to write a class that does this for me, but then I would need to read only a piece of the file at a time. I'm assuming that I can do this by setting the position and readAhead properties of the FileStream to read a chunk out of the file at a time. I would love to save some time if a class like this already exists. Is there a good way to process large files in AS3 without loading the entire contents into memory?
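
    A minimal sketch of the buffered approach the post guesses at, assuming AIR's FileStream: readAhead caps how far the async read runs ahead, and the PROGRESS event drains whatever has actually arrived, so memory stays bounded. processChunk is a hypothetical consumer.

        var stream:FileStream = new FileStream();
        stream.readAhead = 64 * 1024; // buffer at most 64 KB ahead
        stream.addEventListener(ProgressEvent.PROGRESS, onProgress);
        stream.openAsync(myFile, FileMode.READ);

        function onProgress(e:ProgressEvent):void {
            // Drain only what is currently buffered.
            while (stream.bytesAvailable > 0) {
                var chunk:ByteArray = new ByteArray();
                stream.readBytes(chunk, 0,
                    uint(Math.min(stream.bytesAvailable, 64 * 1024)));
                processChunk(chunk); // hypothetical consumer
            }
        }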

  • dximagetransform.matrix, absolutely position child elements not rotating in IE 8 standards mode

    - by davydka
    I've looked all over for more information on this and would like to know why it happens. Here's the code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
        </head>
        <body>
          <div style="position:absolute; top:200px; left:200px; height:200px; width:200px;
                      border:1px solid black;
                      filter: progid:DXImageTransform.Microsoft.Matrix(sizingMethod='auto expand',
                          M11=0.9886188373396114, M12=-0.15044199698646263,
                          M21=0.15044199698646263, M22=0.9886188373396114);">
            <div style="position:absolute; top:10px; left:10px; border:1px solid darkblue;">
              I do not rotate in IE 8.
            </div>
          </div>
        </body>
        </html>

    The problem is that absolutely or relatively positioned elements within a div that has been rotated using Microsoft's DXImageTransform.Matrix do not inherit the transformation in IE 8. IE 6 and 7 render correctly, and I can solve the IE 8 problem by triggering compatibility mode, but I'd rather not do that. Does anyone have any experience with this? I'm using the CSS3 transform property in other browsers and DXImageTransform.Matrix to achieve this effect in IE.

    EDIT: Added the opening html tag. The problem still exists. http://i45.tinypic.com/nf4gmq.png
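
    One thing worth ruling out first, as a hedged suggestion rather than a confirmed fix: IE 8 standards mode only honours the prefixed, quoted form of the filter property, so a filter declared the legacy way may not be applied at all there. Declaring both forms covers both engines:

        /* IE 6/7 */
        filter: progid:DXImageTransform.Microsoft.Matrix(sizingMethod='auto expand', M11=0.9886188373396114, M12=-0.15044199698646263, M21=0.15044199698646263, M22=0.9886188373396114);

        /* IE 8 standards mode: -ms- prefix and quoted value */
        -ms-filter: "progid:DXImageTransform.Microsoft.Matrix(sizingMethod='auto expand', M11=0.9886188373396114, M12=-0.15044199698646263, M21=0.15044199698646263, M22=0.9886188373396114)";

    Whether the positioned children then pick up the rotation in IE 8 still needs testing: the filter transforms the element's rendered surface, not the CSS coordinate system its descendants are positioned in.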
