Search Results

Search found 23890 results on 956 pages for 'issue'.


  • Multiple Ruby versions on one webserver?

    - by Legion
    The Ideal

    Using rvm, it would be awesome to be able to have multiple Rubies on one webserver and, through some sort of server configuration, be able to assign Ruby versions to different Rails/Sinatra/etc. apps on a per-project basis. I am aware, from rvm's documentation, that Passenger only works with one Ruby at a time. :(

    The Compromise

    Failing that, it would be nice to at least be able to concoct a way to assign projects to a Ruby 1.8 or a Ruby 1.9 interpreter. I've read that using Nginx as a reverse proxy allows running Apache and Nginx on the same box. Would it then be possible to have Apache+Passenger using one Ruby, and Nginx+Passenger using a different one? Maybe use something other than Passenger with Nginx?

    Am I Barking Up the Wrong Tree?

    Am I missing a good solution to this issue? Am I walking into a nightmare configuration situation? Is what I want even viable, or is it necessary to run another box to run a separate Ruby version?
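
    A minimal sketch of the reverse-proxy compromise, assuming (hypothetically) that Apache+Passenger is built against Ruby 1.8 and listens on port 8080, while the front-end Nginx is built with Passenger against Ruby 1.9; all hostnames and paths are placeholders:

        # Nginx serves the 1.9 app itself and proxies the 1.8 app through to Apache
        server {
            listen 80;
            server_name app19.example.com;
            root /var/www/app19/public;
            passenger_enabled on;                 # this Nginx/Passenger pair uses Ruby 1.9
        }
        server {
            listen 80;
            server_name app18.example.com;
            location / {
                proxy_pass http://127.0.0.1:8080; # Apache+Passenger pair uses Ruby 1.8
                proxy_set_header Host $host;
            }
        }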


  • Filter a date property between begin and end dates with JDOQL

    - by Sergio del Amo
    I want to code a function to get a list of Entry objects whose date field is between beginPeriod and endPeriod. I post below a code snippet which works with a HACK: I have to subtract a day from the begin period date. It seems the greater-or-equal condition does not work. Any idea why I have this issue?

        public static List<Entry> getEntries(Date beginPeriod, Date endPeriod) {
            /* TODO
             * The greater-or-equal condition does not seem to work in the filter below.
             * Subtract a day and it seems to work.
             */
            Calendar calendar = Calendar.getInstance();
            calendar.set(beginPeriod.getYear(), beginPeriod.getMonth(), beginPeriod.getDate() - 1);
            beginPeriod = calendar.getTime();

            PersistenceManager pm = JdoUtil.getPm();
            Query q = pm.newQuery(Entry.class);
            q.setFilter("this.date >= beginPeriodParam && this.date <= endPeriodParam");
            q.declareParameters("java.util.Date beginPeriodParam, java.util.Date endPeriodParam");
            List<Entry> entries = (List<Entry>) q.execute(beginPeriod, endPeriod);
            return entries;
        }
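
    Two hedged observations rather than a confirmed answer: beginPeriod probably carries a time-of-day component, so entries stored earlier that same day fail the >= test; and the deprecated Date.getYear() returns an offset from 1900 that Calendar.set() does not expect. Normalizing the lower bound to midnight avoids both:

        // Sketch: normalize the lower bound to midnight instead of subtracting a day.
        Calendar calendar = Calendar.getInstance();
        calendar.setTime(beginPeriod);               // no deprecated getters involved
        calendar.set(Calendar.HOUR_OF_DAY, 0);
        calendar.set(Calendar.MINUTE, 0);
        calendar.set(Calendar.SECOND, 0);
        calendar.set(Calendar.MILLISECOND, 0);
        beginPeriod = calendar.getTime();            // earliest instant of that day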


  • Eclipse does not start on Windows 7

    - by van
    Suddenly, Eclipse decided to stop working today. The last thing I did was close all perspectives and close Eclipse. When loading Eclipse from the command prompt using "eclipse.exe -clean", the splash screen loads for a split second, then exits. When I run the command:

        eclipsec -consoleLog -debug

    it results in the following output:

        Start VM: -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms4096m
        -Xmx4096m
        -XX:MaxPermSize=512m
        -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        -os win32
        -ws win32
        -arch x86_64
        -showsplash d:\devtools\eclipse\\plugins\org.eclipse.platform_4.3.0.v20130605-2000\splash.bmp
        -launcher d:\devtools\eclipse\eclipsec.exe
        -name Eclipsec
        --launcher.library d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20130521-0416\eclipse_1503.dll
        -startup d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        --launcher.appendVmargs
        -product org.eclipse.epp.package.standard.product
        -consoleLog
        -debug
        -vm C:/Program Files/Java/jdk1.6.0_37/bin\..\jre\bin\server\jvm.dll
        -vmargs
        -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms4096m
        -Xmx4096m
        -XX:MaxPermSize=512m
        -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified

    Checking Task Manager shows no Java process running, and both CPU and memory usage are very low. I have tried:

    - Re-installing Eclipse
    - Re-starting my machine

    But running eclipsec -consoleLog -debug from the command prompt still results in the issue:

        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified
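
    One hedged reading of that output: because of --launcher.appendVmargs the -Xms4096m/-Xmx4096m pair appears twice, and a 4 GB heap cannot be reserved at all if the jdk1.6.0_37 JVM being picked up is 32-bit; either condition can surface as this VM initialization failure. Trimming the heap settings in eclipse.ini is a cheap first test (values illustrative):

        -vmargs
        -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms512m
        -Xmx1024m
        -XX:MaxPermSize=256m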


  • Connection reset when calling disconnect() using enterprisedt's ftp java framework

    - by Frederik Wordenskjold
    I'm having trouble disconnecting from an FTP server using the enterprisedt Java FTP framework. I simply cannot call disconnect() on a FileTransferClient object without getting an error. I do not do anything besides connecting to the server and then disconnecting:

        // create client
        log.info("Creating FTP client");
        ftp = new FileTransferClient();

        // set remote host
        log.info("Setting remote host");
        ftp.setRemoteHost(host);
        ftp.setUserName(username);
        ftp.setPassword(password);

        // connect to the server
        log.info("Connecting to server " + host);
        ftp.connect();
        log.info("Connected and logged in to server " + host);

        // Shut down client
        log.info("Quitting client");
        ftp.disconnect();
        log.info("Example complete");

    When running this, the log reads:

        INFO [test] 28 maj 2010 16:57:20.216 : Creating FTP client
        INFO [test] 28 maj 2010 16:57:20.263 : Setting remote host
        INFO [test] 28 maj 2010 16:57:20.263 : Connecting to server x
        INFO [test] 28 maj 2010 16:57:20.979 : Connected and logged in to server x
        INFO [test] 28 maj 2010 16:57:20.979 : Quitting client
        ERROR [FTPControlSocket] 28 maj 2010 16:57:21.026 : Read failed ('' read so far)

    And the stack trace:

        com.enterprisedt.net.ftp.ControlChannelIOException: Connection reset
            at com.enterprisedt.net.ftp.FTPControlSocket.readLine(FTPControlSocket.java:1029)
            at com.enterprisedt.net.ftp.FTPControlSocket.readReply(FTPControlSocket.java:1089)
            at com.enterprisedt.net.ftp.FTPControlSocket.sendCommand(FTPControlSocket.java:988)
            at com.enterprisedt.net.ftp.FTPClient.quit(FTPClient.java:4044)
            at com.enterprisedt.net.ftp.FileTransferClient.disconnect(FileTransferClient.java:1034)
            at test.main(test.java:46)

    It should be noted that I can connect and do things with the server without problems, like getting a list of files in the current working directory. But for some reason, I can't disconnect! I've tried using both active and passive mode. The above example is, by the way, copy/pasted from their own example. I cannot find ANYTHING related to this by doing a Google search, so I was hoping you have any suggestions or experience with this issue.
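
    A hedged workaround rather than a root-cause fix: the stack trace shows the failure happens while reading the reply to QUIT, i.e. after all useful work is done, so some servers or firewalls are evidently dropping the control channel before acknowledging. If the transfers themselves succeed, it may be acceptable to treat the reset as benign:

        try {
            ftp.disconnect();
        } catch (ControlChannelIOException e) {
            // Server closed the control channel before acknowledging QUIT;
            // everything up to this point has already completed successfully.
            log.warn("Connection reset during QUIT - ignoring", e);
        }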


  • Amazon API ItemSearch returns (400) Bad Request.

    - by BuzzBubba
    I'm using a simple example from the Amazon documentation for ItemSearch, and I get a strange error: "The remote server returned an unexpected response: (400) Bad Request." This is the code:

        public static void Main()
        {
            // Remember to create an instance of the Amazon service, including your Access ID.
            AWSECommerceServicePortTypeClient service = new AWSECommerceServicePortTypeClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));
            AWSECommerceServicePortTypeClient client = new AWSECommerceServicePortTypeClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));

            // prepare an ItemSearch request
            ItemSearchRequest request = new ItemSearchRequest();
            request.SearchIndex = "Books";
            request.Title = "Harry+Potter";
            request.ResponseGroup = new string[] { "Small" };

            ItemSearch itemSearch = new ItemSearch();
            itemSearch.Request = new ItemSearchRequest[] { request };
            itemSearch.AWSAccessKeyId = accessKeyId;

            // issue the ItemSearch request
            try
            {
                ItemSearchResponse response = client.ItemSearch(itemSearch);

                // write out the results
                foreach (var item in response.Items[0].Item)
                {
                    Console.WriteLine(item.ItemAttributes.Title);
                }
            }
            catch (Exception e)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(e.Message);
                Console.ForegroundColor = ConsoleColor.White;
                Console.WriteLine("Press any key to quit...");
                Clipboard.SetText(e.Message);
            }
            Console.ReadKey();
        }

    What is wrong?
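
    Two hedged suspects, not a confirmed diagnosis: the title is URL-encoded even though a SOAP body takes plain text, and (from mid-2009 onward) the Product Advertising API rejects unsigned requests, which can come back as a bare 400:

        // Plain text in SOAP bodies; no '+' encoding:
        request.Title = "Harry Potter";

        // If the endpoint now requires signed requests, the HTTPS endpoint plus a
        // message signature (and, later, an AssociateTag) must be supplied; see
        // Amazon's signed SOAP samples for the exact binding setup.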


  • How can I setup ANT with Subversion and ColdFusion Builder (eclipse) to check out a local build to w

    - by Smooth Operator
    I am not sure if there's an answer for this already -- I couldn't find one for this (hopefully common) setup. I recently converted one of my ColdFusion projects to deploy via ANT. I have a local Ant script that instructs a remote server to check out the code and run the application's specific build file, remotely on the server. I have a few endpoints:

    - Live - production (on the production server)
    - Staging - on the production server, different datasource, etc.
    - Dev - on the local box

    What I have run into, it seems, is a simple and common problem: I now need ANT to create any build, even locally. Fine - I created a local endpoint and it configures for my box. The issue? How do I get it to show up as a project (automatically, if possible) in Eclipse/ColdFusion Builder? What I envision is that instead of checking out a branch via the Subversion plugin in CFBuilder/Eclipse, I now use ANT to do that for me. Since I use ColdFusion Builder (Eclipse + Adobe's plugin), I have all of Eclipse's tools and plugins available to solve the problem of: how can I best call ANT from within Eclipse/ColdFusion Builder to set up the local build as a project that I can develop and work on? I think when I check the code back in from the local box, I'd have to be sure not to check in any files with local config paths, etc. I hope this is a detailed and clear enough explanation; if not, please ask. Thanks in advance!
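
    A hypothetical sketch of a local-checkout target; the repository URL, the paths, and the idea of templating a .project file are all assumptions. It shells out to the svn command line rather than assuming the svnant task is installed:

        <!-- Check out trunk next to the workspace; all paths/URLs are placeholders. -->
        <target name="checkout-local">
            <exec executable="svn" failonerror="true">
                <arg value="checkout"/>
                <arg value="http://svn.example.com/myproject/trunk"/>
                <arg value="${basedir}/../myproject-local"/>
            </exec>
            <!-- Dropping a .project template into the checkout lets Eclipse/CF Builder
                 pick it up via File > Import > Existing Projects into Workspace. -->
            <copy file="templates/dot-project.xml"
                  tofile="${basedir}/../myproject-local/.project"/>
        </target>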


  • How to mimic built-in .NET serialization idioms?

    - by Matt Enright
    I have a library (written in C#) for which I need to read/write representations of my objects to disk (or to any Stream) in a particular binary format (to ensure compatibility with C/Java library implementations). The format requires a fair amount of bit-packing and some DEFLATE'd bytestreams. I would like my library to be as idiomatic .NET as possible, and so would like to provide an API as close as possible to the normal binary serialization process. I'm aware of the ability to implement the IFormatter interface, but given that I really am unable to reuse any part of the built-in serialization stack, is it worth doing this, or will it just bring unnecessary overhead? In other words:

    1. Implement IFormatter and co., OR
    2. Just provide "Serialize"/"Deserialize" methods that act on a Stream?

    A good point brought up below about needing the serialization semantics for any case involving Remoting: in a case where using MarshalByRef objects is feasible, I'm pretty sure that this won't be an issue, so leaving that aside, are there any benefits or drawbacks to using ISerializable/IFormatter versus a custom stack (or am I understanding remoting incorrectly)?
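
    A minimal sketch of option 2, the plain stream-based surface; the record type and field layout are purely illustrative, not the question's actual format (and BinaryWriter's length-prefixed strings would need replacing for true C/Java compatibility):

        using System.IO;
        using System.IO.Compression;

        public class MyRecord
        {
            public int Id;
            public string Name;
        }

        public static class WireFormat
        {
            public static void Serialize(Stream output, MyRecord value)
            {
                using (var deflate = new DeflateStream(output, CompressionMode.Compress, true))
                using (var writer = new BinaryWriter(deflate))
                {
                    writer.Write(value.Id);    // field order defines the wire format
                    writer.Write(value.Name);
                }
            }

            public static MyRecord Deserialize(Stream input)
            {
                using (var deflate = new DeflateStream(input, CompressionMode.Decompress, true))
                using (var reader = new BinaryReader(deflate))
                {
                    return new MyRecord { Id = reader.ReadInt32(), Name = reader.ReadString() };
                }
            }
        }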


  • Slowdowns when reading from an urlconnection's inputstream (even with byte[] and buffers)

    - by user342677
    OK, so after spending two days trying to figure out the problem, and reading about a zillion articles, I finally decided to man up and ask for some advice (my first time here). Now to the issue at hand: I am writing a program which will parse API data from a game, namely battle logs. There will be A LOT of entries in the database (20+ million), and so the parsing speed for each battle log page matters quite a bit.

    The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0 (see the source code if using Chrome; it doesn't display the page right). Each has 1000 hit entries, followed by a little battle info (the last page will have <1000, obviously). On average, a page contains 175000 characters, UTF-8 encoding, XML format (v 1.0). The program will run locally on a good PC; memory is virtually unlimited (so that creating a byte[250000] is quite OK). The format never changes, which is quite convenient.

    Now, I started off as usual:

        // global vars, class declaration skipped
        public WebObject(String url_string, int connection_timeout, int read_timeout,
                boolean redirects_allowed, String user_agent)
                throws java.net.MalformedURLException, java.io.IOException {
            // Open a URL connection
            java.net.URL url = new java.net.URL(url_string);
            java.net.URLConnection uconn = url.openConnection();
            if (!(uconn instanceof java.net.HttpURLConnection)) {
                throw new java.lang.IllegalArgumentException("URL protocol must be HTTP");
            }
            conn = (java.net.HttpURLConnection) uconn;
            conn.setConnectTimeout(connection_timeout);
            conn.setReadTimeout(read_timeout);
            conn.setInstanceFollowRedirects(redirects_allowed);
            conn.setRequestProperty("User-agent", user_agent);
        }

        public void executeConnection() throws IOException {
            try {
                is = conn.getInputStream(); // global var
                l = conn.getContentLength(); // global var
            } catch (Exception e) {
                // handling code skipped
            }
        }

        // getContentStream and getLength methods, which just return 'is' and 'l', are skipped

    Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long and what doesn't. The call to this method takes only 200ms on average:

        public InputStream getWebPageAsStream(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 "
                    + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)");
            wobj.executeConnection();
            l = wobj.getContentLength(); // global variable
            return wobj.getContentStream(); // returns 'is' stream
        }

    200ms is quite expected from a network operation, and I am fine with it. BUT when I parse the InputStream in any way (read it into a string / use a Java XML parser / read it into another ByteArrayStream), the process takes over 1000ms!

    For example, this code takes 1000ms IF I pass the stream I got ('is') above from getContentStream() directly to this method:

        public static Document convertToXML(InputStream is)
                throws ParserConfigurationException, IOException, SAXException {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.parse(is);
            doc.getDocumentElement().normalize();
            return doc;
        }

    This code, too, takes around 920ms IF the initial InputStream 'is' is passed in (don't read into the code itself; it just extracts the data I need by directly counting the characters, which can be done thanks to the rigid API feed format):

        public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException {
            // Point A
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            LinkedList ll = new LinkedList();
            String str = br.readLine();
            while (str != null) {
                ll.add(str);
                str = br.readLine();
            }
            if (((String) ll.get(1)).indexOf("error") != -1) {
                return new parsedBattlePage(null, null, true, -1);
            }
            // Point B
            Iterator it = ll.iterator();
            it.next();
            it.next();
            it.next();
            it.next();
            String[][] hits_arr = new String[1000][4];
            String t_str = (String) it.next();
            String tmp = null;
            int j = 0;
            for (int i = 0; t_str.indexOf("time") != -1; i++) {
                hits_arr[i][0] = t_str.substring(12, t_str.length() - 11);
                tmp = (String) it.next();
                hits_arr[i][1] = tmp.substring(14, tmp.length() - 9);
                tmp = (String) it.next();
                hits_arr[i][2] = tmp.substring(15, tmp.length() - 10);
                tmp = (String) it.next();
                hits_arr[i][3] = tmp.substring(18, tmp.length() - 13);
                it.next();
                it.next();
                t_str = (String) it.next();
                j++;
            }
            String[] b_info_arr = new String[9];
            int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13};
            for (int i = 0; i < space_nums.length; i++) {
                tmp = (String) it.next();
                b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1);
            }
            // Point C
            return new parsedBattlePage(hits_arr, b_info_arr, false, j);
        }

    I have tried replacing the default BufferedReader with:

        BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000);

    This didn't change much. My second try was to replace the code between A and B with:

        Iterator it = IOUtils.lineIterator(is, "UTF-8");

    Same result, except this time A-B was 0ms and B-C was 1000ms, so then every call to it.next() must have been consuming some significant time. (IOUtils is from the apache-commons-io library.)

    And here is the culprit: the time taken to parse the stream to strings, be it by an iterator or a BufferedReader, in ALL cases was about 1000ms, while the rest of the code took 0ms (e.g. irrelevant). This means that parsing the stream to a LinkedList, or iterating over it, was for some reason eating up a lot of my system resources. The question was - why? Is it just the way Java is made? No, that's just stupid, so I did another experiment. In my main method I added, after getWebPageAsStream():

        // Point A
        ba = new byte[l]; // 'l' comes from wobj.getContentLength above
        bytesRead = is.read(ba); // 'is' is our URLConnection's original InputStream
        offset = bytesRead;
        while (bytesRead != -1) {
            bytesRead = is.read(ba, offset - 1, l - offset);
            offset += bytesRead;
        }
        // Point B
        InputStream is2 = new ByteArrayInputStream(ba);
        // Now just working with 'is2' - the "copied" stream

    The InputStream-to-byte[] conversion took again 1000ms - this is the way many people suggested to read an InputStream, and still it is slow. And guess what: the two parser methods above (convertToXML() and convertBattleToXMLWithoutDOM()), when passed 'is2' instead of 'is', took, in all 4 cases, under 50ms to complete.

    I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponents Client 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. For example, this code:

        public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            HttpClient httpclient = new DefaultHttpClient();
            HttpGet httpget = new HttpGet(url);
            HttpParams p = new BasicHttpParams();
            HttpConnectionParams.setSocketBufferSize(p, 250000);
            HttpConnectionParams.setStaleCheckingEnabled(p, false);
            HttpConnectionParams.setConnectionTimeout(p, 5000);
            httpget.setParams(p);
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            l = (int) entity.getContentLength();
            return entity.getContent();
        }

    took even longer to process (50ms more, for just the network part), and the stream parsing times remained the same. Obviously it could be restructured so as to not create the HttpClient and properties every time (faster network time), but the stream issue won't be affected by that.

    So we come to the central problem: why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while any stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared to when the same stream is just created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY WAY too long. Any ideas?

    P.S. Please ask if any more code is required. The only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the performance is OK (~200ms/1000 entries); it could probably be optimized with more cache, but I didn't look into it much.
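
    A hedged explanation of the numbers: the parsers issue many small reads, and on the raw socket stream each small read can block until more packets arrive, so what is being measured as "parse time" is largely download time; the byte[] copy pays the same network cost once, after which everything runs at memory speed. One way to make that cost explicit, and pay it exactly once, is to drain the stream up front, sketched here without relying on Content-Length:

        // Sketch: drain the network stream once, then hand parsers a memory-backed stream.
        public static InputStream readFully(InputStream network) throws IOException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream(250000);
            byte[] chunk = new byte[8192];
            int n;
            while ((n = network.read(chunk)) != -1) {
                buffer.write(chunk, 0, n); // all network blocking happens here, once
            }
            network.close();
            return new ByteArrayInputStream(buffer.toByteArray());
        }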


  • Grouped UITableView Footer Sometimes Hidden On Quick Scroll

    - by jdandrea
    OK, this one is a puzzler. There is one similar post, but it's not similar enough to count, so I'm posting this one. :) I've got a grouped UITableView with a header and footer. The footer includes two UIButton views, side-by-side. Nothing major. Now, there is a toggle button in a UIToolbar at the bottom for more/less info in this table view. So I build my index paths to delete/insert with fade row animation, all the usual ingredients, sandwiched between beginUpdates and endUpdates calls on the UITableView, and this works fine!

    It also happens that my footer can sometimes be pushed off past the bottom of the display. Here's where it gets weird. If I drag my finger up the display, scrolling the view upward, I should see that footer eventually, right? Well, most of the time I do. BUT, if I flick my finger up for a faster scroll, the footer is missing. Even if you try to tap in that area - no response. However, if I scroll back down again, just to hide that footer (or rather hide the area where the footer would normally be), and then scroll back up, it's there once again!

    This only happens when inserting rows. If I delete rows, the footer stays put, unless of course it was already hidden and I didn't perform the aforementioned incantation to get it back. :)

    I am trying to trace through this, but to no avail. I suppose tracing through scroll operations is a bit of a crazy proposition! Perhaps some creative logging? Suggestions, anyone? Or is this a known issue in 3.1 where row inserts/deletes are concerned? (I don't recall seeing it until 3.1.)


  • MySQL-python 1.2.3 and OS X 10.5: 64- or 32-bit?

    - by Dave Everitt
    I've been happily using Django and MySQL in development on an existing machine running OS X 10.4 Tiger, and have set up a similar environment in 10.5 Leopard on a new 64-bit MacBook, with a working MySQL and Python 2.6.4. However, now I want them to communicate. easy_install MySQL-python gave ld warnings that the file is not of the required architecture, which led me to test my Python 2.4.6 install (from the Mac OS X disc image):

        >>> import sys
        >>> sys.maxint
        2147483647

    Ah. So my Python install appears to be 32-bit and (I think?) won't install MySQL-python for my 64-bit MySQL. There are lots of hacks out there for MySQL-python on OS X (mostly 1.2.2), but - after hours of reading - I'm pretty sure they won't fix this architecture mismatch. So I'm stuck because I can't decide whether to:

    1. give up, remove the 64-bit MySQL install (thorough methods, please?) and use the 32-bit MySQL disc image instead; or
    2. re-install Python in 64-bit mode from the tarball, --with-universal archs-64-bit and --enable-universalsdk= as detailed in Python.org's 2.6 news.

    So my questions, for anyone who has encountered this issue, are:

    1. Is installing 64-bit Python on OS X 10.5 worth bothering with?
    2. If so, (naive, lazy question!) how are the two required arguments combined?
    3. If I just skip along in 32-bit (as on my working setup), what am I missing?

    I'm after a hassle-free install that's easy to reproduce on other machines (possible student use), so I'd really welcome your opinions, please!
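
    For diagnosing the mismatch, a couple of standard commands report which architectures each binary actually contains (a sketch; the MySQL path assumes the default install location):

        $ python -c "import sys; print sys.maxint"   # 2147483647 means a 32-bit Python
        $ file $(which python)                       # lists the architectures in the binary
        $ file /usr/local/mysql/bin/mysqld           # compare against the MySQL build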


  • .NET RichTextBox: unable to change Rtf property.

    - by Dave
    Hey guys, perhaps I'm missing something real simple here, but I've been struggling to change the Rtf property of my RichTextBox in order to apply some color coding to my text. Probably the most straightforward example of the problem I'm having is setting the Rtf property to include a color table in its header. The default RTF string returned by the Rtf property:

        {\rtf1\ansi\ansicpg1252\deff0\deflang1033{\fonttbl{\f0\fnil\fcharset0 Microsoft Sans Serif;}}\viewkind4\uc1\pard\f0\fs17\par}

    And the new RTF string I'd like to set, with my color table:

        {\rtf1\ansi\ansicpg1252\deff0\deflang1033{\fonttbl{\f0\fnil\fcharset0 Microsoft Sans Serif;}{\colortbl;\red128\green0\blue0;\red0\green128\blue0;\red0\green0\blue255;}}\viewkind4\uc1\pard\f0\fs17\par}

    And I set this using:

        RichTextBox richTextBox = new RichTextBox();
        richTextBox.Rtf = rtfStr; // My new RTF string, as seen above.

    However, via the debugger it can be observed that the Rtf property stubbornly refuses to change; no exceptions are thrown, it just refuses to change. The same issue happens when I string.Replace() words to include RTF color tags around them. I've also tried turning off any ReadOnly properties on the text box. Any suggestions would be most helpful, thanks! Dave
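
    A hedged observation: the underlying RichEdit control re-parses and normalizes whatever is assigned to Rtf, silently discarding strings it considers malformed, so hand-edited headers are fragile. Letting the control build its own color table through selections sidesteps that:

        // Sketch: apply color via selections; the control emits \colortbl itself.
        RichTextBox box = new RichTextBox();
        box.AppendText("error: something failed");
        box.Select(0, 5);                    // select the word "error"
        box.SelectionColor = Color.Red;      // requires System.Drawing
        box.Select(box.TextLength, 0);       // collapse the selection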


  • Outlook 2010 Retrieving and restricting appointments programmatically causing recurrences to be incl

    - by Mike Dearing
    I wrote a WinForms app that uses Microsoft.Office.Interop.Outlook to retrieve and restrict appointments based upon the date range entered by a user. This worked fine with Outlook 2007 installed; however, now that some users have updated to Outlook 2010, the appointment retrieval is pulling back incorrect appointments along with the correct ones falling within the specified date range. The additional incorrect appointments being retrieved always appear to be recurring appointments. I was wondering if this is a known bug, and if so, what exactly is happening that is causing these additional recurring appointments to come in? I'd rather not have to throw in a workaround where I step through the items after they have been restricted and remove the extra ones, when this functionality works fine with 2007.

    Note: I've not recompiled or updated any code when experiencing this issue, just running the old program. This is the spot in my code where appointments are being restricted. This is similar to the way advised in the following MSDN link: http://msdn.microsoft.com/en-us/library/bb611267.aspx

        Microsoft.Office.Interop.Outlook.Items outlookItems = outlookMapiFolder.Items.Restrict(
            "[Start] >= '" + outlookImport.startDay.ToString("g") +
            "' AND [Start] <= '" + outlookImport.endDay.ToString("g") + "'");
        outlookItems.Sort("[Start]", Type.Missing);
        outlookItems.IncludeRecurrences = true;
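
    Worth checking against that MSDN page: the documented order for handling recurring items is to sort by [Start] and set IncludeRecurrences before calling Restrict, whereas the snippet above restricts first. A hedged reordering of the same code:

        Microsoft.Office.Interop.Outlook.Items items = outlookMapiFolder.Items;
        items.Sort("[Start]", Type.Missing);   // sort first
        items.IncludeRecurrences = true;       // then expand recurrences
        Microsoft.Office.Interop.Outlook.Items restricted = items.Restrict(
            "[Start] >= '" + outlookImport.startDay.ToString("g") +
            "' AND [Start] <= '" + outlookImport.endDay.ToString("g") + "'");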


  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed, it uses the existing data context if one exists, else it creates a new context.

    The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity around detaching and attaching entities from contexts. My solution is to share the context between posts by creating a single View Model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable and retrieving it from Session on subsequent page requests, therefore maintaining the same context across all posts until the profile is saved.

    This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model Session variable, or more importantly the size of the Session variable. So two questions, I suppose: firstly, should I look for a better solution to handle the shared context across posts (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
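
    A hedged sketch of the common alternative: keep only the in-progress profile data (IDs and simple fields) in Session, and give each request a single shared context via HttpContext.Items, so nothing holding connections or change trackers outlives the request. The type names here are placeholders:

        public static class RequestDataContext
        {
            private const string Key = "__dataContext";

            // One context per HTTP request, shared by every repository created in it.
            public static MyEntities Current
            {
                get
                {
                    var ctx = (MyEntities)HttpContext.Current.Items[Key];
                    if (ctx == null)
                    {
                        ctx = new MyEntities();   // placeholder context type
                        HttpContext.Current.Items[Key] = ctx;
                    }
                    return ctx;
                }
            }
        }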


  • CSS: Margin-top when parent's got no border

    - by Manny
    Hi, as you can see in this picture, I've got an orange div inside a green div with no top border. The orange div has a 30px top margin, but it's also pushing the green div down. Of course, adding a top border will fix the issue, but I need the green div to be top-borderless. What could I do?

    html:

        <div class="header">Top</div>
        <div class="body">
            <div class="container">Box</div>
        </div>
        <div class="foot">Bottom</div>

    css:

        .body {
            border: 1px solid black;
            border-top: none;
            border-bottom: none;
            width: 120px;
            height: 112px;
            background-color: lightgreen;
        }
        .body .container {
            background-color: orange;
            height: 50px;
            width: 50%;
            margin-top: 30px;
        }

    Thanks
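
    This looks like classic parent/child margin collapsing: with no top border or padding on the parent, the child's top margin collapses through it and moves the parent instead. A couple of hedged border-free alternatives:

        /* Option 1: trade the child's margin for padding on the parent */
        .body { padding-top: 30px; height: 82px; } /* height reduced to compensate */
        .body .container { margin-top: 0; }

        /* Option 2: stop the collapse by giving the parent a new formatting context */
        .body { overflow: hidden; }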


  • Problem sending email with Codeigniter - Headers sent in the message body

    - by Brian
    Having a strange issue with the email class in CodeIgniter. When I send email directly to my Gmail account email address, it works fine. However, if I send email to a different email address and use POP3 to import that email address into Gmail, then for some reason all the headers are included in the message. Here's the code for sending the email:

        $this->email->clear();
        $config['mailtype'] = "html";
        $this->email->initialize($config);
        $this->email->set_newline("\r\n");
        $this->email->from('[email protected]', 'Website');
        $this->email->to('[email protected]');
        $this->email->message($message);

    Here's what arrives in my inbox when the email is sent to an account which is imported into Gmail via POP3:

        Date: Fri, 7 Jan 2011 15:07:04 +0000
        From: "Website" <[email protected]>
        Reply-To: "[email protected]" <[email protected]>
        X-Sender: [email protected]
        X-Mailer: CodeIgniter
        X-Priority: 3 (Normal)
        Message-ID: <[email protected]>
        Mime-Version: 1.0
        Content-Type: multipart/alternative; boundary="B_ALT_4d272c1835c46"

        This is a multi-part message in MIME format.
        Your email application may not support this format.

        --B_ALT_4d272c1835c46
        Content-Type: text/plain; charset=utf-8
        Content-Transfer-Encoding: 8bit

        this is the email message content

        --B_ALT_4d272c1835c46
        Content-Type: text/html; charset=utf-8
        Content-Transfer-Encoding: quoted-printable

        <html>
        <body>
        <p>this is the email message content </p>
        </body>
        </html>

        --B_ALT_4d272c1835c46--
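
    A hedged guess at the cause: somewhere along the POP3 hop the header/body separator is being mangled, which is usually a line-ending problem; Gmail's direct delivery is more forgiving of bare LFs than some intermediate servers. CodeIgniter's email class lets you force CRLF pairs via config instead of set_newline() alone:

        $config['mailtype'] = 'html';
        $config['crlf']     = "\r\n";   // line endings inside the body
        $config['newline']  = "\r\n";   // line endings between headers
        $this->email->initialize($config);
        // If the host's mail() is the culprit, switching transports may also help:
        // $config['protocol'] = 'smtp';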


  • Best approach, Dynamic OpenXML in T-SQL

    - by Martin Ongtangco
    Hello, I'm storing XML values to an entry in my database. Originally, I extract the xml datatype in my business logic, then fill the XML data into a DataSet. I want to improve this process by loading the XML right in T-SQL, instead of getting the XML as a string and converting it in the BL. My issue is this: each XML entry is dynamic, meaning it can contain any columns created by the user. I tried using this approach, but it's giving me an error:

        CREATE PROCEDURE spXMLtoDataSet
            @id uniqueidentifier,
            @columns varchar(max)
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            DECLARE @name varchar(300);
            DECLARE @i int;
            DECLARE @xmlData xml;

            SELECT @xmlData = data, @name = name
            FROM XmlTABLES
            WHERE (tableID = ISNULL(@id, tableID));

            EXEC sp_xml_preparedocument @i OUTPUT, @xmlData;

            DECLARE @tag varchar(1000);
            SET @tag = '/NewDataSet/' + @name;

            DECLARE @statement varchar(max);
            SET @statement = 'SELECT * FROM OpenXML(@i, @tag, 2) WITH (' + @columns + ')';
            EXEC (@statement);

            EXEC sp_xml_removedocument @i;
        END

    where I pass a dynamically written @columns. For example:

        spXMLtoDataSet 'bda32dd7-0439-4f97-bc96-50cdacbb1518',
            'ID int, TypeOfAccident int, Major bit, Number_of_Persons int, Notes varchar(max)'

    but it kept throwing me this exception:

        Msg 137, Level 15, State 2, Line 1
        Must declare the scalar variable "@i".
        Msg 319, Level 15, State 1, Line 1
        Incorrect syntax near the keyword 'with'. If this statement is a common table expression
        or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
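
    The first error is a scope problem rather than an XML problem: EXEC(@statement) runs the string in a separate batch where @i and @tag were never declared. A hedged rework passes them in through sp_executesql (note @columns is still concatenated, so it must be validated against SQL injection):

        DECLARE @stmt nvarchar(max);
        SET @stmt = N'SELECT * FROM OPENXML(@i, @tag, 2) WITH (' + @columns + N')';

        EXEC sp_executesql
            @stmt,
            N'@i int, @tag varchar(1000)',  -- parameters visible inside the batch
            @i = @i,
            @tag = @tag;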


  • Update Related Entity Of Detached Entity

    - by Hemslingo
    I'm having an issue updating an entity with multiple related entities. I've got a very simple model which consists of an article entity and a list of categories the article can be related to. You can choose from a check box list which of these categories are associated with it, which works fine. The problem crops up when I actually come to update an existing entity using the dbContext. As I am updating this entity, I have already detached it from the context, ready to re-attach it later so the update can execute properly. I can see that after posting the model, the category(s) are added to the article entity just fine, and it looks like it updates in the repository with no errors occurring. When I look in the database, the article has updated as normal but the category(s) have not. Here is my (simplified) update code:

        public virtual bool Attach(T entity)
        {
            _dbContext.Entry(entity).State = EntityState.Modified;
            _dbSet.Attach(entity);
            return this.Commit();
        }

    Any help will be much appreciated.
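
    One hedged explanation: marking the article Modified only covers its scalar columns; the category links live in a relationship the context never saw change, because the graph was built while detached. A common workaround is to load the current article and rebuild the links explicitly (the type and property names below are illustrative):

        public virtual bool UpdateWithCategories(Article edited, int[] selectedCategoryIds)
        {
            var current = _dbContext.Set<Article>()
                                    .Include("Categories")       // load existing links
                                    .Single(a => a.Id == edited.Id);

            _dbContext.Entry(current).CurrentValues.SetValues(edited); // scalar columns

            current.Categories.Clear();                                // relationship rows
            foreach (var id in selectedCategoryIds)
                current.Categories.Add(_dbContext.Set<Category>().Find(id));

            return this.Commit();
        }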


  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.

    ~ Andrew
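
    Independent of the index question, a couple of SqlBulkCopy knobs are worth trying; a hedged sketch (the connection string and source DataTable are placeholders). TableLock in particular takes a bulk-update lock, one of the prerequisites for minimally logged inserts:

        using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
        {
            bulk.DestinationTableName = "BulkData";
            bulk.BatchSize = 5000;         // fewer, larger commits than 300-row chunks
            bulk.BulkCopyTimeout = 0;      // don't time out on long loads
            bulk.WriteToServer(dataTable); // dataTable prepared by the client as before
        }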


  • How to determine the data type of a CvMat

    - by Chris
    When using the CvMat type, the type of the data is crucial to keeping your program running. For example, depending on whether your data is of type float or unsigned char, you would choose one of these two commands:

        cvmGet(mat, row, col);
        cvGetReal2D(mat, row, col);

    Is there a universal approach to this? If the wrong data type of matrix is passed to these calls, they crash at runtime. This is becoming an issue, since a function I have defined is getting passed several different types of matrices. How do you determine the data type of a matrix so you can always access its data? I tried using the "type()" function, as such:

        CvMat* tmp_ptr = cvCreateMat(t_height, t_width, CV_8U);
        std::cout << "type = " << tmp_ptr->type() << std::endl;

    This does not compile, saying "term does not evaluate to a function taking 0 arguments". If I remove the brackets after the word type, I get a type of 1111638032.

    EDIT: a minimal application that reproduces this:

        int main(int argc, char** argv)
        {
            CvMat* tmp2 = cvCreateMat(10, 10, CV_32FC1);
            std::cout << "tmp2 type = " << tmp2->type
                      << " and CV_32FC1 = " << CV_32FC1
                      << " and " << (tmp2->type == CV_32FC1) << std::endl;
        }

    Output:

        tmp2 type = 1111638021 and CV_32FC1 = 5 and 0
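
    A hedged pointer at why the raw field looks wrong: type is a bit field that packs the element type together with internal flags, which is why CV_32FC1 (5) comes back as 1111638021. The C API's accessor and macros strip the flags:

        #include <opencv/cxcore.h>
        #include <iostream>

        int main()
        {
            CvMat* mat = cvCreateMat(10, 10, CV_32FC1);

            int t = cvGetElemType(mat);             // == CV_32FC1 (5), flags stripped
            std::cout << (t == CV_32FC1) << "\n";   // prints 1

            // The same information via macros on the raw field:
            int depth    = CV_MAT_DEPTH(mat->type); // CV_32F
            int channels = CV_MAT_CN(mat->type);    // 1

            // So a dispatch might look like:
            if (t == CV_32FC1) { /* use cvmGet */ }
            else               { /* use cvGetReal2D */ }

            cvReleaseMat(&mat);
            return 0;
        }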


  • NTLM Authentication fails when behind Proxy server

    - by Jan Petersen
    Hi all, I've seen a number of posts about consuming web services from behind a proxy server, but none that seems to address this problem. I'm building a desktop application using Java and JAX-WS in NetBeans. I have a working prototype that can query the server for its authentication mode, successfully authenticate, and retrieve a list of web sites. However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I'm running into trouble. I have sniffed the traffic and noticed the following:

    Behind proxy:

        #  Result  Protocol  Host             URL
        1  200     HTTP      host.domain.com  /_vti_bin/Authentication.asmx
        2  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        3  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        4  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        5  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx

    Without proxy:

        #  Result  Protocol  Host             URL
        1  200     HTTP      host.domain.com  /_vti_bin/Authentication.asmx
        2  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        3  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        4  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        5  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        6  200     HTTP      host.domain.com  /_vti_bin/Webs.asmx

    When running the code from a network without a proxy server, I successfully authenticate with the server, but when I'm behind the proxy server, the traffic is cut off at the 5th message, and authentication thus doesn't succeed. I know from the Java docs that:

        On Microsoft Windows platforms, NTLM authentication attempts to acquire the user
        credentials from the system without prompting the user's authenticator object.
        If these credentials are not accepted by the server then the user's authenticator
        will be called.

    Given that my authentication code is called only once, and only as the 5th attempt, it appears as if, behind the proxy server, the connection is dropped before my Authenticator object is used. Is there any way I can control the behavior of the authentication module so it does not use the system credentials? I have put the source and class files of a demo app showing the issue up at the following URL (it's a bit too long, even in short demo form, to post here): link text

    Br, Jan
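
    For the last question, a hedged sketch: java.net routes authentication challenges through whatever Authenticator is installed, so registering one explicitly with real credentials at least makes the code path deterministic; whether it fully bypasses the transparent Windows NTLM handshake is JRE-version dependent:

        import java.net.Authenticator;
        import java.net.PasswordAuthentication;

        Authenticator.setDefault(new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                // DOMAIN\\user and the password are placeholders.
                return new PasswordAuthentication("DOMAIN\\user", "secret".toCharArray());
            }
        });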


  • Firefox and IE - Firefox handling of <span> inside tables (html and css)

    - by jasrpg
    Hello, I am having difficulties getting a span tag to work properly inside a table. It appears as if the entire span tag defined anywhere between table tags is being ignored in Firefox, but in IE this shows up correctly. Maybe I am missing something, but I have created a small example CSS and HTML file that displays differently in Firefox and IE. Hopefully someone can tell me why it is behaving this way, or how I can rearrange the HTML to resolve the issue in Firefox.

    main.css:

        .class1 A:link {color:#960033; text-decoration: none}
        .class1 A:visited {color:#960033; text-decoration: none}
        .class1 A:hover {color:#0000FF; font-weight: bold; text-decoration: none}
        .class1 A:active {color:#0000FF; font-weight: bold; text-decoration: none}
        .class2 A:link {color:#960033; text-decoration: none}
        .class2 A:visited {color:#960033; text-decoration: none}
        .class2 A:hover {color:#0000FF; text-decoration: none}
        .class2 A:active {color:#0000FF; text-decoration: none}

    test.htm:

        <HTML>
        <HEAD>
        <title>Title Page</title>
        <style type="text/css">@import url("/css/main.css");</style>
        </HEAD>
        <span class="class1">
        <BODY>
        <table><tr><td>
        ---Insert Hyperlink---<br>
        </td></tr>
        </span><span class="class2">
        <tr><td>
        ---Insert Hyperlink---<br>
        </td></tr></table>
        </span>
        </BODY>
        </HTML>
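
    A hedged explanation: a <span> is not valid as a direct child of <table> (or wrapped around <body>), so Firefox's parser relocates the stray spans outside the table during error recovery, and the class scope goes with them; IE's older parser happens to leave them in place. Putting the classes on the rows keeps the markup valid in both browsers:

        <!-- Selectors then become tr.class1 A:link { ... }, tr.class2 A:link { ... } -->
        <table>
          <tr class="class1"><td>---Insert Hyperlink---<br></td></tr>
          <tr class="class2"><td>---Insert Hyperlink---<br></td></tr>
        </table>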


  • NHibernate: Mapping multiple classes from a single table row

    - by Michael Kurtz
    I couldn't find an answer to this specific question. I am trying to keep my domain model object-oriented and re-use objects where possible. I am having an issue determining how to provide a mapping to multiple classes from a single row. Let me explain with an example: I have a single table, call it Customer. A customer has several attributes, but for brevity, assume it has Id, Name, Address, City, State, ZipCode. I would like to create Customer and Address classes that look like this:

        public class Customer
        {
            public virtual long Id { get; set; }
            public virtual string Name { get; set; }
            public virtual Address Address { get; set; }
        }

        public class Address
        {
            public virtual string Address { get; set; }
            public virtual string City { get; set; }
            public virtual string State { get; set; }
            public virtual string ZipCode { get; set; }
        }

    What I am having trouble with is determining what the mapping would be for the Address class within the Customer class. There is no Address table, and there isn't a "set" of addresses associated with a Customer. I just want a more object-oriented view of the Customer table in code. There are several other tables that have address information in them, and it would be nice to have a reusable Address class to deal with them. Addresses are not shared, so breaking all addresses into a separate table with foreign keys seems to be overkill and, actually, more painful, since I would need foreign keys to multiple tables. Can someone enlighten me on this type of mapping? Please provide an example if you can. Thanks for any insights! -Mike
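
    This is what NHibernate's <component> mapping is for: it maps a class onto columns of the owning table rather than onto a table of its own, and the same component class can be reused by every entity that carries address columns. A hedged sketch of the Customer mapping, with column names assumed from the question:

        <class name="Customer" table="Customer">
          <id name="Id">
            <generator class="native"/>
          </id>
          <property name="Name"/>
          <component name="Address" class="Address">
            <property name="Address" column="Address"/>
            <property name="City"/>
            <property name="State"/>
            <property name="ZipCode"/>
          </component>
        </class>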


  • Loading a UITableView From A Nib

    - by Garry
    Hi, I keep getting a crash when loading a UITableView. I am trying to use a cell defined in a nib file. I have an IBOutlet defined in the view controller header file:

        UITableViewCell *jobCell;

        @property (nonatomic, assign) IBOutlet UITableViewCell *jobCell;

    This is synthesised in the implementation file. I have a UITableViewCell created in IB and set its identifier to JobCell. Here is the cellForRowAtIndexPath method:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *cellIdentifier = @"JobCell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier];
            if (cell == nil) {
                [[NSBundle mainBundle] loadNibNamed:@"JobsRootViewController" owner:self options:nil];
                cell = jobCell;
                self.jobCell = nil;
            }

            // Get this job
            Job *job = [fetchedResultsController objectAtIndexPath:indexPath];

            // Job title
            UILabel *jobTitle;
            jobTitle = (UILabel *)[cell viewWithTag:tagJobTitle];
            jobTitle.text = job.title;

            // Job due date
            UILabel *dueDate;
            dueDate = (UILabel *)[cell viewWithTag:tagJobDueDate];
            dueDate.text = [self.dateFormatter stringFromDate:job.dueDate];

            // Notes icon
            UIImageView *notesImageView;
            notesImageView = (UIImageView *)[cell viewWithTag:tagNotesImageView];
            if ([job.notes length] > 0) {
                // This job has a note attached to it - show the notes icon
                notesImageView.hidden = NO;
            } else {
                // Hide the notes icon
                notesImageView.hidden = YES;
            }

            // Job completed button

            // Return the cell
            return cell;
        }

    When I run the app, I get a hard crash and the console reports the following:

        objc[1291]: FREED(id): message style sent to freed object=0x4046400

    I have hooked up all the outlets in IB correctly. What is the issue? Thanks,
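
    A hedged diagnosis: with an assign property, nothing owns the nib-loaded cell once the autorelease pool drains, so the next message lands on a freed object. Apple's nib-loaded-cell pattern uses a retain outlet, with the rest of the code unchanged:

        // Header: let the outlet own the cell while it is being handed over.
        @property (nonatomic, retain) IBOutlet UITableViewCell *jobCell;

        // cellForRowAtIndexPath: the same code now works, because the retained
        // outlet keeps the cell alive until it is handed to the table view.
        if (cell == nil) {
            [[NSBundle mainBundle] loadNibNamed:@"JobsRootViewController" owner:self options:nil];
            cell = jobCell;
            self.jobCell = nil;   // releases our ownership after the handoff
        }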


  • ASP.NET MVC intermittent slow response

    - by arehman
    Problem: In our production environment, the system occasionally delays the page response of an ASP.NET MVC application by up to 30 seconds or so, even though the same page renders in 2-3 seconds most of the time. This happens randomly with any arbitrary page, and with both GET and POST requests. For example, log files indicate the system took 15 seconds to complete a request for the jQuery script file, or 10 seconds for another small CSS file.

    Similar problems: Random Slow Downs

    Production environment:

    - Windows Server 2008 - Standard (32-bit) - app pool running in integrated mode
    - ASP.NET MVC 1.0

    We have tried the following, with these observations:

    - Moved the application to a standalone web server, but it didn't help.
    - We never noticed the same issue on the server for any ASP.NET application.
    - App pool settings are fine; no abrupt recycles/shutdowns.
    - No CPU spikes or memory problems.
    - No delays due to SQL queries or so.

    It seems as if something is causing a delay along the HTTP pipeline, or the worker process is seeing the request late. Looking for other suggestions. -- Thanks


  • How to Persist URL parameters when CakePHP form validation fails

    - by am2605
    Hi, I'm new to CakePHP and trying to write a simple app with it; however, I'm stuck with some form validation issues. I have a model named "Person" which hasMany "PersonSkill" objects. To add a PersonSkill to a person, I have set it up to call a URL like this:

        http://localhost/myapp/person_skills/add/person_id:3

    I have been passing through the person_id because I want to display the name of the person we are adding the skills for. My issue is that if the validation fails, the person_id parameter is not persisted to the next request, so the person's name is not displayed. The add method on the controller looks like this:

        function add() {
            if (!empty($this->data)) {
                if ($this->PersonSkill->save($this->data)) {
                    $this->Session->setFlash('Your person has been saved.');
                    $this->redirect(array('action' => 'view', 'id' => $this->PersonSkill->id));
                }
            } else {
                $this->Person->id = $this->params['named']['person_id'];
                $this->set('person', $this->Person->read());
            }
        }

    In my person_skill add.ctp I set a hidden field which holds the person_id, e.g.:

        echo $form->input('person_id', array('type' => 'hidden', 'value' => $person['Person']['id']));

    Is there a way to persist the person_id url parameter when form validation fails, or is there a better way to do this that I'm missing completely? Any advice would be greatly appreciated.
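
    A hedged rework of the controller: when save() fails validation, execution currently falls out of the outer if without ever reaching the else branch, so $person is never set for the re-rendered view. Letting the lookup run on every path, and reading the id from the hidden field when the named parameter is absent, keeps the name on screen:

        function add() {
            if (!empty($this->data)) {
                if ($this->PersonSkill->save($this->data)) {
                    $this->Session->setFlash('Your person skill has been saved.');
                    $this->redirect(array('action' => 'view', 'id' => $this->PersonSkill->id));
                }
                // validation failed: fall through so the view gets its data again
            }
            $person_id = isset($this->params['named']['person_id'])
                ? $this->params['named']['person_id']
                : $this->data['PersonSkill']['person_id']; // from the hidden field
            $this->Person->id = $person_id;
            $this->set('person', $this->Person->read());
        }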

