Search Results

Search found 2287 results on 92 pages for 'reads'.

Page 70/92 | < Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >

  • State Animation on ListBox ItemTemplate

    - by Peanut
    I have a ListBox which reads from an ObservableCollection and uses an ItemTemplate:

        <DataTemplate x:Key="DataTemplate1">
            <Grid x:Name="grid" Height="47.333" Width="577" Opacity="0.495">
                <Image HorizontalAlignment="Left" Margin="10.668,8,0,8" Width="34" Source="{Binding ImageLocation}"/>
                <TextBlock Margin="56,8,172.334,8" TextWrapping="Wrap" Text="{Binding ApplicationName}" FontSize="21.333"/>
                <Grid x:Name="grid1" HorizontalAlignment="Right" Margin="0,10.003,-0.009,11.33" Width="26" Opacity="0" RenderTransformOrigin="0.5,0.5">
                    <Image HorizontalAlignment="Stretch" Margin="0" Source="image/downloads.png" Stretch="Fill" MouseDown="Image_MouseDown" />
                </Grid>
            </Grid>
        </DataTemplate>

        <ListBox x:Name="searchlist" Margin="8" ItemTemplate="{DynamicResource DataTemplate1}" ItemsSource="{Binding SearchResults}" SelectionChanged="searchlist_SelectionChanged" ItemContainerStyle="{DynamicResource ListBoxItemStyle1}" />

    In general, my question is: what is the easiest way to animate particular items in this ListBox as they are selected? Basically, the image inside "grid1" should have its opacity set to 1, slowly. I would prefer to use states, but I do not know of any way to just tell Blend and XAML "when the selected item changes, animate the image opacity to 1 over a period of 0.3 seconds". In fact, I have been doing this in the .cs file using the VisualStateManager.

    There is also another issue. When the selected index changes, we go to the code-behind and look at SelectedItem. SelectedItem returns the object the item was bound to (the object inside the ObservableCollection), and NOT the DataTemplate/ListBoxItem instance. So how am I able to pull the correct image out of this list? State animation with the VisualStateManager I can handle fine for normal elements, but when it comes to a generated ListBox's items, I'm lost. Thanks

  • sharing message object between web applications

    - by jezhilvalan
    I need to share Java Mail message objects between two web applications (A and B). Web application A obtains the message and writes it to an output stream:

        for (int i = 0; i < messagesArr.length; i++) {
            uid = pop3FolderObj.getUID(messagesArr[i]);
            // storing messages with uid names in order to maintain uniqueness
            File f = new File("F:/PersistedMessagesFolder" + uid);
            FileOutputStream fos = new FileOutputStream(f);
            messagesArr[i].writeTo(fos);
            fos.flush();
            fos.close();
        }

    Is FileOutputStream the best output stream for persisting message objects? Is it possible to use ObjectOutputStream for message object persistence?

    Web application B reads the message object via an input stream:

        FileInputStream fis = new FileInputStream("F:/MessagesPersistedFolder" + uid);
        MimeMessage mm = new MimeMessage(sessionObj, fis);

    What if the mail message object already written by web application A is not a MimeMessage? How can I read non-MIME messages using an input stream? The MimeMessage constructor mandates sessionObj as the first parameter; how can I obtain this sessionObj in web application B? Do I have to establish a store connection again with the same email id, email password, POP server and port (already used in web application A) in order to obtain this session object? Even if obtained, will this session object remain the same as the session object obtained earlier in web application A? Since I am using UIDs to name the message files (in order to keep the file names unique), how can I share these UIDs between web application A and web application B? Web application B needs the UID in order to access the specific file present in "F:/MessagesPersistedFolder". Please help me resolve these issues.
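
    As a minimal sketch of the persist-and-restore round trip described above (my own illustration, not the poster's code; the class and method names are made up and the directory is a placeholder): MimeMessage.writeTo stores the message as RFC 822 text rather than Java serialization, and the MimeMessage(Session, InputStream) constructor can rebuild it in the second application from a plain Session.getInstance, so no second store connection is needed just to parse the file.

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.util.Properties;
        import javax.mail.Session;
        import javax.mail.internet.MimeMessage;

        public class MessageRoundTrip {

            // Writer side (web application A): persist one message under its UID.
            // writeTo() emits RFC 822 text, so no ObjectOutputStream is needed.
            static void persist(MimeMessage msg, String uid, File dir) throws Exception {
                FileOutputStream fos = new FileOutputStream(new File(dir, uid));
                try {
                    msg.writeTo(fos);
                } finally {
                    fos.close();
                }
            }

            // Reader side (web application B): rebuild the message from the file.
            // A plain Session.getInstance is sufficient; no POP3 store connection
            // is required just to parse the persisted file.
            static MimeMessage restore(String uid, File dir) throws Exception {
                Session session = Session.getInstance(new Properties());
                FileInputStream fis = new FileInputStream(new File(dir, uid));
                try {
                    return new MimeMessage(session, fis);
                } finally {
                    fis.close();
                }
            }
        }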

  • php sessions in database only writing part of information to the table...

    - by Ronedog
    I'm having difficulty figuring out what's going on here, and hoping someone can help me out. I am using PHP and MySQL and storing my session information in the database. The app is only running on localhost, on Vista. In the php.ini file I commented out the "session.save_handler = files" line and am using a PHP class to handle the session writes/reads, etc.

    My login process is this: submit login credentials via login.php; login.php calls loginprocess.php; loginprocess.php verifies the user and, if valid, starts a new session and adds data to the session vars, then redirects to index.php.

    Here's the problem. The loginprocess.php page sets a bunch of session vars like $_SESSION['account_id'] = $account_id; etc., but when I go to index.php and do a var_dump($_SESSION) it just says "array() empty". However, if I do a var_dump($_SESSION) in loginprocess.php, just before the redirection line header("Location: ../index.php");, then it shows all the data in the session variable.

    If I look in the database where the session information is stored, there is data in the session_id field, created_ts field, and expires field, but the session_data field has nothing in it, and in the past this is the field where all my session data was stored. How could I be able to var_dump the session in loginprocess.php, yet the data not exist in the db table? Is it using some kind of caching? I cleared my cookies, etc., but no change. Why is the session_id being written to the table, but the actual session data is not? Any ideas are appreciated. Thanks.

  • Why am I stuck on "Initiating update" when deploying to Google?

    - by michelle
    I have not had any trouble deploying through Eclipse until now. I'm guessing it might have to do with all the stuff I've added today: a folder of .pdf and .tex files (in the war/WEB-INF directory), a bit of JDO stuff, and a servlet that reads the files in the directory and indexes them into JDO. Is there any way to find out what exactly the problem is? I currently get stuck at "Initiating update" and the stack trace says "Connection reset". Any help or input will be appreciated; I really need to deploy this today, thanks!

    Here's the deploy trace:

        Unable to update:
        java.net.SocketException: Connection reset
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
            at java.lang.reflect.Constructor.newInstance(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection$6.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.net.www.protocol.http.HttpURLConnection.getChainedException(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at java.net.HttpURLConnection.getResponseCode(Unknown Source)
            at com.google.appengine.tools.admin.ServerConnection.getAuthCookie(ServerConnection.java:315)
            at com.google.appengine.tools.admin.ServerConnection.authenticate(ServerConnection.java:219)
            at com.google.appengine.tools.admin.ServerConnection.send(ServerConnection.java:145)
            at com.google.appengine.tools.admin.ServerConnection.post(ServerConnection.java:81)
            at com.google.appengine.tools.admin.AppVersionUpload.send(AppVersionUpload.java:427)
            at com.google.appengine.tools.admin.AppVersionUpload.beginTransaction(AppVersionUpload.java:241)
            at com.google.appengine.tools.admin.AppVersionUpload.doUpload(AppVersionUpload.java:98)
            at com.google.appengine.tools.admin.AppAdminImpl.update(AppAdminImpl.java:56)
            at com.google.appengine.eclipse.core.proxy.AppEngineBridgeImpl.deploy(AppEngineBridgeImpl.java:271)
            at com.google.appengine.eclipse.core.deploy.DeployProjectJob.runInWorkspace(DeployProjectJob.java:148)
            at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
        Caused by: java.net.SocketException: Connection reset
            at java.net.SocketInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getHeaderFieldKey(Unknown Source)
            at com.google.appengine.tools.util.ClientCookieManager.readCookies(ClientCookieManager.java:123)
            at com.google.appengine.tools.admin.ServerConnection.connect(ServerConnection.java:340)
            at com.google.appengine.tools.admin.ServerConnection.getAuthCookie(ServerConnection.java:314)
            ... 11 more

  • ASP.NET error on Bitmap.Save "Exception (0x80004005): A generic error occurred in GDI+."

    - by Batu
    Hi, I have a function which first reads an image from disk, resizes it, and then saves it to another directory. When I use Bitmap.Save(directory + theimagename) it returns the error stated in the question title. I checked that the directory is right and that the given image name doesn't already exist in that directory. What is weird is that the same code works great on the local machine, but when I upload it to my shared server it just doesn't work. The code is below.

        bmpOut = new Bitmap(Size, Size);
        Graphics g = Graphics.FromImage(bmpOut);
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.FillRectangle(Brushes.White, 0, 0, Size, Size);
        int topBottomPadding = 0;
        int leftRightPadding = 0;
        if (Size > lnNewWidth + 1)
            leftRightPadding = Convert.ToInt32((Size - lnNewWidth) / 2);
        else if (Size > lnNewHeight + 1)
            topBottomPadding = Convert.ToInt32((Size - lnNewHeight) / 2);
        g.DrawImage(loBMP, leftRightPadding, topBottomPadding, lnNewWidth, lnNewHeight);
        Bitmap bmp = new Bitmap(bmpOut);
        if (bmp != null)
            bmp.Save(ResizedOutput);
        bmp.Dispose();
        bmpOut.Dispose();
        g.Dispose();
        loBMP.Dispose();

    Stack trace:

        [ExternalException (0x80004005): A generic error occurred in GDI+.]
            System.Drawing.Image.Save(String filename, ImageCodecInfo encoder, EncoderParameters encoderParams) +377630
            System.Drawing.Image.Save(String filename, ImageFormat format) +69
            System.Drawing.Image.Save(String filename) +25
            Utilities.ResizeImage(String fileName, String mode) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Utilities.cs:181
            Link.ToProductImage(String fileName) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Link.cs:79
            Product.PopulateControls(ProductDetails pd) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:37
            Product.Page_Load(Object sender, EventArgs e) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:20

  • Does anyone know the algorithm Facebook uses to detect images when adding a link?

    - by Edelcom
    When you add a link to your Facebook page, after some processing, Facebook presents you with next/prev buttons to choose an image linked to the URL you are inserting. Obviously, Facebook reads the HTML page and displays the images found at the URL you insert. Does anyone know what algorithm Facebook uses to decide which images to show?

    If I insert a link to http://www.staplijst.be/lachende-wandelaars-aalter-aktivia-003.asp, only 11 images are detected. The one I want, the one at the top right corner, is not included in the list. If I insert a link to http://www.staplijst.be/stichting-kennedymars-rijsbergen-zundert-nederland-knblo-nl-81996.asp, 19 images are displayed (including the one I want, at the top right corner of the text area). Both pages are built using ASP code and are functionally the same.

    I thought it had something to do with the image size, but I can't find any deciding factor there. I will investigate further, because if I know what Facebook is looking for, I can make sure that the correct images are included on the page (since they are dynamic pages built with classic ASP). But does anyone have any idea? Help would be appreciated.

  • Getting XML Numbered Entities with PHP 5 DOM

    - by user343607
    Hello guys, I am new here and have a question that has been tricking me all day long. I've made a PHP script that reads a website's source code through cURL and then works with the DOMDocument class in order to generate a sitemap file. It is working like a charm in almost every aspect. The problem is with special characters.

    For compatibility reasons, sitemap files need to have all special chars encoded as numbered entities, and I am not achieving that. For example, one of my entries - automatically read from the site URLs and written to the sitemap file - is:

        http://www.somesite.com/serviços/redesign/

    In the source code the ç should appear as a numbered entity, i.e. http://www.somesite.com/servi&#231;os/redesign/. Just this. But unfortunately, I am really not figuring out how to do it. The source code file, server headers, etc., everything is encoded as UTF-8. I'm using DOMDocument and related extensions to build the XML (basically DOMDocument, $obj->createElement, $obj->appendChild).

    htmlentities gives the named entity &ccedil; instead of the numbered &#231;. str_replace does not work; it makes the character just vanish in the output. I was using $obj->createElement("loc", $url) in my code, and just now I read in the PHP manual that I should use $document->createTextNode($page) in order to have entity encoding support. Well, it is not working either. Any idea on how to get unstuck from this? Thanks.

  • How Does One Make a Scala Control Abstraction for Repeat-Until?

    - by peter_pilgrim
    Hi, I am Peter Pilgrim. I watched Martin Odersky create a control abstraction in Scala. However, I cannot yet seem to repeat it inside IntelliJ IDEA 9. Is it the IDE?

        package demo

        class Control {

          def repeatLoop(body: => Unit) = new Until(body)

          class Until(body: => Unit) {
            def until(cond: => Boolean) {
              body
              val value: Boolean = cond
              println("value=" + value)
              if (value) repeatLoop(body).until(cond)
              // if (cond) until(cond)
            }
          }

          def doTest2(): Unit = {
            var y: Int = 1
            println("testing ... repeatUntil() control structure")
            repeatLoop {
              println("found y=" + y)
              y = y + 1
            } {
              until(y < 10)
            }
          }
        }

    The error message reads:

        Information:Compilation completed with 1 error and 0 warnings
        Information:1 error
        Information:0 warnings
        C:\Users\Peter\IdeaProjects\HelloWord\src\demo\Control.scala
            Error:Error:line (57)error: Control.this.repeatLoop({
              scala.this.Predef.println("found y=".+(y));
              y = y.+(1)
            }) of type Control.this.Until does not take parameters
            repeatLoop {

    In the curried function the body can be thought of as returning an expression (the value of y + 1), but the declaration of the body parameter of repeatLoop clearly says this can be ignored, or not? What does the error mean?

  • Raw types and subtyping

    - by Dmitrii
    We have a generic class:

        class SomeClass<T> { }

    We can write the line:

        SomeClass s = new SomeClass<String>();

    That is OK, because the raw type is a supertype of the generic type. But

        SomeClass<String> s = new SomeClass();

    is correct too. Why is it correct? I thought that type erasure happened before type checking, but that is wrong. From the Hacker's Guide to Javac: when the Java compiler is invoked with the default compile policy, it performs the following passes:

        parse:               Reads a set of *.java source files and maps the resulting token sequence into AST nodes.
        enter:               Enters symbols for the definitions into the symbol table.
        process annotations: If requested, processes annotations found in the specified compilation units.
        attribute:           Attributes the syntax trees. This step includes name resolution, type checking and constant folding.
        flow:                Performs data flow analysis on the trees from the previous step. This includes checks for assignments and reachability.
        desugar:             Rewrites the AST and translates away some syntactic sugar.
        generate:            Generates source files or class files.

    Generics are syntactic sugar, hence type erasure happens in the desugar pass (6), after type checking, which happens in the attribute pass (4). I'm confused.
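
    To make the two assignments concrete, here is a small self-contained Java sketch (my own illustration; only the class name SomeClass comes from the post). Both lines compile even though erasure has not run yet: widening to the raw type is ordinary subtyping, while going from the raw type to SomeClass<String> is accepted as an unchecked conversion that only produces an "unchecked" warning, not a type error.

        class SomeClass<T> { }

        public class RawTypesDemo {
            public static void main(String[] args) {
                // Parameterized to raw: plain subtyping, compiles cleanly.
                SomeClass raw = new SomeClass<String>();

                // Raw to parameterized: unchecked conversion, compiles with
                // an "unchecked" warning rather than an error.
                SomeClass<String> typed = new SomeClass();

                System.out.println(raw + " " + typed);
            }
        }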

  • Can shared memory be read and validated without mutexes?

    - by Bribles
    On Linux I'm using shmget and shmat to set up a shared memory segment that one process will write to and one or more processes will read from. The data being shared is a few megabytes in size and, when updated, is completely rewritten; it is never partially updated.

    I have my shared memory segment laid out as follows:

        -------------------------
        | t0 | actual data | t1 |
        -------------------------

    where t0 and t1 are copies of the time when the writer began its update (with enough precision such that successive updates are guaranteed to have differing times). The writer first writes to t1, then copies in the data, then writes to t0. The reader, on the other hand, reads t0, then the data, then t1. If the reader gets the same value for t0 and t1 then it considers the data consistent and valid; if not, it tries again.

    Does this procedure ensure that if the reader thinks the data is valid then it actually is? Do I need to worry about out-of-order execution (OOE)? If so, would the reader using memcpy to get the entire shared memory segment overcome the OOE issues on the reader side? (This assumes that memcpy performs its copy linearly, ascending through the address space. Is that assumption valid?)

  • Writing a VM - well formed bytecode?

    - by David Titarenco
    Hi, I'm writing a virtual machine in C just for fun. Lame, I know, but luckily I'm on SO so hopefully no one will make fun :) I wrote a really quick'n'dirty VM that reads lines of (my own) ASM and does stuff. Right now, I only have 3 instructions: add, jmp, end. All is well, and it's actually pretty cool being able to feed it lines (doing something like write_line(&prog[1], "jmp", regA, regB, 0);) and then running the program:

        while (machine.code_pointer <= BOUNDS && DONE != true) {
            run_line(&prog[machine.cp]);
        }

    I'm using an opcode lookup table (which may not be efficient but it's elegant) in C and everything seems to be working OK. My question is more of a "best practices" question, but I do think there's a correct answer to it. I'm making the VM able to read binary files (storing bytes in unsigned char[]) and execute bytecode. My question is: is it the VM's job to make sure the bytecode is well formed, or is it just the compiler's job to make sure the binary file it spits out is well formed?

    I only ask this because of what would happen if someone edited a binary file and screwed stuff up (deleted arbitrary parts of it, etc.). Clearly, the program would be buggy and probably not functional. Is this even the VM's problem? I'm sure that people much smarter than me have figured out solutions to these problems, I'm just curious what they are!

  • PHPExcel reading columns

    - by user1270150
    I am new to using PHPExcel and am struggling to implement a basic reader that can read only specified columns, with all rows, from a spreadsheet into an array, which I would like to present on a web page. After reading the supplied documentation and examples, I am struggling to wrap my head around the implementation of a number of them, so any help would be much appreciated. I am using the following code to get the contents of the default worksheet into an array:

        for ($row = 2; $row <= $highestRow; $row++) {
            $dataRow = $objWorksheet->rangeToArray('A'.$row.':'.$highestColumn.$row, null, true, true, true);
            if ((isset($dataRow[$row]['A'])) && ($dataRow[$row]['A'] > '')) {
                ++$r;
                foreach ($headingsArray as $columnKey => $columnHeading) {
                    $columnHeading = rtrim($columnHeading);
                    $columnHeading = preg_replace('/\s+/', ' ', $columnHeading);
                    $columnHeading = preg_replace('/\ /', '_', $columnHeading);
                    $columnHeading = strtolower($columnHeading);
                    $namedDataArray[$r][$columnHeading] = $dataRow[$row][$columnKey];
                }
            }
        }

    The above code reads through all the columns and rows and builds an array, but I would like to be able to add a configuration array that declares which columns should be read. I'm sure this is possible, so any help would be much appreciated. Thanks

  • Visual Studio crashes consistently on web-related projects

    - by Traveling Tech Guy
    Hi, I have a brand new VS2010 installed on a Win2008R2 machine. I started getting this error when debugging a WCF service project: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." When I started developing a web site a week later, this became consistent - I can't debug it. The stack dump reads:

        at Microsoft.VisualStudio.WebHost.Host.ProcessRequest(Connection conn)
        at Microsoft.VisualStudio.WebHost.Server.OnSocketAccept(Object acceptedSocket)
        at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
        at System.Threading.ThreadPoolWorkQueue.Dispatch()
        at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()

    I tried searching online, and some recommend turning off "Suppress JIT Optimizations" in the Debugging options - this does not seem to make a difference. Clearly the problem is with the built-in web server. But am I doing something wrong? Is there something I can do? Or is this a known bug? Thanks for your time, Guy

    Update 12/31: Today I tried using CassiniDev as a replacement for the original VS2010 web server - exact same result. My suspicion is that there's some internal conflict between VS2010, Windows Server 2008 R2, and maybe the fact that it's a 64-bit OS. I switched to using IIS as my debug server - and that seems to work, with some annoying side effects. My conclusion: do not use a 64-bit server system as your dev machine. Develop on 32-bit - deploy to 64-bit. Side conclusion: there are some scenarios Microsoft's QA doesn't test.

  • Reading same file from multiple threads in C#

    - by Gustavo Rubio
    Hi. I was googling for some advice about this and I found some links. The most obvious was this one, but in the end what I'm wondering is how well my code is implemented.

    I have basically two classes. One is the Converter and the other is ConverterThread. I create an instance of the Converter class, which has a property ThreadNumber that tells me how many threads should be run at the same time (this is read from the user), since this application will be used on multi-CPU systems (physically, like 8 CPUs), so it is supposed that this will speed up the import.

    The Converter instance reads a file that can range from 100 MB to 800 MB, and each line of this file is a tab-delimited value record that is imported to another destination, like a database. The ConverterThread class simply runs inside the thread (new Thread(ConverterThread.StartThread)) and has event notification, so when its work is done it can notify the Converter class, and then I can sum up the progress for all these threads and notify the user (in the GUI, for example) about how many of these records have been imported and how many bytes have been read.

    It seems, however, that I'm having some trouble, because I get random errors about the file not being able to be read, or about the sum of the progress (percentage) going above 100%, which is not possible. I think that happens because the threads are not being well managed and probably the information returned by the event is malformed (since it "travels" from one thread to another). Do you have any advice on better practices for implementing threads so I can accomplish this? Thanks in advance.

  • Performance: float to int cast and clipping result to range

    - by durandai
    I'm doing some audio processing with float. The result needs to be converted back to PCM samples, and I noticed that the cast from float to int is surprisingly expensive. What's furthermore frustrating is that I need to clip the result to the range of a short (-32768 to 32767). While I would normally instinctively assume that this could be assured by simply casting float to short, this fails miserably in Java, since on the bytecode level it results in F2I followed by I2S. So instead of a simple:

        int sample = (short) floatVal;

    I needed to resort to this ugly sequence:

        int sample = (int) floatVal;
        if (sample > 32767) {
            sample = 32767;
        } else if (sample < -32768) {
            sample = -32768;
        }

    Is there a faster way to do this? (About 6% of the total runtime seems to be spent on casting; while 6% does not seem that much at first glance, it's astounding when I consider that the processing part involves a good chunk of matrix multiplications and an IDCT.)

    EDIT: The cast/clipping code above is (not surprisingly) in the body of a loop that reads float values from a float[] and puts them into a byte[]. I have a test suite that measures total runtime on several test cases (processing about 200 MB of raw audio data). The 6% was concluded from the runtime difference when the cast assignment "int sample = (int) floatVal" was replaced by assigning the loop index to sample.

    EDIT @leopoldkot: I'm aware of the truncation in Java, as stated in the original question (F2I, I2S bytecode sequence). I only tried the cast to short because I assumed that Java had an F2S bytecode, which it unfortunately does not (coming originally from a 68K assembly background, where a simple "fmove.w FP0, D0" would have done exactly what I wanted).
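
    Not the poster's accepted solution, just a compact Java variant of the same clamp for comparison (the method name is made up): Math.min and Math.max are intrinsified by HotSpot, so this form should cost about the same as the if/else chain while keeping the single F2I cast; whether it actually wins inside this audio loop would need measuring.

        // Clamp a float sample into the 16-bit PCM range before storing it.
        static short toPcm16(float floatVal) {
            int sample = (int) floatVal;                        // one F2I, truncates toward zero
            sample = Math.max(-32768, Math.min(32767, sample)); // clamp via Math intrinsics
            return (short) sample;                              // value now fits in a short
        }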

  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a GridView inside of an UpdatePanel, and it displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant, what with the large db reads, but it takes on the order of seconds, not on the order of minutes.

    This is all working great until I put it up on the server - it is very, VERY slow when I access it via the internet - it takes several minutes to load the database information into the GridView. I'm baffled why it would not perform exactly the same as it did from Visual Studio. (It is in release mode and I have taken off the debug flag.) I have since been trying things like eliminating unneeded update panels and throwing out the Ajax tool. Nothing has made it any faster in production. It is not the database as far as I know, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server.

    I am wondering, where do I look next? Has anyone else had this problem before? Could this be caused by UpdatePanels or Ajax ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

  • My jQuery and PHP give different results on the same thing?

    - by Stefan
    Hey all, Annoying, brain-numbing problem. I have two functions to check the length of a string (the JS one truncates as well). Here's the one in JavaScript:

        $('textarea#itemdescription').keyup(function() {
            var charLength = $(this).val().length;
            // Displays count
            $('span#charCount').css({'color':'#666'});
            $('span#charCount').html(255 - charLength);
            if ($(this).val().length >= 240) {
                $('span#charCount').css({'color':'#FF0000'});
            }
            // Alerts when 250 characters is reached
            if ($(this).val().length >= 255) {
                $('span#charCount').css({'color':'#FF0000'});
                $('span#charCount').html('<strong>0</strong>');
                var text = $('textarea#itemdescription').val().substring(0, 255);
                $('textarea#itemdescription').val(text);
            }
        });

    And here is my PHP to double-check:

        if (strlen($_POST["description"]) > 255) {
            echo "Description must be less than ".strlen($_POST["description"])." characters";
            exit();
        }

    I'm using jQuery Ajax to post the values from the textarea. However, my PHP validation says strlen() is longer than what my JS is essentially saying. So, for example, if I type a solid string it says 0 or 3 chars left until 255; I then click save and the PHP gives me the length as being 261. Any ideas? Is it to do with special characters, or bit sizes that JS reads differently or misses out? Or is it to do with something else? Maybe it's ill today! :P Thanks, Stefan

  • Need help with strange Class#getResource() issue

    - by Andreas_D
    I have some legacy code that reads a configuration file from an existing jar, like:

        URL url = SomeClass.class.getResource("/configuration.properties");
        // some more code here using url variable
        InputStream in = url.openStream();

    Obviously it worked before, but when I execute this code the URL is valid, yet I get an IOException on the third line saying it can't find the file. The URL is something like "file:jar:c:/path/to/jar/somejar.jar!configuration.properties", so it doesn't look like a classpath issue - Java knows pretty well where the file can be found.

    The above code is part of an Ant task and it fails while the task is executed. Strangely enough, I copied the code and the jar file into a separate class and it works as expected; the properties file is readable. At some point I changed the code of the Ant task to

        URL url = SomeClass.class.getResource("/configuration.properties");
        // some more code here using url variable
        InputStream in = SomeClass.class.getResourceAsStream("/configuration.properties");

    and now it works - just until it crashes in another class where a similar access pattern is implemented. Why could it have worked before, and why does it fail now? The only difference I see at the moment is that the old build was done with Java 1.4 while I'm trying it with Java 6 now.

    Workaround: Today I installed Java 1.4.2_19 on the build server and made Ant use it. To my totally frustrating surprise, the problem is gone. It looks to me as if Java 1.4.2 can handle URLs of this type while Java 1.6 can't (at least in my context/environment). I'm still hoping for an explanation, although I'm facing the work of rewriting parts of the code to use Class#getResourceAsStream, which behaves much more stably...
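
    As a small illustration of the workaround the post converges on (my own sketch; the class name is made up, and it assumes configuration.properties sits at the classpath root inside the jar): Class#getResourceAsStream hands back the stream directly, so none of the URL-to-connection plumbing that apparently differs between JRE versions is involved.

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        public class ConfigLoader {
            static Properties loadConfiguration() throws IOException {
                // Resolve the resource against the classpath root, whether it
                // lives in a directory or inside a jar.
                InputStream in = ConfigLoader.class.getResourceAsStream("/configuration.properties");
                if (in == null) {
                    throw new IOException("configuration.properties not found on the classpath");
                }
                try {
                    Properties props = new Properties();
                    props.load(in);
                    return props;
                } finally {
                    in.close();
                }
            }
        }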

  • Cryptography: best practices for keys in memory?

    - by Johan
    Background: I have some data encrypted with AES (i.e. symmetric crypto) in a database. A server-side application, running on an (assumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB and writes back encrypted data, only dealing with the unencrypted data in memory. So, in order to do this, the app is required to have the key stored in memory.

    The question is: are there any good best practices for this, i.e. securing the key in memory? A few ideas:

        - Keeping it in unswappable memory (for Linux: setting SHM_LOCK with shmctl(2)?)
        - Splitting the key over multiple memory locations.
        - Encrypting the key. With what, and how do you keep the... key key... secure?
        - Loading the key from file each time it is required (slow, and if the evildoer can read our memory, he can probably read our files too).

    Some scenarios in which the key might leak: an evildoer getting hold of a memory dump/core dump; bad bounds checking in code leading to information leakage.

    The first one seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices? Thanks for any input!

  • Read specific line from text file, according to Checked Listbox selection number.

    - by Manolis
    Heya, I want to create an application which will read a specific line from a text file and show it in a textbox. The line will be chosen according to the number of the selection I make in the checked listbox. Here's the code:

        Public Class Form1

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            End Sub

            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Dim i As Integer
                For i = 0 To Me.CheckedListBox1.CheckedIndices.Count - 1
                    Me.CheckedListBox1.SetItemChecked(Me.CheckedListBox1.CheckedIndices(0), False)
                Next i
            End Sub

            Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
                If CheckedListBox1.CheckedItems.Count <> 0 Then
                    Dim reader As New System.IO.StreamReader(CurDir() & "\" & "READ.txt")
                    Dim x As Integer
                    Dim s As String = ""
                    For x = 0 To CheckedListBox1.CheckedItems.Count - 1
                        s = s & "Answer " & (x + 1).ToString & ") " & CheckedListBox1.CheckedItems(x).ToString & ControlChars.CrLf & reader.ReadLine() & ControlChars.CrLf & ControlChars.CrLf
                    Next x
                    Answer.Text = (s)
                Else
                    MessageBox.Show("Please select questions.", "Error", _
                                    MessageBoxButtons.OK, _
                                    MessageBoxIcon.Information)
                    Return
                End If
            End Sub

        End Class

    So let's say I check the first, second, and fifth items in the checked listbox; I want it to read the first, second, and fifth lines of text from the text file and show them in the textbox. The current code just reads lines 1, 2, 3 (...) in order, no matter which items I have checked. Thanks in advance!

  • rails named_scope issue with eager loading

    - by Craig
    Two models (Rails 2.3.8):

        User: username and disabled properties; User has_one :profile
        Profile: full_name and hidden properties

    I am trying to create a named_scope that eliminates the disabled=1 and hidden=1 user-profiles. Moreover, while the User model is usually used in conjunction with the Profile model, I would like the flexibility to specify this using the :include => :profile syntax. I have the following User named_scope:

        named_scope :visible, {
          :joins => "INNER JOIN profiles ON users.id=profiles.user_id",
          :conditions => ["users.disabled = ? AND profiles.hidden = ?", false, false]
        }

    This works as expected when just referencing the User model:

        >> User.visible.map(&:username).flatten
        => ["user a", "user b", "user c", "user d"]

    However, when I attempt to include the Profile model:

        User.visible(:include => :profiles).profile.map(&:full_name).flatten

    I get an error that reads:

        NoMethodError: undefined method `profile' for #<User:0x1030bc828>

    Am I able to cross model-collection boundaries in this manner?

  • C# : Forcing a clean run in a long running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads 2 columns from a SQL db table. Once it has done its bit, it then starts again, selecting another 2 columns. I would pull the whole lot in one go, but that presents a whole other set of challenges.

    My problem is that the table contains a large amount of data (some 3 million rows or so), which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column and then one of the other columns, and running each value in the column through a validation pipeline where the results are stored in another database.

    My problem is that when the reader hits the end of handling one column, I need to force it to immediately clean up every little block of RAM used, as this process uses about 700 MB and it has about 200 columns to go through. Without a full garbage collect I will definitely run out of RAM. Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out, but unfortunately that isn't happening for some reason.

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files and that these are slow over NFS; this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger.

    While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

        - Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
        - Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
        - Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?

  • How to get application context path in spring-ws?

    - by Dhaliwal
    I am using Spring-WS to create a webservice. In my project, I have created a helper class that reads sample response and request XML files located in my /src/main/resources folder. When I am unit-testing my webservice application "locally", I use System.getProperty("user.dir") to get the application context folder. The following is a method that I created in the helper class to retrieve the file I am interested in from the resources folder:

        public static File getFileFromResources(String filename) {
            System.out.println("Getting file from resource folder");
            File request = null;
            String curDir = System.getProperty("user.dir");
            String contextpath = "C:\\src\\main\\resources\\";
            request = new File(curDir + contextpath + filename);
            return request;
        }

    However, after publishing the compiled WAR file to the ../webapps folder of the Apache Tomcat directory, I realised that System.getProperty("user.dir") no longer returns my application context path. Instead, it returns the Apache Tomcat root directory, as in:

        C:\Program Files\Apache Software Foundation\Tomcat 6.0\src\main\resources\SampleClientFile

    I can't seem to find any information about getting the root folder of my webservice. I have seen examples of a Spring web application where the context path can be retrieved using:

        request.getSession().getServletContext().getContextPath()

    But in this case I am using Spring-WS, where my entry point is an endpoint rather than a servlet request. How can I get the context path of my webservice application? I am expecting a context path something like:

        C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\clientWebService\WEB-INF\classes

    Could someone suggest a way to achieve this?
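
    One hedged way around this, rather than a Spring-WS-specific API (my own sketch; the class name and filename parameter are made up): since src/main/resources ends up on the classpath (WEB-INF/classes in the deployed WAR), the sample request/response files can be resolved as classpath resources instead of via user.dir, which behaves the same in local unit tests and under Tomcat. Spring's ClassPathResource does exactly that.

        import java.io.IOException;
        import java.io.InputStream;
        import org.springframework.core.io.ClassPathResource;

        public class SampleXmlHelper {
            // Resolves a sample XML file from the classpath: src/main/resources
            // during unit tests, WEB-INF/classes once the WAR is deployed.
            public static InputStream getResourceStream(String filename) throws IOException {
                return new ClassPathResource(filename).getInputStream();
            }
        }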

  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers using a C programme. A hexdump of the binary data shows that, after the first few data points, it starts again at a location 20000 hex addresses away. The hexdump output is as shown below:

        0000000 0000 0000 0000 0000 0000 0000 0000 0000
        *
        0020000 0000 0000 0053 0000 0064 0000 006b 0000
        0020010 0066 0000 0068 0000 0066 0000 005d 0000
        0020020 0087 0000 0059 0000 0062 0000 0066 0000
        ........ and so on...

    But when I read it into an array 'data' of long integers with the typical fread command

        fread(data, sizeof(*data), filelength/sizeof(*data), fd);

    the data array is filled with all zeros until it reaches the 20000 location. After that it reads the data correctly. Why is it reading regions where my file is not there? Or how can I make it read only my file, and not anything in between that is not in the file? I know it looks like a trivial problem, but I cannot figure it out even after googling one night. Can anyone suggest where I am doing it wrong?

    Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific). My C compiler is gcc.
