Search Results

Search found 40479 results on 1620 pages for 'binary files'.

  • Recursing data into a 2 dimensional array in PHP 5

    - by user315699
    I'm getting bamboozled by "for each" loops and two dimensional arrays, and I'm a PHP newb so please bear with me (and ignore any variables with the word "image" - it's all about the mp3s, I just didn't change it from the xml tutorial).

    I found a PHP function on the net that lists files in a directory, the output of which is:

        Array ( [0] => audio/1.mp3 [1] => audio/2.mp3 [2] => audio/3.mp3 [3] => audio/4.mp3 [4] => audio/5.mp3 )

    As expected. And another that lists some info about mp3 files:

        $mp3datafile = 'audio/1.mp3';
        $m = new mp3file($mp3datafile);
        $mp3dataArray = $m->get_metadata();
        print_r($mp3dataArray);
        unset($mp3dataArray);

    The output of which is:

        Array ( [Filesize] => 31972 [Encoding] => CBR [etc] )

    In order to automatically build RSS for a podcast, I need to generate XML for each item. So far so good. This is how I'm making the XML:

        foreach ($imagearray as $key => $val) {
            $tnsoundfile = $xml_generator->addChild('item');
            $tnsoundfile->addChild('title', $podcasttitle);
            $enclosure = $tnsoundfile->addChild('enclosure');
            $enclosure->addAttribute('url', $val); // that's the filename
            $enclosure->addAttribute('length', $mp3dataArray[Filesize]); // << Length is file length, not time. But later I also need $mp3dataArray[Length mm:ss] for duration tag.
            $enclosure->addAttribute('type', 'audio/mpeg');
            $tnsoundfile->addChild('guid', $path_to_image_dir.'/'.$val);
        }

    (The above has been truncated, I realise it's not proper XML right now, but it was just to show what was going on.) Perfect. But I need to do it for as many files as there are in the directory.

    So, I have an array of the names of the files in the directory in $mp3data, and I have an array of mp3 data in $mp3dataArray from one iteration of the get_metadata() function. If I do the following, then I get a nice list of the mp3 data of the 5 files in the directory:

        foreach ($mp3data as $key => $val) {
            $mp3datafile = $val;
            $m = new mp3file($mp3datafile);
            $mp3dataArray = $m->get_metadata();
            print_r($mp3dataArray);
            unset($mp3dataArray);
        }

    As expected. Where I'm struggling, and have been for most of the day in spite of reading many forums and tutorials, is how to populate the "second dimension" of the array, so that it goes through 1, 2, 3, 4 and 5.mp3 (or however many there are), extracts the metadata, then allows me to use it in the XML section above. Here's what I have:

        foreach ($mp3data as $key => $val) {
            $mp3datafile = $val;
            $m = new mp3file($mp3datafile);
            $mp3dataArray = $m->get_metadata();
            $mp3testarray = array($mp3dataArray);
        }
        print_r($mp3dataArray);

    Shouldn't that line print_r($mp3dataArray); give me a nice list of 5 lots of mp3 data, in the way it did when I recursed through the loop as before? Cos this is driving me nuts! It must be something so simple, and any help would be greatly appreciated. Thank you in advance.

  • XDocument + IEnumerable is causing out of memory exception in System.Xml.Linq.dll

    - by Manatherin
    Basically I have a program which, when it starts, loads a list of files (as FileInfo) and for each file in the list it loads an XML document (as XDocument). The program then reads data out of it into a container class (storing it as IEnumerables), at which point the XDocument goes out of scope.

    The program then exports the data from the container class to a database. After the export the container class goes out of scope; however, the garbage collector isn't clearing up the container class which, because it's storing the data as IEnumerable, seems to lead to the XDocument staying in memory (not sure if this is the reason, but the task manager is showing that the memory from the XDocument isn't being freed). As the program loops through multiple files, it eventually throws an out of memory exception.

    To mitigate this I've ended up using System.GC.Collect(); to force the garbage collector to run after the container goes out of scope. This is working, but my questions are: Is this the right thing to do? (Forcing the garbage collector to run seems a bit odd.) Is there a better way to make sure the XDocument memory is being disposed? Could there be a different reason, other than the IEnumerable, that the document memory isn't being freed? Thanks.

    Edit: Code samples.

    Container class:

        public IEnumerable<CustomClassOne> CustomClassOne { get; set; }
        public IEnumerable<CustomClassTwo> CustomClassTwo { get; set; }
        public IEnumerable<CustomClassThree> CustomClassThree { get; set; }
        ...
        public IEnumerable<CustomClassNine> CustomClassNine { get; set; }

    Custom class:

        public long VariableOne { get; set; }
        public int VariableTwo { get; set; }
        public DateTime VariableThree { get; set; }
        ...

    Anyway, that's the basic structure really. The custom classes are populated through the container class from the XML document. The filled structures themselves use very little memory. A container class is filled from one XML document, goes out of scope, and the next document is then loaded, e.g.:

        public static void ExportAll(IEnumerable<FileInfo> files)
        {
            foreach (FileInfo file in files)
            {
                ExportFile(file);
                //Temporary to clear memory
                System.GC.Collect();
            }
        }

        private static void ExportFile(FileInfo file)
        {
            ContainerClass containerClass = Reader.ReadXMLDocument(file);
            ExportContainerClass(containerClass);
            //Export simply dumps the data from the container class into a database
            //Container class (and any passed container classes) goes out of scope at end of export
        }

        public static ContainerClass ReadXMLDocument(FileInfo fileToRead)
        {
            XDocument document = GetXDocument(fileToRead);
            var containerClass = new ContainerClass();
            //ForEach customClass in containerClass
            //Read all data for customClass from XDocument
            return containerClass;
        }

    Forgot to mention this bit (not sure if it's relevant): the files can be compressed as .gz, so I have the GetXDocument() method to load them:

        private static XDocument GetXDocument(FileInfo fileToRead)
        {
            XDocument document;
            using (FileStream fileStream = new FileStream(fileToRead.FullName, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                if (String.Compare(fileToRead.Extension, ".gz", true) == 0)
                {
                    using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress))
                    {
                        document = XDocument.Load(zipStream);
                    }
                }
                else
                {
                    document = XDocument.Load(fileStream);
                }
                return document;
            }
        }

    Hope this is enough information. Thanks.

    Edit: The System.GC.Collect() is not working 100% of the time; sometimes the program seems to retain the XDocument. Anyone have any idea why this might be?
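
    One pattern worth checking, assuming the reader fills those IEnumerables with deferred LINQ-to-XML queries (that code isn't shown above): each deferred query captures the XDocument, keeping the whole document reachable for as long as the container lives. Materialising the results breaks that link. A minimal sketch with hypothetical element names (needs System.Linq and System.Xml.Linq):

        // Hypothetical reading code: a lazy query like this roots the XDocument...
        containerClass.CustomClassOne =
            document.Descendants("CustomClassOne")      // element name assumed
                    .Select(x => new CustomClassOne
                    {
                        VariableOne = (long)x.Element("VariableOne")
                    })
                    .ToList();  // ...while ToList() copies the data out, letting the document be collected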

  • Loading multiple copies of a group of DLLs in the same process

    - by george
    Background: I'm maintaining a plugin for an application, using Visual C++ 2003. The plugin is composed of several DLLs - there's the main DLL, the one that the application loads using LoadLibrary, and there are several utility DLLs that are used by the main DLL and by each other. Dependencies generally look like this:

        plugin.dll -> utilA.dll, utilB.dll
        utilA.dll  -> utilB.dll
        utilB.dll  -> utilA.dll, utilC.dll

    You get the picture. Some of the dependencies between the DLLs are load-time and some run-time. All the DLL files are stored in the executable's directory (not a requirement, just how it works now).

    The problem: there's a new requirement - running multiple instances of the plugin within the application. The application runs each instance of a plugin in its own thread, i.e. each thread calls into the plugin. The plugin's code, however, is anything but thread-safe - lots of global variables etc. Unfortunately, fixing the whole thing isn't currently an option, so I need a way to load multiple (at most 3) copies of the plugin's DLLs in the same process.

    Option 1: the distinct names approach. Create 3 copies of each DLL file, so that each file has a distinct name, e.g. plugin1.dll, plugin2.dll, plugin3.dll, utilA1.dll, utilA2.dll, utilA3.dll, utilB1.dll, etc. The application will load plugin1.dll, plugin2.dll and plugin3.dll. The files will be in the executable's directory. For each group of DLLs to know each other by name (so the inter-dependencies work), the names need to be known at compilation time - meaning the DLLs need to be compiled multiple times, each time with different output file names. Not very complicated, but I'd hate having 3 copies of the VS project files, and I don't like having to compile the same files over and over.

    Option 2: the side-by-side assemblies approach. Create 3 copies of the DLL files, each group in its own directory, and define each group as an assembly by putting an assembly manifest file in the directory, listing the plugin's DLLs. Each DLL will have an application manifest pointing to the assembly, so that the loader finds the copies of the utility DLLs that reside in the same directory. The manifest needs to be embedded for it to be found when a DLL is loaded using LoadLibrary. I'll use mt.exe from a later VS version for the job, since VS2003 has no built-in manifest embedding support. I've tried this approach with partial success - dependencies are found during load-time of the DLLs, but not when a DLL function is called that loads another DLL. This seems to be the expected behavior according to this article - a DLL's activation context is only used at the DLL's load-time, and afterwards it's deactivated and the process's activation context is used. I haven't yet tried working around this using ISOLATION_AWARE_ENABLED.

    Questions: Got any other options? Any quick & dirty solution will do. :-) Will ISOLATION_AWARE_ENABLED even work with VS2003? Comments will be greatly appreciated. Thanks!

  • How can I get more than one .jpg or .txt file from any folder?

    - by Phsika
    Dear sirs, I have two applications: Server.cs, which listens to a network stream, and Client.cs, which sends files. I want to send several files over one stream from a folder. For example, C:\folder has 3 jpg files; my client must send them all, and my server must receive them from the stream.

    Client.cs:

        private void btn_send2_Click(object sender, EventArgs e)
        {
            string[] paths = null;
            paths = System.IO.Directory.GetFiles(@"C:\folder" + @"\", "*.jpg", System.IO.SearchOption.AllDirectories);
            byte[] Dizi;
            TcpClient Gonder = new TcpClient("127.0.0.1", 51124);
            FileStream Dosya;
            FileInfo Dos;
            NetworkStream Akis;
            foreach (string path in paths)
            {
                Dosya = new FileStream(path, FileMode.OpenOrCreate);
                Dos = new FileInfo(path);
                Dizi = new byte[(int)Dos.Length];
                Dosya.Read(Dizi, 0, (int)Dos.Length);
                Akis = Gonder.GetStream();
                Akis.Write(Dizi, 0, (int)Dosya.Length);
                Gonder.Close();
                Akis.Flush();
                Dosya.Close();
            }
        }

    And I have Server.cs:

        void Dinle()
        {
            TcpListener server = null;
            try
            {
                Int32 port = 51124;
                IPAddress localAddr = IPAddress.Parse("127.0.0.1");
                server = new TcpListener(localAddr, port);
                server.Start();
                Byte[] bytes = new Byte[1024 * 250000];
                // string ReceivedPath = "C:/recieved";
                while (true)
                {
                    MessageBox.Show("Waiting for a connection... ");
                    TcpClient client = server.AcceptTcpClient();
                    MessageBox.Show("Connected!");
                    NetworkStream stream = client.GetStream();
                    if (stream.CanRead)
                    {
                        saveFileDialog1.ShowDialog(); // this part will change
                        string pathfolder = saveFileDialog1.FileName;
                        StreamWriter yaz = new StreamWriter(pathfolder);
                        string satir;
                        StreamReader oku = new StreamReader(stream);
                        while ((satir = oku.ReadLine()) != null)
                        {
                            satir = satir + (char)13 + (char)10;
                            yaz.WriteLine(satir);
                        }
                        oku.Close();
                        yaz.Close();
                        client.Close();
                    }
                }
            }
            catch (SocketException e)
            {
                Console.WriteLine("SocketException: {0}", e);
            }
            finally
            {
                // Stop listening for new clients.
                server.Stop();
            }
            Console.WriteLine("\nHit enter to continue...");
            Console.Read();
        }

    Please look at Client.cs: I collected all the files from C:\folder with

        paths = System.IO.Directory.GetFiles(@"C:\folder" + @"\", "*.jpg", System.IO.SearchOption.AllDirectories);

    How can my Server.cs get all the files from the stream?
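
    For what it's worth, one common way to send several files over a single connection is to frame them yourself: prefix each file with its name and length, so the receiver knows where one file ends and the next begins. A sketch under that assumption (BinaryWriter/BinaryReader over the network stream; paths and port are hypothetical):

        // Hypothetical framing: [file count][for each file: name, length, bytes]
        // Sender side
        using (var client = new TcpClient("127.0.0.1", 51124))
        using (var writer = new BinaryWriter(client.GetStream()))
        {
            string[] files = Directory.GetFiles(@"C:\folder", "*.jpg", SearchOption.AllDirectories);
            writer.Write(files.Length);                // how many files follow
            foreach (string path in files)
            {
                byte[] data = File.ReadAllBytes(path);
                writer.Write(Path.GetFileName(path));  // length-prefixed string
                writer.Write(data.Length);             // byte count of this file
                writer.Write(data);                    // raw file bytes
            }
        }

        // Receiver side (client obtained from server.AcceptTcpClient())
        using (var reader = new BinaryReader(client.GetStream()))
        {
            int count = reader.ReadInt32();
            for (int i = 0; i < count; i++)
            {
                string name = reader.ReadString();
                int length = reader.ReadInt32();
                byte[] data = reader.ReadBytes(length);
                File.WriteAllBytes(Path.Combine(@"C:\received", name), data);
            }
        }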

  • UnauthorizedAccessException on MemoryMappedFile in C# 4.

    - by Kevin Nisbet
    Hey, I wanted to play around with using a MemoryMappedFile to access an existing binary file. Is this even at all possible, or am I a crazy person? The idea would be to map the existing binary file directly to memory for some preferably higher-speed operations, or to at least see how these things work.

        using System.IO.MemoryMappedFiles;

        System.IO.FileInfo fi = new System.IO.FileInfo(@"C:\testparsercap.pcap");
        MemoryMappedFileSecurity sec = new MemoryMappedFileSecurity();
        System.IO.FileStream file = fi.Open(System.IO.FileMode.Open, System.IO.FileAccess.ReadWrite, System.IO.FileShare.ReadWrite);
        MemoryMappedFile mf = MemoryMappedFile.CreateFromFile(file, "testpcap", fi.Length, MemoryMappedFileAccess.Read, sec, System.IO.HandleInheritability.Inheritable, true);
        MemoryMappedViewAccessor FileMapView = mf.CreateViewAccessor();
        PcapHeader head = new PcapHeader();
        FileMapView.Read<PcapHeader>(0, out head);

    I get System.UnauthorizedAccessException was unhandled (Message=Access to the path is denied.) on the mf.CreateViewAccessor() line.

    I don't think it's file permissions, since I'm running as a nice insecure administrator user, and there aren't any other programs open that might have a read-lock on the file. This is on Vista with UAC disabled. If it's simply not possible and I missed something in the documentation, please let me know. I could barely find anything at all referencing this feature of .NET 4.0. Thanks!
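
    One detail that stands out (a guess rather than a confirmed diagnosis): the mapping is created with MemoryMappedFileAccess.Read, but the parameterless CreateViewAccessor() requests a read/write view, which a read-only mapping cannot grant. Asking for a read-only view explicitly would look like this:

        // The mapping above was created with MemoryMappedFileAccess.Read,
        // so the view has to be requested read-only as well.
        MemoryMappedViewAccessor view = mf.CreateViewAccessor(0, fi.Length, MemoryMappedFileAccess.Read);
        PcapHeader header;
        view.Read<PcapHeader>(0, out header);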

  • bind a WPF datagrid to a datatable

    - by Jim Thomas
    I have used the marvelous example posted at http://www.codeproject.com/KB/WPF/WPFDataGridExamples.aspx to bind a WPF datagrid to a datatable. The source code below compiles fine; it even runs and displays the contents of the InfoWork datatable in the WPF datagrid. Hooray! But the WPF page with the datagrid will not display in the designer. I get an incomprehensible error instead on my design page, which is shown at the end of this posting. I assume the designer is having some difficulty instantiating the dataview for display in the grid. How can I fix that?

    XAML code:

        xmlns:local="clr-namespace:InfoSeeker"

        <Window.Resources>
            <ObjectDataProvider x:Key="InfoWorkData" ObjectType="{x:Type local:InfoWorkData}" />
            <ObjectDataProvider x:Key="InfoWork" ObjectInstance="{StaticResource InfoWorkData}" MethodName="GetInfoWork" />
        </Window.Resources>

        <my:DataGrid DataContext="{Binding Source={StaticResource InfoWork}}"
                     AutoGenerateColumns="True" ItemsSource="{Binding}" Name="dataGrid1"
                     xmlns:my="http://schemas.microsoft.com/wpf/2008/toolkit" />

    C# code:

        namespace InfoSeeker
        {
            public class InfoWorkData
            {
                private InfoTableAdapters.InfoWorkTableAdapter infoAdapter;
                private Info infoDS;

                public InfoWorkData()
                {
                    infoDS = new Info();
                    infoAdapter = new InfoTableAdapters.InfoWorkTableAdapter();
                    infoAdapter.Fill(infoDS.InfoWork);
                }

                public DataView GetInfoWork()
                {
                    return infoDS.InfoWork.DefaultView;
                }
            }
        }

    Error shown in place of the designer page which has the grid on it:

        An unhandled exception has occurred: Type 'MS.Internal.Permissions.UserInitiatedNavigationPermission' in Assembly 'PresentationFramework, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' is not marked as serializable.
        at System.Runtime.Serialization.FormatterServices.InternalGetSerializableMembers(RuntimeType type)
        at System.Runtime.Serialization.FormatterServices.GetSerializableMembers(Type type, StreamingContext context)
        at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitMemberInfo()
        at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter)
        ...At: Ms.Internal.Designer.DesignerPane.LoadDesignerView()
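
    A hedged sketch of one common workaround, assuming the designer crash comes from the ObjectDataProvider instantiating InfoWorkData and hitting the database at design time: make the constructor a no-op when it runs under the designer.

        using System.ComponentModel;
        using System.Windows;

        public InfoWorkData()
        {
            // Skip the database work when the WPF designer instantiates this class.
            if (DesignerProperties.GetIsInDesignMode(new DependencyObject()))
                return;

            infoDS = new Info();
            infoAdapter = new InfoTableAdapters.InfoWorkTableAdapter();
            infoAdapter.Fill(infoDS.InfoWork);
        }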

  • Why am I getting a " instance has no attribute '__getitem__' " error?

    - by Kevin Yusko
    Here's the code:

        class BinaryTree:
            def __init__(self, rootObj):
                self.key = rootObj
                self.left = None
                self.right = None
                root = [self.key, self.left, self.right]

            def getRootVal(root):
                return root[0]

            def setRootVal(newVal):
                root[0] = newVal

            def getLeftChild(root):
                return root[1]

            def getRightChild(root):
                return root[2]

            def insertLeft(self, newNode):
                if self.left == None:
                    self.left = BinaryTree(newNode)
                else:
                    t = BinaryTree(newNode)
                    t.left = self.left
                    self.left = t

            def insertRight(self, newNode):
                if self.right == None:
                    self.right = BinaryTree(newNode)
                else:
                    t = BinaryTree(newNode)
                    t.right = self.right
                    self.right = t

        def buildParseTree(fpexp):
            fplist = fpexp.split()
            pStack = Stack()
            eTree = BinaryTree('')
            pStack.push(eTree)
            currentTree = eTree
            for i in fplist:
                if i == '(':
                    currentTree.insertLeft('')
                    pStack.push(currentTree)
                    currentTree = currentTree.getLeftChild()
                elif i not in '+-*/)':
                    currentTree.setRootVal(eval(i))
                    parent = pStack.pop()
                    currentTree = parent
                elif i in '+-*/':
                    currentTree.setRootVal(i)
                    currentTree.insertRight('')
                    pStack.push(currentTree)
                    currentTree = currentTree.getRightChild()
                elif i == ')':
                    currentTree = pStack.pop()
                else:
                    print "error: I don't recognize " + i
            return eTree

        def postorder(tree):
            if tree != None:
                postorder(tree.getLeftChild())
                postorder(tree.getRightChild())
                print tree.getRootVal()

        def preorder(self):
            print self.key
            if self.left:
                self.left.preorder()
            if self.right:
                self.right.preorder()

        def inorder(tree):
            if tree != None:
                inorder(tree.getLeftChild())
                print tree.getRootVal()
                inorder(tree.getRightChild())

        class Stack:
            def __init__(self):
                self.items = []

            def isEmpty(self):
                return self.items == []

            def push(self, item):
                self.items.append(item)

            def pop(self):
                return self.items.pop()

            def peek(self):
                return self.items[len(self.items)-1]

            def size(self):
                return len(self.items)

        def main():
            parseData = raw_input("Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) ")
            tree = buildParseTree(parseData)
            print("The post order is: ", + postorder(tree))
            print("The post order is: ", + postorder(tree))
            print("The post order is: ", + preorder(tree))
            print("The post order is: ", + inorder(tree))

        main()

    And here is the error:

        Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) ( 1 + 2 )
        Traceback (most recent call last):
          File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 108, in <module>
            main()
          File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 102, in main
            tree = buildParseTree(parseData)
          File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 46, in buildParseTree
            currentTree = currentTree.getLeftChild()
          File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 15, in getLeftChild
            return root[1]
        AttributeError: BinaryTree instance has no attribute '__getitem__'

  • Using Freepascal\Lazarus JSON Libraries

    - by Gizmo_the_Great
    I'm hoping for a bit of a "simpletons" demo/explanation for using Lazarus/Freepascal JSON parsing. I've asked a question here, but all the replies are "read this" and none of them are really helping me get a grasp, because the examples are a bit too in-depth and I'm seeking a very simple example to help me understand how it works.

    In brief, my program reads an untyped binary file in chunks of 4096 bytes. The raw data then gets converted to ASCII and stored in a string. It then goes through the variable looking for certain patterns, which, it turned out, are JSON data structures. I've currently coded the parsing the hard way using Pos and ExtractANSIString etc. But I've since learnt that there are JSON libraries for Lazarus & FPC, namely fcl-json, fpjson, jsonparser, jsonscanner etc.:

        https://bitbucket.org/reiniero/fpctwit/src
        http://fossies.org/unix/misc/fpcbuild-2.6.0.tar.gz:a/fpcbuild-2.6.0/fpcsrc/packages/fcl-json/src/
        http://svn.freepascal.org/cgi-bin/viewvc.cgi/trunk/packages/fcl-json/examples/

    However, I still can't quite work out HOW I read my string variable and parse it for JSON data, and then access those JSON structures. Can anyone give me a very simple example to help get me going? My code so far (without JSON) is something like this:

        try
          SourceFile.Position := 0;
          while TotalBytesRead < SourceFile.Size do
          begin
            BytesRead := SourceFile.Read(Buffer, sizeof(Buffer));
            inc(TotalBytesRead, BytesRead);
            // A custom function to strip out binary garbage leaving just ASCII readable text
            StringContent := StripNonAsciiExceptCRLF(Buffer);
            if Pos('MySearchValue', StringContent) > 0 then
            begin
              // Do the parsing. This is where I need to do the JSON stuff
              ...

  • WCF net.tcp bindings, message formats and security questions

    - by RemotecUk
    Hi, sorry for the stupid questions, but there are just some things about WCF I can't get my head around. I would be grateful for some advice on the following.

    At a very basic level, is it correct that WCF uses either binary (net.tcp), HTTP or MSMQ to transfer my message on the wire? However, is it true that in all cases, regardless of how the data is transferred, the message itself is in the SOAP format with headers and a body? So it's a sort of XML message that is transmitted in either HTTP/S or in a binary format.

    Is net.tcp a good choice for my client-server app? It's similar to a messenger app in that the clients are all remote users on the other side of the firewall to my server. Most things I am reading are telling me to use WS* and HTTP.

    Is net.tcp secured by default and without certificates - that is, people cannot listen on the wire and decode the data that's going to and from?

    Is it possible to send a username and password using net.tcp and without an installed certificate? If so, I presume I can hook this up to my membership provider and authenticate access to each method on my service contract implementation. I presume that with username and password security, the proxy is initialised with the username and password, and that this information is sent with every request. Then my membership provider will be invoked for each method call and do whatever it needs to do to get the authorisation for the method.

    Sorry for the dump of questions, but it would be great to know if I'm thinking the right way about how WCF works. Thanks.
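
    For reference on the username/password question, here is a sketch of how such credentials are typically wired up over net.tcp. Note that message security over TCP normally needs a service certificate to protect the credentials in transit, so this is not certificate-free (the service contract and address below are hypothetical):

        // Requires System.ServiceModel. IMyService is a hypothetical contract.
        var binding = new NetTcpBinding(SecurityMode.Message);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

        var factory = new ChannelFactory<IMyService>(binding,
            new EndpointAddress("net.tcp://example.com:8080/MyService")); // hypothetical address
        factory.Credentials.UserName.UserName = "alice";
        factory.Credentials.UserName.Password = "secret";
        IMyService proxy = factory.CreateChannel();  // credentials flow with each request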

  • Problem converting a byte array into datatable.

    - by kranthi
    Hi, in my aspx page I have an HTML input of type file which allows the user to browse for a spreadsheet. Once the user chooses the file to upload, I want to read the content of the spreadsheet and store it in a MySQL database table.

    I am using the following code to read the content of the uploaded file and convert it into a DataTable in order to insert it into the database table:

        if (filMyFile.PostedFile != null)
        {
            // Get a reference to PostedFile object
            HttpPostedFile myFile = filMyFile.PostedFile;

            // Get size of uploaded file
            int nFileLen = myFile.ContentLength;

            // make sure the size of the file is > 0
            if (nFileLen > 0)
            {
                // Allocate a buffer for reading of the file
                byte[] myData = new byte[nFileLen];

                // Read uploaded file from the Stream
                myFile.InputStream.Read(myData, 0, nFileLen);

                DataTable dt = new DataTable();
                MemoryStream st = new MemoryStream(myData);
                st.Position = 0;
                System.Runtime.Serialization.IFormatter formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
                dt = (DataTable)formatter.Deserialize(st);
            }
        }

    But I am getting the following error when I try to deserialize the byte array into a DataTable:

        Binary stream '0' does not contain a valid BinaryHeader. Possible causes are invalid stream or object version change between serialization and deserialization.

    Could someone please tell me what I am doing wrong? I've also tried converting the byte array into a string, then converting the string back to a byte array and converting that into a DataTable. That also throws the same error. Thanks.
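
    A note on the likely cause, plus a hedged sketch: BinaryFormatter can only deserialize streams that BinaryFormatter itself produced, and an uploaded spreadsheet is not such a stream - hence the BinaryHeader error. One workaround, assuming an .xls upload, is to save the file and let the Jet/Excel OLE DB provider read it into a DataTable (the temp path and sheet name below are assumptions):

        // Requires System.Data, System.Data.OleDb and System.IO.
        string tempPath = Path.Combine(Path.GetTempPath(), "upload.xls"); // hypothetical temp location
        myFile.SaveAs(tempPath);

        string conn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + tempPath +
                      ";Extended Properties=\"Excel 8.0;HDR=Yes\"";
        var dt = new DataTable();
        using (var da = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn)) // sheet name assumed
        {
            da.Fill(dt); // dt now holds the spreadsheet rows
        }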

  • Failure to register .dll with regsvr32 - only in Release build.

    - by Hendrik
    Hi, I'm having a weird problem when trying to register the .dll I created using regsvr32. During development everything went fine; the debug version registers and works fine. Now I wanted to create a Release version, but that version does not register anymore. regsvr32 comes up with the following error:

        The module "mpegsplitter.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified procedure could not be found.

    Some research brought me to the Dependency Walker, which does tell me this error:

        At least one module has an unresolved import due to a missing export function in an implicitly dependent module.

    It also shows a dependency on "crtdll.dll" that the debug version does not have (the function view shows some functions that normally should be in ole32.dll), which is colored red'ish.

    So far so good; I guess it's somehow related to what the Dependency Walker shows there. But where do I go from here? How do I fix it? Any help would be greatly appreciated - this has been keeping me busy for several hours already. Thanks!

  • Retrieving image from database using Ajax

    - by ama
    I'm trying to read an image from the database with Ajax, but I can't get the xmlhttp.responseText into the img src. The image is saved as binary data in the database and is also retrieved as binary data.

    I'm using Ajax in JSP because I want to give the user the ability to upload images, and I will show the last uploaded image; on a mouse-over action the Ajax is activated and gets the image back. The problem is in reading the image from the response.

    This is the Ajax function:

        function ajaxFunction(path) {
            if (xmlhttp) {
                var s = path;
                xmlhttp.open("GET", s, true);
                xmlhttp.onreadystatechange = handleServerResponse;
                xmlhttp.send(null);
            }
        }

        function handleServerResponse() {
            if (xmlhttp.readyState == 4) {
                var Image = document.getElementById(Image_Element_Name);
                document.getElementById(Image_Element_Name).src = "data:" + xmlhttp.responseText;
            }
        }

    I also got an exception on the server:

        10415315 [TP-Processor1] WARN core.MsgContext - Error sending end packet
        java.net.SocketException: Broken pipe
            at java.net.SocketOutputStream.socketWrite0(Native Method)
            at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
            at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
            at org.apache.jk.common.ChannelSocket.send(ChannelSocket.java:537)
            at org.apache.jk.common.JkInputStream.endMessage(JkInputStream.java:127)
            at org.apache.jk.core.MsgContext.action(MsgContext.java:302)
            at org.apache.coyote.Response.action(Response.java:183)
            at org.apache.coyote.Response.finish(Response.java:305)
            at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:281)
            at org.apache.catalina.connector.Response.finishResponse(Response.java:478)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:154)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:200)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:773)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:895)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
            at java.lang.Thread.run(Thread.java:619)
        10415316 [TP-Processor1] WARN common.ChannelSocket - processCallbacks status 2

  • Symbolicate ad hoc iPhone app crashes

    - by gotye
    Hey everyone, I can't manage to get my crash logs symbolicated ... I read the part below:

        Given a crash report, the matching binary, and its .dSYM file, symbolication is relatively easy. The Xcode Organizer window has a tab for crash reports of the currently selected device. You can view externally received crash reports in this tab - just place them in the appropriate directory. This is the same as the Mac OS X directory described in the first section. It doesn't matter which device you have tethered, but the directory in which you place the crash report must be the directory for the tethered and selected device. It is not necessary to place the binary and .dSYM file in any particular location. Xcode uses Spotlight and the UUID to locate the correct files. It is necessary, though, that both files be in the same directory and that this directory is one that is indexed by Spotlight. Anywhere in your home directory should be fine.

    But it doesn't work for me ... here is what I did: I opened the Xcode Organizer, and I had my iPhone device there with crash logs. The app and dSYM files are in my Xcode project folder, which is on my desktop. All the rest should be automatic, right? But the crash logs aren't symbolicated yet ...

    Any comments welcome. Cheers. Gotye.

  • NSKeyedArchiver on NSArray has large size overhead

    - by redguy
    I'm using NSKeyedArchiver in a Mac OS X program which generates data for an iPhone application. I found out that, by default, the resulting archives are much bigger than I expected. An example:

        NSMutableArray *ar = [NSMutableArray arrayWithCapacity:10];
        for (int i = 0; i < 100000; i++) {
            NSString *s = [NSString stringWithFormat:@"item%06d", i];
            [ar addObject:s];
        }
        [NSKeyedArchiver archiveRootObject:ar toFile:@"NSKeyedArchiver.test"];

    This stores 10 * 100000 = 1M bytes of useful data, yet the size of the resulting file is almost three megabytes. The overhead seems to grow with the number of items in the array. In this case, for 1000 items, the file was about 22k. "file" reports that it is an "Apple binary property list" (not the XML format).

    Is there a simple way to prevent this huge overhead? I wanted to use NSKeyedArchiver for the simplicity it provides. I could write data to my own, non-generic, binary format, but that's not very elegant. Also, aggregating the data into large chunks and feeding these to NSKeyedArchiver should work, but again, that kinda beats the point of using a simple & easy & ready-to-use archiver. Am I missing some method call or usage pattern that would reduce this overhead?

  • Qmake does not specify a valid qt

    - by Comptrol
    After installing the Qt SDK for open source C++ development on Mac OS by following the respective steps:

        Note for the binary package: If you have the binary package, simply double-click on the Qt.mpkg and follow the instructions to install Qt.

    Yes, that is all I have done to install Qt on Mac OS X. Everything was going fine until I ran a sample application, whose compile output resulted in:

        No valid Qt version set. Set one in Preferences
        Error while building project qtilk
        When executing build step 'QMake'
        Canceled build.

    Then I tried to change the respective Qt version in Preferences, and when I hovered over the path I realized my mkspec isn't set. Then I tried querying qmake with qmake -query:

        QT_INSTALL_PREFIX:/
        QT_INSTALL_DATA:/usr/local/Qt4.6
        QT_INSTALL_DOCS:/Developer/Documentation/Qt
        QT_INSTALL_HEADERS:/usr/include
        QT_INSTALL_LIBS:/Library/Frameworks
        QT_INSTALL_BINS:/Developer/Tools/Qt
        QT_INSTALL_PLUGINS:/Developer/Applications/Qt/plugins
        QT_INSTALL_TRANSLATIONS:/Developer/Applications/Qt/translations
        QT_INSTALL_CONFIGURATION:/Library/Preferences/Qt
        QT_INSTALL_EXAMPLES:/Developer/Examples/Qt/
        QT_INSTALL_DEMOS:/Developer/Examples/Qt/Demos
        QMAKE_MKSPECS:/usr/local/Qt4.6/mkspecs
        QMAKE_VERSION:2.01a
        QT_VERSION:4.6.2

    QMAKE_MKSPECS seems to be set here?? Will setting my mkspec solve my building problem? I tried setting it by typing export mkspec=macx-g++. Still, mkspec seems not to be set to anything. I am all ears waiting for your answers. Thanks in advance.

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic Hadoop master/slave cluster setup and am able to run mapreduce programs (including Python) on the cluster. Now I am trying to run a Python code which accesses a C binary, so I am using the subprocess module. I am able to use Hadoop streaming for a normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for the packaging, but the code still fails to run.

        packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
        JarBuilder.addNamedStream hello
        ...
        12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
        12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
        12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
        12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
        12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
        12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
        Streaming Job Failed!

    The command I am trying is:

        hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose

    where hello is the C executable. It is a simple helloworld program which I am using to check the basic functioning.

    My Python code is:

        #!/usr/bin/env python
        import subprocess
        subprocess.call(["./hello"])

    Any help with how to get the executable to run with Python in Hadoop streaming, or help with debugging this, will get me forward in this. Thanks, Ganesh

  • Moses v1.0 multi language ini file

    - by Milan Kocic
    I was working with mosesserver 0.91 and everything worked fine, but now there is version 1.0 and nothing is the same as before. Here is my situation: I want to have multi-language translation from Arabic to English and from English to Arabic. All the data and configuration files I have work with the 0.91 version of mosesserver. Here is my config file:

        #########################
        ### MOSES CONFIG FILE ###
        #########################

        # D - decoding path, R - reordering model, L - language model
        [translation-systems]
        ar-en D 0 R 0 L 0
        en-ar D 1 R 1 L 1

        # input factors
        [input-factors]
        0

        # mapping steps
        [mapping]
        0 T 0
        1 T 1

        # translation tables: table type (hierarchical(0), textual (0), binary (1)), source-factors, target-factors, number of scores, file
        # OLD FORMAT is still handled for back-compatibility
        # OLD FORMAT translation tables: source-factors, target-factors, number of scores, file
        # OLD FORMAT a binary table type (1) is assumed
        [ttable-file]
        1 0 0 5 /mnt/models/ar-en/phrase-table/phrase-table
        1 0 0 5 /mnt/models/en-ar/phrase-table/phrase-table

        # no generation models, no generation-file section

        # language models: type(srilm/irstlm), factors, order, file
        [lmodel-file]
        1 0 5 /mnt/models/ar-en/language-model/en.qblm.mm
        1 0 5 /mnt/models/en-ar/language-model/ar.lm.d1.blm.mm

        # limit on how many phrase translations e for each phrase f are loaded
        # 0 = all elements loaded
        [ttable-limit]
        20

        # distortion (reordering) files
        [distortion-file]
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/ar-en/reordering-table/reordering-table.wbe-msd-bidirectional-fe.gz
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/en-ar/reordering-model/reordering-table.wbe-msd-bidirectional-fe.gz

        # distortion (reordering) weight
        [weight-d]
        0.3
        0.3

        # lexicalised distortion weights
        [weight-lr]
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3

        # language model weights
        [weight-l]
        0.5000
        0.5000

        # translation model weights
        [weight-t]
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2

        # no generation models, no weight-generation section

        # word penalty
        [weight-w]
        -1
        -1

        [distortion-limit]
        12

    So please, can someone help me and rewrite this config file so it can work in version 1.0? And I need some Python sample code for translation. I am using XML-RPC in Python, and earlier I sent requests with:

        import xmlrpclib
        client = xmlrpclib.ServerProxy('http://localhost:8080')
        client.translate({'text': 'some text', 'system': 'en-ar'})

    but now it seems there is no more 'system' parameter, and Moses always uses the default settings.

  • Bundle a Python app as a single file to support add-ons or extensions?

    - by Brandon Craig Rhodes
    There are several utilities - all with different procedures, limitations, and target operating systems - for taking a Python package and all of its dependencies and turning them into a single binary program that is easy to ship to customers:

        http://wiki.python.org/moin/Freeze
        http://www.pyinstaller.org/
        http://www.py2exe.org/
        http://svn.pythonmac.org/py2app/py2app/trunk/doc/index.html

    My situation goes one step further: third-party developers will want to write plug-ins, extensions, or add-ons for my application. It is, of course, a daunting question how users on platforms like Windows would most easily install plugins or addons in such a way that my app can easily discover that they have been installed. But beyond that basic question is another: how can a third-party developer bundle their extension with whatever libraries the extension itself needs (which might be binary modules, like lxml) in such a way that the plugin's dependencies become available for import at the same time that the plugin becomes available?

    How can this be approached? Will my application need its own plug-in area on disk and its own plug-in registry to make this tractable? Or are there general mechanisms, that I could avoid writing myself, that would allow an app that is distributed as a single executable to look around and find plugins that are also installed as single files?

  • Java File IO Compendium

    - by Warren Taylor
    I've worked in and around Java for nigh on a decade, but have somehow managed to avoid doing serious work with files. Mostly I've written database-driven applications, but occasionally even those require some file IO. Since I do it so rarely, I end up googling around for quite some time to figure out the exact incantation that Java requires to read a file into a byte[], char[], String or whatever I need at the time.

    For a 'once and for all' list, I'd like to see all of the usual ways to get data from a file into Java, or vice versa. There will be a fair bit of overlap, but the point is to define all of the subtly different variants that are out there. For example:

        Read/write a text file from/to a single String.
        Read a text file line by line.
        Read/write a binary file from/to a single byte[].
        Read a binary file into a byte[] of size x, one chunk at a time.

    The goal is to show concise ways to do each of these. Samples do not need to handle missing files or other errors, as that is generally domain-specific. Feel free to suggest more IO tasks that are somewhat common and I have neglected to mention.

  • How can I read a DBF file with incorrectly defined column data types using ADO.NET?

    - by Jason
    I have several DBF files generated by a third party that I need to be able to query. I am having trouble because all of the column types have been defined as characters, but the data within some of these fields actually contains binary data. If I try to read these fields using an OleDbDataReader as anything other than a string or character array, an InvalidCastException is thrown, but I need to be able to read them as a binary value, or at least cast/convert them after they are read. The columns that actually DO contain text are being returned as expected.

    For example, the very first column is defined as a character field with a length of 2 bytes, but the field contains a 16-bit integer. I have written the following test code to read the first column and convert it to the appropriate data type, but the value is not coming out right. The first row of the database has a value of 17365 (0x43D5) in the first column. Running the following code, what I end up getting is 17215 (0x433F).

    I'm pretty sure it has to do with using the ASCII encoding to get the bytes from the string returned by the data reader, but I'm not sure of another way to get the value into the format that I need, other than to write my own DBF reader and bypass ADO.NET altogether, which I don't want to do unless I absolutely have to. Any help would be greatly appreciated.

        byte[] c0;
        int i0;
        string con = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\ASTM;Extended Properties=dBASE III;User ID=Admin;Password=;";
        using (OleDbConnection c = new OleDbConnection(con))
        {
            c.Open();
            OleDbCommand cmd = c.CreateCommand();
            cmd.CommandText = "SELECT * FROM astm2007";
            OleDbDataReader dr = cmd.ExecuteReader();
            while (dr.Read())
            {
                c0 = Encoding.ASCII.GetBytes(dr.GetValue(0).ToString());
                i0 = BitConverter.ToInt16(c0, 0);
            }
            dr.Dispose();
        }
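
    The suspicion about the encoding is well founded: Encoding.ASCII maps every byte above 0x7F to '?' (0x3F), which is exactly where the 0x3F in 0x433F comes from. A hedged sketch of one way around it - re-encode through a code page that round-trips all 256 byte values, such as ISO-8859-1 (code page 28591). This assumes the provider itself decoded the field with a single-byte code page, which is worth verifying against the DBF's actual code page (dBASE files often use OEM pages like 437):

        using System.Text;

        // ISO-8859-1 maps bytes 0x00-0xFF to the same Unicode code points,
        // so chars read back from the provider convert to the original
        // bytes without loss.
        Encoding latin1 = Encoding.GetEncoding(28591);
        byte[] raw = latin1.GetBytes(dr.GetValue(0).ToString());
        short value = BitConverter.ToInt16(raw, 0);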

  • priority queue with limited space: looking for a good algorithm

    - by SigTerm
    This is not homework. I'm using a small "priority queue" (implemented as an array at the moment) for storing the last N items with the smallest value. This is a bit slow - O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any items that wouldn't fit into the array, but I still would like to reduce the number of operations further. I'm looking for a priority queue algorithm that matches the following requirements (a candidate sketch follows after the list):

        The queue can be implemented as an array, which has fixed size and _cannot_ grow. Dynamic memory allocation during any queue operation is strictly forbidden.
        Anything that doesn't fit into the array is discarded, but the queue keeps all the smallest elements ever encountered.
        O(log(N)) insertion time (i.e. adding an element into the queue should take up to O(log(N))).
        (Optional) O(1) access to the *largest* item in the queue (the queue stores the *smallest* items, so the largest item will be discarded first, and I'll need it to reduce the number of operations).
        Easy to implement/understand. Ideally - something similar to binary search: once you understand it, you remember it forever.

    Elements need not be sorted in any way. I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once. So technically it doesn't have to be a queue; I just need the N last smallest values to be stored.

    I initially thought about using binary heaps (they can be easily implemented via arrays), but apparently they don't behave well when the array can't grow anymore. Linked lists and arrays will require extra time for moving things around. The STL priority queue grows and uses dynamic allocation (I may be wrong about that, though). So, any other ideas?
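
    For what it's worth, a bounded max-heap over a preallocated array does meet these requirements: the root is the largest of the N smallest values kept so far (O(1) access), and a new value is either discarded or swapped into the root followed by one sift-down (O(log N), no allocation). A purely illustrative sketch in C#:

        // Fixed-capacity max-heap holding the N smallest values seen so far.
        class BoundedMinKeeper
        {
            private readonly int[] heap;   // preallocated once; never grows
            private int count;

            public BoundedMinKeeper(int capacity) { heap = new int[capacity]; }

            public int Largest => heap[0]; // O(1): largest of the kept values

            public void Offer(int value)
            {
                if (count < heap.Length)          // still filling: ordinary heap insert
                {
                    int i = count++;
                    heap[i] = value;
                    while (i > 0 && heap[(i - 1) / 2] < heap[i])   // sift up
                    {
                        (heap[i], heap[(i - 1) / 2]) = (heap[(i - 1) / 2], heap[i]);
                        i = (i - 1) / 2;
                    }
                }
                else if (value < heap[0])         // smaller than the largest kept value
                {
                    heap[0] = value;              // replace root, then sift down
                    int i = 0;
                    while (true)
                    {
                        int l = 2 * i + 1, r = l + 1, m = i;
                        if (l < count && heap[l] > heap[m]) m = l;
                        if (r < count && heap[r] > heap[m]) m = r;
                        if (m == i) break;
                        (heap[i], heap[m]) = (heap[m], heap[i]);
                        i = m;
                    }
                }
                // otherwise: discard; the value is not among the N smallest
            }
        }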

  • How to mimic built-in .NET serialization idioms?

    - by Matt Enright
    I have a library (written in C#) for which I need to read/write representations of my objects to disk (or to any Stream) in a particular binary format (to ensure compatibility with C/Java library implementations). The format requires a fair amount of bit-packing and some DEFLATE'd bytestreams.

    I would like my library, however, to be as idiomatically .NET as possible, and so would like to provide an API as close as possible to the normal binary serialization process. I'm aware of the ability to implement the IFormatter interface, but given that I really am unable to reuse any part of the built-in serialization stack, is it worth doing this, or will it just bring unnecessary overhead? In other words:

        Implement IFormatter and co., OR
        Just provide "Serialize"/"Deserialize" methods that act on a Stream?

    A good point was brought up below about needing the serialization semantics for any case involving remoting. In a case where using MarshalByRef objects is feasible, I'm pretty sure this won't be an issue, so leaving that aside: are there any benefits or drawbacks to using ISerializable/IFormatter versus a custom stack (or is my understanding of remoting incorrect)?
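
    For concreteness, here is a sketch of what the second option's surface might look like (the type name MyMessage is hypothetical; the point is that a plain Serialize/Deserialize pair over a Stream composes naturally with the DEFLATE requirement):

        using System.IO;
        using System.IO.Compression;

        public static class WireFormat
        {
            // Serialize: bit-pack the object graph into the given stream.
            public static void Serialize(Stream output, MyMessage value)
            {
                using (var deflate = new DeflateStream(output, CompressionMode.Compress, leaveOpen: true))
                {
                    // ... write bit-packed fields of `value` here ...
                }
            }

            // Deserialize: the inverse, reading the same layout back.
            public static MyMessage Deserialize(Stream input)
            {
                using (var deflate = new DeflateStream(input, CompressionMode.Decompress, leaveOpen: true))
                {
                    // ... read bit-packed fields here ...
                    return new MyMessage();
                }
            }
        }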

  • What am I doing wrong with HttpResponse content and headers when downloading a file?

    - by Slauma
    I want to download a PDF file from a SQL Server database, where it is stored in a binary column. There is a LinkButton on an aspx page. The event handler of this button looks like this:

        protected void LinkButtonDownload(object sender, EventArgs e)
        {
            ...
            byte[] aByteArray;
            // Read binary data from database into this ByteArray
            // aByteArray has the size: 55406 byte

            Response.ClearHeaders();
            Response.ClearContent();
            Response.BufferOutput = true;
            Response.AddHeader("Content-Disposition", "attachment; filename=" + "12345.pdf");
            Response.ContentType = "application/pdf";

            using (BinaryWriter aWriter = new BinaryWriter(Response.OutputStream))
            {
                aWriter.Write(aByteArray, 0, aByteArray.Length);
            }
        }

    A "File Open/Save dialog" is offered in my browser. When I store this file "12345.pdf" to disk, the file has a size of 71523 bytes. The additional 16 kB at the end of the PDF file are the HTML code of my page (as I can see when I view the file in an editor).

    I am confused, because I believed that ClearContent and ClearHeaders would ensure that the page content is not sent together with the file content. What am I doing wrong here? Thanks for help!
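
    The likely explanation: ClearContent clears only what is already buffered; it does not stop the rest of the page life cycle from rendering, and that later rendering is what appends the HTML. The usual fix - a sketch - is to end the response (or complete the request) once the bytes are written:

        // After writing the PDF bytes, stop the page from rendering its own HTML.
        Response.BinaryWrite(aByteArray);   // equivalent to the BinaryWriter above
        Response.Flush();
        Response.End();                     // throws ThreadAbortException internally; alternatively:
        // HttpContext.Current.ApplicationInstance.CompleteRequest();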

  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web applications to production. Currently, that group has access to our code repository in VSS and uses VSS to deploy code that has been checked in.

    How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to an IIS virtual folder, but I don't think we can expect that the deployment group will be purchasing a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that that group would need to perform the deployment? Would a Team Explorer client access license suffice?

    I am aware that Team System has functionality to automate the building of an application. Do people typically deploy to production by copying aspx files and DLLs from the QA environment to production, or do you normally deploy from TFS, or even from VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release - or that those files should be checked into TFS and then deployed from TFS, assuming you can deploy from TFS.

    What confuses me is whether bin (binary) files that are local to the project go into TFS. If so, doesn't this create problems for other developers, in that only one developer - the one with the binary checked out - can actually debug, because debugging requires write access to the binaries? Does this mean that the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries HAVE to be added to TFS. Are they added as a separate (compiled) application node? If so, this sounds real ugly. I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number?

    Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments, in particular using TFS?

  • Issues with reversing bit shifts that roll over the maximum byte size?

    - by Terri
    I have a string of binary numbers that was originally a regular string and will return to being a regular string after some bit manipulation. I'm trying to do a simple Caesar shift on the binary string, and it needs to be reversible. I've done this with this method:

        public static String cShift(String ptxt, int addFactor) {
            String ascii = "";
            for (int i = 0; i < ptxt.length(); i += 8) {
                int character = Integer.parseInt(ptxt.substring(i, i + 8), 2);
                byte sum = (byte) (character + addFactor);
                ascii += (char) sum;
            }
            String returnToBinary = convertToBinary(ascii);
            return returnToBinary;
        }

    This works fine in some cases. However, I think that when a value rolls over being representable by one byte, the shift becomes irreversible. On the test string "test!22*F ", with an addFactor of 12, the string becomes irreversible. Why is that, and how can I stop it?
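
    The usual way to make such a shift reversible is to do the addition modulo 256 on unsigned values, rather than letting the cast to a signed byte fold the overflow into negative numbers (which then sign-extend when converted to a char). A small illustrative sketch in C# - the same arithmetic works in Java with & 0xFF:

        // Sketch: encrypt/decrypt as addition/subtraction mod 256.
        // e.g. 0xF9 + 12 wraps to 0x05, and 0x05 - 12 wraps back to 0xF9;
        // no information is lost because every value stays in 0..255.
        static byte Shift(byte b, int addFactor)
        {
            return (byte)(((b + addFactor) % 256 + 256) % 256); // handles negative factors too
        }

        static byte Unshift(byte b, int addFactor)
        {
            return Shift(b, -addFactor); // exact inverse of Shift
        }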
