Search Results

Search found 68536 results on 2742 pages for 'pst file'.

Page 638/2742

  • How can I add the "--watch" flag to this TextMate snippet?

    - by Jannis
    I love TextMate as my editor for all things web, so I'd like to use a snippet with style.less files to take advantage of the .less way of compiling .css files on the fly, using the native $ lessc {filepath} --watch command suggested in the less documentation (link). My current TextMate snippet (thanks to whoever wrote the LESS TM Bundle!) works well for writing the currently opened .less file to the .css file, but I'd like to take advantage of the --watch parameter so that every change to the .less file gets automatically compiled into the .css file. This works well when using the Terminal command line, so I am sure it must be possible in an adapted version of the current LESS command for TextMate, since that only invokes the command to compile the file. So how do I add the --watch flag to this command?

        #!/usr/bin/env ruby
        file = STDIN.read[/lessc: ([^*]+\.less)/, 1] || ENV["TM_FILEPATH"]
        system("lessc \"#{file}\"")

    I assume it should be something like:

        #!/usr/bin/env ruby
        file = STDIN.read[/lessc: ([^*]+\.less)/, 1] || ENV["TM_FILEPATH"]
        system("lessc \"#{file}\" --watch")

    But doing so only crashes TextMate. Any ideas would be much appreciated. Thanks for reading. Jannis

    Read the article

  • calling asp.net mvc action method using jquery post method expires the session

    - by nccsbim071
    Hi, I have a website where I provide a link. On clicking the link, a controller action method is called to generate a zip file. After creation of the zip file is done, I show the link to download the zip file by replacing the link that creates the zip with the link that downloads it. The problem is that after zip file creation is over and the link is shown, when the user clicks on the link to download the zip file, they are sent to login. After providing correct credentials on the login page they are prompted to download the zip file. They should not be sent to the login page. In the action that generates the zip file I haven't abandoned the session or done anything that abandons the session. The user should not be sent to the login page after successful creation of the zip file; the user should be able to download the file without logging in again. I searched the internet on this problem, but I did not find any solution. In one of the blogs written by Hanselman I found this statement that describes the problem with the session: "Is some other thing like an Ajax call or IE's Content Advisor simultaneously hitting the default page or login page and causing a race condition that calls Session.Abandon? (It's happened before!)" So I thought there might be some problem with the Ajax call that causes the session to expire, but I don't know what is happening. Any help please, thanks.

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, using the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the exact same object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading such a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speed-ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
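
    A minimal sketch of the kind of comparison worth running before deciding; the dictionary contents below are placeholders shaped like the question's data, not measurements from it:

        import json
        import pickle
        import time

        # stand-in data: 54,000 keys of plain builtin types
        data_points = dict(("point_%d" % i, {"values": [1.0, 2.0], "label": "x"}) for i in range(54000))

        # plain json without the jsonpickle layer
        with open("points.json", "w") as f:
            json.dump(data_points, f)

        # pickle with the highest protocol writes a compact binary file
        with open("points.pkl", "wb") as f:
            pickle.dump(data_points, f, pickle.HIGHEST_PROTOCOL)

        def time_load(path, loader, mode):
            # time a single load of one serialized file
            start = time.time()
            with open(path, mode) as f:
                loader(f)
            return time.time() - start

        print "json:  ", time_load("points.json", json.load, "r")
        print "pickle:", time_load("points.pkl", pickle.load, "rb")

    One design note: jsonpickle adds type metadata on top of the JSON, so for dictionaries of builtin types, dumping with the plain json module and loading with json.load is usually both smaller and quicker, while keeping the file human readable.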

    Read the article

  • Django upload failing on request data read error

    - by Jake
    Hi all, I've got a Django app that accepts uploads from jQuery Uploadify, a jQuery plugin that uses Flash to upload files and show a progress bar. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below.

        exceptions.IOError: request data read error
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
            self._load_post_and_files()
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
            self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
        File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
            return parser.parse()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
            for chunk in field_stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
            output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
            for bytes in stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
            output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
            data = self.flo.read(self.chunk_size)
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
            return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",) in case it was the memory upload handler, but it made no difference. Does anyone know how to fix this?
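
    A minimal sketch of a view that forces disk-based handling per request, useful for ruling the upload handler out; the view name, save path and the "Filedata" field name (Uploadify's usual default) are assumptions, not taken from the question:

        from django.core.files.uploadhandler import TemporaryFileUploadHandler
        from django.http import HttpResponse

        def upload(request):
            # switch this request to disk-based handling before request.POST/FILES is read
            request.upload_handlers = [TemporaryFileUploadHandler()]
            uploaded = request.FILES.get("Filedata")
            if uploaded is None:
                return HttpResponse("no file", status=400)
            destination = open("/tmp/%s" % uploaded.name, "wb")
            try:
                # stream in chunks so large files never sit fully in memory
                for chunk in uploaded.chunks():
                    destination.write(chunk)
            finally:
                destination.close()
            return HttpResponse("ok")

    If the failure persists even with disk-based handling, a cut-off at a fixed byte count usually points at whatever sits in front of Django (for example Apache's LimitRequestBody or nginx's client_max_body_size) rather than at the view itself.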

    Read the article

  • Django and Google App Engine Helper not finding the ipaddr module.

    - by Phil
    I'm trying to get Django running on GAE using this tutorial. When I run python manage.py runserver I get the stack trace below. I'm new to both Django and Python so I don't know what my next steps are (this is Ubuntu Jaunty, btw). It seems Django isn't finding the GAE module ipaddr, which comes with SDK 1.3.1. How do I get Django to find this module?

        /home/username/bin/google_appengine/google/appengine/api/datastore_file_stub.py:40: DeprecationWarning: the md5 module is deprecated; use hashlib instead
          import md5
        /home/username/bin/google_appengine/google/appengine/api/memcache/__init__.py:31: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
          import sha
        Traceback (most recent call last):
          File "manage.py", line 18, in <module>
            InstallAppengineHelperForDjango()
          File "/home/username/Development/GAE/myapp/appengine_django/__init__.py", line 543, in InstallAppengineHelperForDjango
            InstallDjangoModuleReplacements()
          File "/home/username/Development/GAE/myapp/appengine_django/__init__.py", line 260, in InstallDjangoModuleReplacements
            import django.db
          File "/home/username/Development/GAE/myapp/django/db/__init__.py", line 57, in <module>
            'TIME_ZONE': settings.TIME_ZONE,
          File "/home/username/Development/GAE/myapp/appengine_django/db/base.py", line 117, in __init__
            self._setup_stubs()
          File "/home/username/Development/GAE/myapp/appengine_django/db/base.py", line 128, in _setup_stubs
            from google.appengine.tools import dev_appserver_main
          File "/home/username/bin/google_appengine/google/appengine/tools/dev_appserver_main.py", line 82, in <module>
            from google.appengine.tools import appcfg
          File "/home/username/bin/google_appengine/google/appengine/tools/appcfg.py", line 53, in <module>
            from google.appengine.api import dosinfo
          File "/home/username/bin/google_appengine/google/appengine/api/dosinfo.py", line 25, in <module>
            import ipaddr
        ImportError: No module named ipaddr
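
    A minimal sketch of the usual workaround: put the SDK's bundled copy of ipaddr on sys.path near the top of manage.py, before the helper is installed. The install location and the lib/ipaddr layout are assumptions about SDK 1.3.x, not verified here:

        import os
        import sys

        SDK_PATH = "/home/username/bin/google_appengine"  # assumed install location of the SDK

        # make the SDK itself and its bundled ipaddr package importable
        # before InstallAppengineHelperForDjango() pulls in google.appengine.tools.appcfg
        sys.path.insert(0, SDK_PATH)
        sys.path.insert(0, os.path.join(SDK_PATH, "lib", "ipaddr"))

        # ...the rest of manage.py (InstallAppengineHelperForDjango(), etc.) stays as before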

    Read the article

  • Can I load data (Google App Engine) from http://localhost:8100/remote_api?

    - by zjm1126
    I can download data from GAE (http://zjm1126.appspot.com/remote_api). This is the code:

        appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv

    and it is successful:

        D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv
        Downloading data records.
        [INFO ] Logging to bulkloader-log-20100618.162421
        [INFO ] Throttling transfers:
        [INFO ] Bandwidth: 250000 bytes/second
        [INFO ] HTTP connections: 8/second
        [INFO ] Entities inserted/fetched/modified: 20/second
        [INFO ] Batch Size: 10
        [INFO ] Opening database: bulkloader-progress-20100618.162421.sql3
        [INFO ] Opening database: bulkloader-results-20100618.162421.sql3
        [INFO ] Connecting to zjm1126.appspot.com/remote_api
        Please enter login credentials for zjm1126.appspot.com
        Email: [email protected]
        Password for [email protected]:
        [INFO ] Downloading kinds: [u'LogText', u'Greeting', u'Forum', u'Thread']
        ....
        [INFO ] Have 0 entities, 0 previously transferred
        [INFO ] 0 entities (8804 bytes) transferred in 11.3 seconds

    So I want to know whether I can load data from 127.0.0.1. This is my code:

        appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv

    and the error is:

        D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv
        Downloading data records.
        [INFO ] Logging to bulkloader-log-20100618.162325
        [INFO ] Throttling transfers:
        [INFO ] Bandwidth: 250000 bytes/second
        [INFO ] HTTP connections: 8/second
        [INFO ] Entities inserted/fetched/modified: 20/second
        [INFO ] Batch Size: 10
        [INFO ] Opening database: bulkloader-progress-20100618.162325.sql3
        [INFO ] Opening database: bulkloader-results-20100618.162325.sql3
        Please enter login credentials for localhost
        Email: [email protected]
        Password for [email protected]:
        [INFO ] Connecting to localhost:8100/remote_api
        [ERROR ] Exception during authentication
        Traceback (most recent call last):
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 3169, in Run
            self.request_manager.Authenticate()
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1178, in Authenticate
            remote_api_stub.MaybeInvokeAuthentication()
          File "d:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 542, in MaybeInvokeAuthentication
            datastore_stub._server.Send(datastore_stub._path, payload=None)
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 346, in Send
            f = self.opener.open(req)
          File "D:\Python25\lib\urllib2.py", line 387, in open
            response = meth(req, response)
          File "D:\Python25\lib\urllib2.py", line 498, in http_response
            'http', request, response, code, msg, hdrs)
          File "D:\Python25\lib\urllib2.py", line 425, in error
            return self._call_chain(*args)
          File "D:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "D:\Python25\lib\urllib2.py", line 506, in http_error_default
            raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        HTTPError: HTTP Error 404: Not Found
        [INFO ] Authentication Failed

    So what should I do? Thanks.
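
    The 404 usually means the development server is not serving /remote_api at all, so the remote_api handler has to be mapped in the app's configuration before any client can reach it. Once it is, a minimal Python sketch of connecting to the local dev server (the email value is a placeholder; the dev server does not check credentials, so any email and an empty password are accepted):

        from google.appengine.ext.remote_api import remote_api_stub

        def auth_func():
            # the dev server accepts anything here
            return ("test@example.com", "")

        # attach this process's datastore calls to the dev server's /remote_api endpoint
        remote_api_stub.ConfigureRemoteApi(None, "/remote_api", auth_func, servername="localhost:8100")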

    Read the article

  • How do I get MSDeploy to skip specific folders and file types in folders as a CCNet task

    - by Simon Martin
    I want MSDeploy to skip specific folders and file types within other folders when using sync. Currently I'm using CCNet to call MSDeploy with the sync verb to take websites from a build to a staging server. Because there are files on the destination that are created by the application / user-uploaded files etc., I need to exclude specific folders from being deleted on the destination. Also there are manifest files created by the site that need to remain on the destination. At the moment I've used -enableRule:DoNotDeleteRule but that leaves stale files on the destination.

        <exec>
          <executable>$(MsDeploy)</executable>
          <baseDirectory>$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\</baseDirectory>
          <buildArgs>-verb:sync -source:iisApp="$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\" -dest:iisApp="$(website)/$(websiteFolder)" -enableRule:DoNotDeleteRule</buildArgs>
          <buildTimeoutSeconds>600</buildTimeoutSeconds>
          <successExitCodes>0,1,2</successExitCodes>
        </exec>

    I have tried to use the skip operation but run into problems. Initially I dropped the DoNotDeleteRule and replaced it with (multiple) skip arguments:

        <exec>
          <executable>$(MsDeploy)</executable>
          <baseDirectory>$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\</baseDirectory>
          <buildArgs>-verb:sync -source:iisApp="$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\" -dest:iisApp="$(website)/$(websiteFolder)"
                     -skip:objectName=dirPath,absolutePath="assets"
                     -skip:objectName=dirPath,absolutePath="survey"
                     -skip:objectName=dirPath,absolutePath="completion/custom/complete*.aspx"
                     -skip:objectName=dirPath,absolutePath="completion/custom/surveylist*.manifest"
                     -skip:objectName=dirPath,absolutePath="content/scorecardsupport"
                     -skip:objectName=dirPath,absolutePath="Desktop/docs"
                     -skip:objectName=dirPath,absolutePath="_TempImageFiles"</buildArgs>
          <buildTimeoutSeconds>600</buildTimeoutSeconds>
          <successExitCodes>0,1,2</successExitCodes>
        </exec>

    But this results in the following:

        Error: Source (iisApp) and destination (contentPath) are not compatible for the given operation.
        Error count: 1.

    So I changed from iisApp to contentPath, and instead of dirPath,absolutePath just Directory, like this:

        <exec>
          <executable>$(MsDeploy)</executable>
          <baseDirectory>$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\</baseDirectory>
          <buildArgs>-verb:sync -source:contentPath="$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\" -dest:contentPath="$(website)/$(websiteFolder)"
                     -skip:Directory="assets"
                     -skip:Directory="survey"
                     -skip:Directory="content/scorecardsupport"
                     -skip:Directory="Desktop/docs"
                     -skip:Directory="_TempImageFiles"</buildArgs>
          <buildTimeoutSeconds>600</buildTimeoutSeconds>
          <successExitCodes>0,1,2</successExitCodes>
        </exec>

    and this gives me the error "Illegal characters in path":

        <buildresults>
          Info: Adding MSDeploy.contentPath (MSDeploy.contentPath).
          Info: Adding contentPath (C:\WWWRoot\MySite -skip:Directory=assets -skip:Directory=survey -skip:Directory=content/scorecardsupport -skip:Directory=Desktop/docs -skip:Directory=_TempImageFiles).
          Info: Adding dirPath (C:\WWWRoot\MySite -skip:Directory=assets -skip:Directory=survey -skip:Directory=content/scorecardsupport -skip:Directory=Desktop/docs -skip:Directory=_TempImageFiles).
        </buildresults>
        <buildresults>
          Error: Illegal characters in path.
          Error count: 1.
        </buildresults>

    So I need to know how to configure this task so the folders referenced do not have their contents deleted in a sync, and so that the *.manifest and *.aspx files in the completion/custom folders are also skipped.

    Read the article

  • PHP: How do I loop through every XML file in a directory?

    - by celebritarian
    Hi! I'm building a simple application. It's a user interface to an online order system. Basically, the system is going to work like this: other companies upload their purchase orders to our FTP server. These orders are simple XML files (containing things like customer data, address information, ordered products and the quantities…). I've built a simple user interface in HTML5, jQuery and CSS — all powered by PHP. PHP reads the content of an order (using the built-in features of SimpleXML) and displays it on the web page. So, it's a web app, supposed to always be running in a browser at the office. The PHP app will display the content of all orders. Every fifteen minutes or so, the app will check for new orders. How do I loop through all XML files in a directory? Right now, my app is able to read the content of a single XML file and display it in a nice way on the page. My current code looks like this:

        // pick a random order that I know exists in the Order directory:
        $xml_file = file_get_contents("Order/6366246.xml", FILE_TEXT);
        $xml = new SimpleXMLElement($xml_file);

        // start echo basic order information, like order number:
        echo $xml->OrderHead->ShopPO;

        // more information about the order and the customer goes here…

        echo "<ul>";

        // loop through each order line, and echo all quantities and products:
        foreach ($xml->OrderLines->OrderLine as $orderline) {
            echo "<tr>\n".
                 "<li>".$orderline->Quantity." st.</li>\n".
                 "<li>".$orderline->SKU."</li>\n";
        }

        echo "</ul>";

        // more information about delivery options, address information etc. goes here…

    So, that's my code. Pretty simple. It only needs to do one thing — print out the content of all order files on the screen — so my colleagues and I can see the order, confirm it and deliver it. That's it. But right now — as you can see — I'm selecting one single order at a time, located in the Order directory. But how do I loop through the entire Order directory, and read and display the content of each order (like above)? I'm stuck. I don't really know how you get all (XML) files in a directory and then do something with the files (like reading them and echoing out the data, like I want to). I'd really appreciate some help. I'm not very experienced with PHP/server-side programming, so if you could help me out here I'd be very grateful. Thanks a lot in advance! // Björn (celebritarian at me dot com)

    Read the article

  • Python3 and ftplib uploading files

    - by Teifion
    My Python 2 script uploads files nicely using this method, but Python 3 is presenting problems and I'm stuck as to where to go next (googling hasn't helped).

        from ftplib import FTP
        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        ftp.storbinary('STOR myfile.txt', open('myfile.txt'))

    The error I get is:

        Traceback (most recent call last):
          File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload
            ftp.storlines('STOR myfile.txt', open('myfile.txt'))
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 454, in storbinary
            conn.sendall(buf)
        TypeError: must be bytes or buffer, not str

    I tried altering the code to:

        from ftplib import FTP
        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt'))

    But instead I got this:

        Traceback (most recent call last):
          File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload
            ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt'))
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 450, in storbinary
            conn = self.transfercmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 358, in transfercmd
            return self.ntransfercmd(cmd, rest)[0]
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 329, in ntransfercmd
            resp = self.sendcmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 244, in sendcmd
            self.putcmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 179, in putcmd
            self.putline(line)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 172, in putline
            line = line + CRLF
        TypeError: can't concat bytes to str

    Can anybody point me in the right direction?
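
    A minimal sketch of what usually resolves both errors in Python 3: keep the command as an ordinary str and open the local file in binary mode, since storbinary reads bytes from the file object. The connection values are placeholders, not taken from the question:

        from ftplib import FTP

        ftp_host = "ftp.example.com"   # placeholders
        ftp_user = "user"
        ftp_pass = "secret"

        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        # the command stays a str; the data source must yield bytes, hence 'rb'
        with open("myfile.txt", "rb") as fh:
            ftp.storbinary("STOR myfile.txt", fh)
        ftp.quit()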

    Read the article

  • Running Flask framework on App Engine: Could not find module app.cgi

    - by Linc
    I'm running this Flask example app on App Engine: http://github.com/gigq/flasktodo. You can see on the github page that app.cgi is in the main directory for this project. However, when I run this code I get an error complaining about a missing app.cgi:

        ERROR    2010-05-01 16:43:47,006 dev_appserver.py:2109] Encountered error loading module "app.cgi": <type 'exceptions.ImportError'>: Could not find module app.cgi
        Traceback (most recent call last):
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 2096, in LoadTargetModule
            module_code = import_hook.get_code(module_fullname)
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1279, in Decorate
            return func(self, *args, **kwargs)
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1956, in get_code
            full_path, search_path, submodule = self.GetModuleInfo(fullname)
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1279, in Decorate
            return func(self, *args, **kwargs)
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1908, in GetModuleInfo
            submodule, search_path = self.GetParentSearchPath(fullname)
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1279, in Decorate
            return func(self, *args, **kwargs)
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1887, in GetParentSearchPath
            parent_package = self.GetParentPackage(fullname)
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1279, in Decorate
            return func(self, *args, **kwargs)
          File "/opt/google_appengine/google/appengine/tools/dev_appserver.py", line 1864, in GetParentPackage
            raise ImportError('Could not find module %s' % fullname)
        ImportError: Could not find module app.cgi

    How do I indicate to dev_appserver.py where to look to find it?
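
    For reference, a minimal sketch of what a CGI entry point like app.cgi usually looks like for Flask on the old CGI-based runtime. The module name "application" and the attribute "app" are assumptions, not taken from the flasktodo project, and the usual cause of this error is simply running dev_appserver.py from a directory other than the one containing app.cgi and app.yaml:

        #!/usr/bin/env python
        # app.cgi - CGI bridge that hands the Flask WSGI app to App Engine
        from google.appengine.ext.webapp.util import run_wsgi_app

        from application import app  # assumed module that creates the Flask object

        def main():
            run_wsgi_app(app)

        if __name__ == '__main__':
            main()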

    Read the article

  • Java and .net interoperability

    - by dineshrekula
    I have a C# program through which I am opening a cmd window as a process. In this command window I am running a batch file, and I am redirecting the output of that batch file's commands to a text file. When I run my application everything seems to be OK, but a few times the application gives an error like "Can't access the file. It's being used by another application", and at the same time the cmd window does not get closed. If we close the cmd process through Task Manager, then it writes the content to the file and gets closed. Even though I closed the cmd process, the file handle is still not released, so I am not able to run the application from then on; it always says "Can't access the file." Only after restarting the system does it work. Here is my code:

        Process objProcess = new Process();
        ProcessStartInfo objProInfo = new ProcessStartInfo();
        objProInfo.WindowStyle = ProcessWindowStyle.Maximized;
        objProInfo.UseShellExecute = true;
        objProInfo.FileName = "Batch file path";
        objProInfo.Arguments = "Some Arguments";
        if (Directory.Exists(strOutputPath) == false)
        {
            Directory.CreateDirectory(strOutputPath);
        }
        objProInfo.CreateNoWindow = false;
        objProcess.StartInfo = objProInfo;
        objProcess.Start();
        objProcess.WaitForExit();

    test.bat:

        java classname argument > output.txt

    Here is my question: I am not able to trace where the problem is. How can we see which process is holding a handle on a file? Are there any suggestions for Java and .NET interoperability?

    Read the article

  • Problem with python urllib

    - by mudder
    I'm getting an error whenever I try to pull down a web page with urllib.urlopen. I've disabled Windows Firewall and my AV, so it's not that. I can access the pages in my browser. I even reinstalled Python to rule out it being a broken urllib. Any help would be greatly appreciated.

        >>> import urllib
        >>> h = urllib.urlopen("http://www.google.com").read()
        Traceback (most recent call last):
          File "<pyshell#1>", line 1, in <module>
            h = urllib.urlopen("http://www.google.com").read()
          File "C:\Python26\lib\urllib.py", line 86, in urlopen
            return opener.open(url)
          File "C:\Python26\lib\urllib.py", line 205, in open
            return getattr(self, name)(url)
          File "C:\Python26\lib\urllib.py", line 344, in open_http
            h.endheaders()
          File "C:\Python26\lib\httplib.py", line 904, in endheaders
            self._send_output()
          File "C:\Python26\lib\httplib.py", line 776, in _send_output
            self.send(msg)
          File "C:\Python26\lib\httplib.py", line 735, in send
            self.connect()
          File "C:\Python26\lib\httplib.py", line 716, in connect
            self.timeout)
          File "C:\Python26\lib\socket.py", line 514, in create_connection
            raise error, msg
        IOError: [Errno socket error] [Errno 10061] No connection could be made because the target machine actively refused it
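
    A minimal diagnostic sketch, assuming Python 2.x: error 10061 while the browser still works often points at a proxy setting (environment variable or Windows internet settings) that urllib is picking up, so it is worth printing what urllib thinks the proxy is and then retrying with proxy lookup disabled:

        import urllib

        # show the proxies urllib has auto-detected (env vars / Windows registry)
        print urllib.getproxies()

        # retry with an empty proxies dict, which forces a direct connection
        page = urllib.urlopen("http://www.google.com", proxies={}).read()
        print len(page)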

    Read the article

  • Awk filtering values between two files when regions intersect (any solutions welcome)

    - by user964689
    This is building upon an earlier question, Awk conditional filter one file based on another (or other solutions). I have an awk program that outputs a column from rows in a text file, refGene.txt, if values in that row match 2 out of 3 values in another text file. I need to include an additional criterion for finding a match between the two files: a row should also count as a match if the range of the 2 numerical values specified in each row of file 1 overlaps with the range of the two values in a row of refGene.txt. An example of the lines in file 1:

        chr1 10 20
        chr2 10 20

    and an example line in file 2 (refGene.txt) of the matching columns ($3, $5, $6):

        chr1 5 30

    Currently the awk program does not treat this as a match because, although the first column matches, neither the 2nd nor the 3rd columns do. But I would like to treat this as a match, because the region 10-20 in file 1 is WITHIN the range 5-30 in refGene.txt. However, the second line in file 1 should NOT match, because the first column does not match, which is necessary. If there is a way to include cases where any of the range in file 1 overlaps with any of the range in refGene.txt, that would be really helpful. It should also replace the conditional statements below, since it would also find all the cases currently described. Please let me know if my question is unclear. Any help is really appreciated, thanks in advance! (Solutions do not have to be in awk.) Rubal

        FILES=/files/*txt
        for f in $FILES ;
        do
            awk '
            BEGIN { FS = "\t"; }
            FILENAME == ARGV[1] { pair[ $1, $2, $3 ] = 1; next; }
            {
                if ( pair[ $3, $5, $6 ] == 1 ) {
                    print $13;
                }
            }
            ' $(basename $f) /files/refGene.txt > /files/results/$(basename $f) ;
        done
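
    Since non-awk answers are welcome, a minimal Python sketch of the overlap test, assuming whitespace-separated columns, that file 1 holds chromosome/start/end, and that refGene.txt has the chromosome in column 3, the range in columns 5-6 and the value to print in column 13 (the file layout is taken from the question but unverified):

        import sys
        from collections import defaultdict

        def load_regions(path):
            # chromosome -> list of (start, end) intervals from file 1
            regions = defaultdict(list)
            with open(path) as fh:
                for line in fh:
                    parts = line.split()
                    if len(parts) >= 3:
                        regions[parts[0]].append((int(parts[1]), int(parts[2])))
            return regions

        def overlaps(a_start, a_end, b_start, b_end):
            # two closed intervals overlap when neither lies entirely past the other
            return a_start <= b_end and b_start <= a_end

        regions = load_regions(sys.argv[1])          # file 1
        with open(sys.argv[2]) as ref:               # refGene.txt
            for line in ref:
                cols = line.rstrip("\n").split("\t")
                if len(cols) < 13:
                    continue
                chrom, start, end = cols[2], int(cols[4]), int(cols[5])
                if any(overlaps(start, end, s, e) for s, e in regions[chrom]):
                    print(cols[12])

    Usage would be something like: python overlap_filter.py file1.txt refGene.txt > results.txt, run once per input file as in the shell loop above.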

    Read the article

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach an exception is thrown:

        AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav"));
        --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type

    Here's the file format info according to soxi:

        dB$ soxi orig.wav
        soxi WARN wav: wave header missing FmtExt chunk

        Input File      : 'orig.wav'
        Channels        : 8
        Sample Rate     : 96000
        Precision       : 24-bit
        Duration        : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors
        File Size       : 9.71M
        Bit Rate        : 24.6M
        Sample Encoding : 32-bit Floating Point PCM

    Can anyone suggest the simplest method for getting this audio into Java? I've tried a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a JNI wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio, I just need to analyze it, manipulate it, and then write it out to a new .wav file.

    UPDATE: I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.

    Read the article

  • C# Serialization lock out

    - by Greycrow
    When I try to serialize a class to an XML file I get the exception: "The process cannot access the file 'C:\settings.xml' because it is being used by another process."

        Settings currentSettings = new Settings();

        public void LoadSettings()
        {
            //Load Settings from XML file
            try
            {
                Stream stream = File.Open("settings.xml", FileMode.Open);
                XmlSerializer s = new XmlSerializer(typeof(Settings));
                currentSettings = (Settings)s.Deserialize(stream);
                stream.Close();
            }
            catch //Can't read XML - use default settings
            {
                currentSettings.Name = GameSelect.Items[0].ToString();
                currentSettings.City = MapSelect.Items[0].ToString();
                currentSettings.Country = RaceSelect.Items[0].ToString();
            }
        }

        public void SaveSettings()
        {
            //Save Settings to XML file
            try
            {
                Stream stream = File.Open("settings.xml", FileMode.Create);
                XmlSerializer x = new XmlSerializer(typeof(Settings));
                x.Serialize(stream, currentSettings);
                stream.Close();
            }
            catch
            {
                MessageBox.Show("Unable to open XML File - File in use by other process");
            }
        }

    It appears that when I deserialize, the file stays locked against writing back, even though I closed the stream. Thanks in advance.

    Read the article

  • Take Current Snapshot of DB and send it to FTP in same PHP Scripts: Advice Needed

    - by Rachel
    Not sure if I can do it this way. I want to get a current snapshot of the database and send it via FTP, and both of these steps should be implemented in PHP scripts. Here are the steps I am thinking of right now. In my PHP scripts (basically I am extending PDO into my DAO class and then preparing the query):

        $qry = "SELECT * FROM MyTablename";
        $stmt = $this->prepare($qry);
        $stmt = $this->execute();

    Now I will store $stmt in a CSV file using fputcsv, or I will execute the SQL command from the script itself and then try to store the result in the $file (CSV file). Note here that I do not have any CSV file with me at this point, so basically I will have to create one; let's say it's $file, so then:

        $file = fputcsv($stmt);

    or

        $file = exec("Select * from MyTablename");

    Will this put all records in the file? If yes, then I will use FTP functionality to transfer the file to the FTP folder. I am not sure if this approach would work, and I also have concerns regarding the need of preparing the $qry. Any suggestions or a different approach would be highly appreciated. Thanks!

    Read the article

  • g++ on MacOSX doesn't work with -arch ppc64

    - by Albert
    I am trying to build a Universal binary on MacOSX with g++. However, it doesn't really work. I have tried with this simple dummy code:

        #include <iostream>
        using namespace std;

        int main() {
            cout << "Hello" << endl;
        }

    This works fine:

        % g++ test.cpp -arch i386 -arch ppc -arch x86_64 -o test
        % file test
        test: Mach-O universal binary with 3 architectures
        test (for architecture i386):     Mach-O executable i386
        test (for architecture ppc7400):  Mach-O executable ppc
        test (for architecture x86_64):   Mach-O 64-bit executable x86_64

    However, this does not:

        % g++ test.cpp -arch i386 -arch ppc -arch x86_64 -arch ppc64 -o test
        In file included from test.cpp:1:
        /usr/include/c++/4.2.1/iostream:44:28: error: bits/c++config.h: No such file or directory
        In file included from /usr/include/c++/4.2.1/ios:43,
                         from /usr/include/c++/4.2.1/ostream:45,
                         from /usr/include/c++/4.2.1/iostream:45,
                         from test.cpp:1:
        /usr/include/c++/4.2.1/iosfwd:45:29: error: bits/c++locale.h: No such file or directory
        /usr/include/c++/4.2.1/iosfwd:46:25: error: bits/c++io.h: No such file or directory
        In file included from /usr/include/c++/4.2.1/bits/ios_base.h:45,
                         from /usr/include/c++/4.2.1/ios:48,
                         from /usr/include/c++/4.2.1/ostream:45,
                         from /usr/include/c++/4.2.1/iostream:45,
                         from test.cpp:1:
        /usr/include/c++/4.2.1/ext/atomicity.h:39:23: error: bits/gthr.h: No such file or directory
        /usr/include/c++/4.2.1/ext/atomicity.h:40:30: error: bits/atomic_word.h: No such file or directory
        ...

    Any idea why that is? I have installed Xcode 3.2.2 with all the SDKs it comes with.

    Read the article

  • Play Video File in ASP.NET 3.5 in IIS

    - by Sneha Joshi
    I have developed an application to upload a video to the server and then play it. It runs well when I execute it with the Visual Studio 2008 built-in web server, but when I configure it on IIS the video does not play. Are there any settings needed in IIS to play video? The code of the button click event:

        protected void btnPlayVideo_Click(object sender, EventArgs e)
        {
            try
            {
                string himaSagarURL = this.lnkbtnVideo.Text;
                bool isFullSize = false;
                this.Literal1.Text = this.Play_Video(himaSagarURL, isFullSize);
            }
            catch (Exception ex)
            {
                this.Response.Write(ex.ToString());
            }
        }

    This button click event calls the Play_Video method, which is given below. The code I used for embedding:

        private string Play_Video(string sagarURL, bool isFullSize)
        {
            string himaSagarObject = "";
            sagarURL = sagarURL + "";
            sagarURL = sagarURL.Trim();
            if (sagarURL.Length > 0)
            {
                //Continue.
            }
            else
            {
                throw new System.ArgumentNullException("sagarURL");
            }
            string himaSagarWidthAndHeight = "";
            if (isFullSize)
            {
                himaSagarWidthAndHeight = "";
            }
            else
            {
                himaSagarWidthAndHeight = "width='640' height='480'";
            }
            himaSagarObject = himaSagarObject + "<object classid='CLSID:22D6F312-B0F6-11D0-94AB-0080C74C7E95' id='player' " + himaSagarWidthAndHeight + " standby='Please wait while the object is loaded...'>";
            himaSagarObject = himaSagarObject + "<param name='url' value='" + sagarURL + "' />";
            himaSagarObject = himaSagarObject + "<param name='src' value='" + sagarURL + "' />";
            himaSagarObject = himaSagarObject + "<param name='AutoStart' value='true' />";
            himaSagarObject = himaSagarObject + "<param name='Balance' value='0' />"; //-100 is fully left, 100 is fully right.
            himaSagarObject = himaSagarObject + "<param name='CurrentPosition' value='0' />"; //Position in seconds when starting.
            himaSagarObject = himaSagarObject + "<param name='showcontrols' value='true' />"; //Show play/stop/pause controls.
            himaSagarObject = himaSagarObject + "<param name='enablecontextmenu' value='true' />"; //Allow right-click.
            himaSagarObject = himaSagarObject + "<param name='fullscreen' value='" + isFullSize.ToString() + "' />"; //Start in full screen or not.
            himaSagarObject = himaSagarObject + "<param name='mute' value='false' />";
            himaSagarObject = himaSagarObject + "<param name='PlayCount' value='1' />"; //Number of times the content will play.
            himaSagarObject = himaSagarObject + "<param name='rate' value='1.0' />"; //0.5=Slow, 1.0=Normal, 2.0=Fast
            himaSagarObject = himaSagarObject + "<param name='uimode' value='full' />"; // full, mini, custom, none, invisible
            himaSagarObject = himaSagarObject + "<param name='showdisplay' value='true' />"; //Show or hide the name of the file.
            himaSagarObject = himaSagarObject + "<param name='volume' value='50' />"; // 0=lowest, 100=highest
            himaSagarObject = himaSagarObject + "</object>";
            return himaSagarObject;
        }

    Read the article

  • What is the easiest way to read wav-files using Python [summary]?

    - by Roman
    I want to use Python to access a wav file and write its content in a form which allows me to analyze it (let's say arrays). I heard that "audiolab" is a suitable tool for that (it transforms numpy arrays into wav and vice versa). I installed audiolab, but I had a problem with the version of numpy (I could not "from numpy.testing import Tester"). I had version 1.1.1 of numpy, so I installed a newer version (1.4.0). But then I got a new set of errors:

        Traceback (most recent call last):
          File "test.py", line 7, in <module>
            import scikits.audiolab
          File "/usr/lib/python2.5/site-packages/scikits/audiolab/__init__.py", line 25, in <module>
            from pysndfile import formatinfo, sndfile
          File "/usr/lib/python2.5/site-packages/scikits/audiolab/pysndfile/__init__.py", line 1, in <module>
            from _sndfile import Sndfile, Format, available_file_formats, available_encodings
          File "numpy.pxd", line 30, in scikits.audiolab.pysndfile._sndfile (scikits/audiolab/pysndfile/_sndfile.c:9632)
        ValueError: numpy.dtype does not appear to be the correct type object

    I gave up on audiolab and thought I could use the "wave" package to read in a wav file. I asked a question about that, but people recommended scipy instead. OK, I decided to focus on scipy (I have version 0.6.0). But when I tried to do the following:

        from scipy.io import wavfile
        x = wavfile.read('/usr/share/sounds/purple/receive.wav')

    I get the following:

        Traceback (most recent call last):
          File "test3.py", line 4, in <module>
            from scipy.io import wavfile
          File "/usr/lib/python2.5/site-packages/scipy/io/__init__.py", line 23, in <module>
            from numpy.testing import NumpyTest
        ImportError: cannot import name NumpyTest

    So I gave up on scipy. Can I use just the wave package? I do not need much. I just need to have the content of the wav file in a human readable format, and then I will figure out what to do with it.
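
    A minimal sketch using only the standard library wave module, assuming a 16-bit PCM file (the path comes from the question; files with other sample widths would need a different unpack format):

        import wave
        import struct

        w = wave.open('/usr/share/sounds/purple/receive.wav', 'rb')
        assert w.getsampwidth() == 2  # this sketch only handles 16-bit samples
        n_channels = w.getnchannels()
        n_frames = w.getnframes()
        raw = w.readframes(n_frames)
        w.close()

        # 16-bit samples -> signed short integers, interleaved by channel
        samples = struct.unpack('<%dh' % (n_frames * n_channels), raw)

        # de-interleave into one sequence per channel
        channels = [samples[i::n_channels] for i in range(n_channels)]
        print n_channels, n_frames, channels[0][:10]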

    Read the article

  • Why won't SWFUpload execute the upload.aspx code, and why is it saving all files to the root directory?

    - by Nathan Fast
    I am using SWFUpload v2.2.

    In IE (8):
      If I upload a very tiny file (16kb):
        1) The file appears in the root directory where upload.aspx is located.
        2) Page_Load on upload.aspx.cs is executed.
        3) The file is actually processed by the Page_Load procedure, and the processed file is saved in the correct location.
      If I upload a normal file (1.5 MB):
        1) The file appears in the root directory where upload.aspx is located.

    In Firefox (3.5.7):
      No matter what size the file is:
        1) The file appears in the root directory where upload.aspx is located.

    I have maxRequestLength="30000" executionTimeout="3000" in the web.config just to be sure. In the settings object for the constructor I have:

        file_size_limit: "10 MB",
        file_types: ".",
        file_types_description: "All Files",

    So my questions are: How is the file getting saved in the root directory (and why)? Why does Page_Load only execute when I am using IE and uploading very tiny files?

    Read the article

  • Django generic relation field reports that all() is getting unexpected keyword argument when no args

    - by Joshua
    I have a model which can be attached to other models.

        class Attachable(models.Model):
            content_type = models.ForeignKey(ContentType)
            object_pk = models.TextField()
            content_object = generic.GenericForeignKey(ct_field="content_type", fk_field="object_pk")

            class Meta:
                abstract = True

        class Flag(Attachable):
            user = models.ForeignKey(User)
            flag = models.SlugField()
            timestamp = models.DateTimeField()

    I'm creating a generic relationship to this model in another model:

        flags = generic.GenericRelation(Flag)

    I try to get objects from this generic relation like so:

        self.flags.all()

    This results in the following exception:

        >>> obj.flags.all()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 105, in all
            return self.get_query_set()
          File "/usr/local/lib/python2.6/dist-packages/django/contrib/contenttypes/generic.py", line 252, in get_query_set
            return superclass.get_query_set(self).filter(**query)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 498, in filter
            return self._filter_or_exclude(False, *args, **kwargs)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 516, in _filter_or_exclude
            clone.query.add_q(Q(*args, **kwargs))
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1675, in add_q
            can_reuse=used_aliases)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1569, in add_filter
            negate=negate, process_extras=process_extras)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1737, in setup_joins
            "Choices are: %s" % (name, ", ".join(names)))
        FieldError: Cannot resolve keyword 'object_id' into field. Choices are: content_type, flag, id, nestablecomment, object_pk, timestamp, user

        >>> obj.flags.all(object_pk=obj.pk)
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
        TypeError: all() got an unexpected keyword argument 'object_pk'

    What have I done wrong?
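
    A minimal sketch of the usual fix: because Attachable stores the related id in object_pk rather than the default object_id, the reverse relation has to be told which fields to join on. The field names below come from the question; the owning model is illustrative:

        from django.contrib.contenttypes import generic
        from django.db import models

        class Bookmark(models.Model):  # illustrative owning model, not from the question
            title = models.CharField(max_length=100)
            # point the reverse relation at Attachable's non-default column names
            flags = generic.GenericRelation(
                Flag,  # the Flag model defined above
                content_type_field="content_type",
                object_id_field="object_pk",
            )

    With this in place, obj.flags.all() filters on object_pk instead of the missing object_id field.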

    Read the article

  • JPA joined column allows every value...

    - by Fabio Beoni
    I'm testing JPA, in a simple File/FileVersions case (master/detail tables) with a OneToMany relation, and I have this problem: in the FileVersions table, the field "file_id" (responsible for the relation with the File table) accepts every value, not only values from the File table. How can I use the JPA mapping to limit the input in FileVersion.file_id only to values existing in File.id? My classes are File and FileVersion:

    FILE CLASS

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name="FILE_ID")
        private Long id;

        @Column(name="NAME", nullable = false, length = 30)
        private String name;

        //RELATIONS -------------------------------------------
        @OneToMany(mappedBy="file", fetch=FetchType.EAGER)
        private Collection <FileVersion> fileVersionsList;
        //-----------------------------------------------------

    FILEVERSION CLASS

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name="VERSION_ID")
        private Long id;

        @Column(name="FILENAME", nullable = false, length = 255)
        private String fileName;

        @Column(name="NOTES", nullable = false, length = 200)
        private String notes;

        //RELATIONS -------------------------------------------
        @ManyToOne(fetch=FetchType.EAGER)
        @JoinColumn(name="FILE_ID", referencedColumnName="FILE_ID", nullable=false)
        private File file;
        //-----------------------------------------------------

    and this is the FILEVERSION table:

        CREATE TABLE `JPA-Support`.`FILEVERSION` (
          `VERSION_ID` bigint(20) NOT NULL AUTO_INCREMENT,
          `FILENAME` varchar(255) NOT NULL,
          `NOTES` varchar(200) NOT NULL,
          `FILE_ID` bigint(20) NOT NULL,
          PRIMARY KEY (`VERSION_ID`),
          KEY `FK_FILEVERSION_FILE_ID` (`FILE_ID`)
        ) ENGINE=MyISAM AUTO_INCREMENT=4 DEFAULT CHARSET=latin1

    Read the article

  • Is this XSLT correct for the XML file which I have developed?

    - by atrueguy
    This is my XML file. I need to develop an XSLT for it.

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!--<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">-->
        <!-- Generator: Arbortext IsoDraw 7.0 -->
        <svg width="100%" height="100%" viewBox="0 0 214.819 278.002">
          <g id="Standard_x0020_layer"/>
          <g id="Catalog">
            <line stroke-width="0.353" stroke-linecap="butt" x1="5.839" y1="262.185" x2="209.039" y2="262.185"/>
            <text transform="matrix(0.984 0 0 0.93 183.515 265.271)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174">© 2009 k Co.</text>
            <text transform="matrix(0.994 0 0 0.93 7.235 265.3)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174">087156-8-</text>
            <text transform="matrix(0.995 0 0 0.93 21.708 265.357)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174" font-weight="bold">AB</text>
            <path stroke-width="0.088" stroke-linecap="butt" stroke-dasharray="2.822 1.058" d="M162.037 107.578L174.439 100.417L180.698 104.03"/>
            <g id="AUTOID_20445" class="52971">
              <line stroke-width="0.088" stroke-linecap="butt" x1="68.859" y1="43.621" x2="65.643" y2="45.399"/>
              <text transform="matrix(0.944 0 0 0.93 69.165 43.356)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="2.775" font-weight="bold">52971</text>
            </g>
          </g>
        </svg>

    I have developed an XSLT for this in the following way, but I am failing to produce the desired output. Can anyone help me with this?

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:svg="http://www.w3.org/2000/svg">
          <xsl:template match="/">
            <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
              <fo:layout-master-set>
                <fo:simple-page-master master-name="simple" page-height="11in" page-width="8.5in">
                  <fo:region-body margin="0.7in" margin-top="1.15in" margin-left=".8in"/>
                  <fo:region-before extent="1.5in"/>
                  <fo:region-after extent="1.5in"/>
                  <fo:region-start extent="1.5in"/>
                  <fo:region-end extent="1.5in"/>
                </fo:simple-page-master>
              </fo:layout-master-set>
              <fo:page-sequence master-reference="simple">
                <fo:flow flow-name="xsl-region-body">
                  <fo:block>
                    <fo:instream-foreign-object>
                      <svg:svg xmlns:svg="http://www.w3.org/2000/svg" height="100%" width="100%" viewBox="0 0 214.819 278.002">
                        <xsl:for-each select="svg/g">
                          <svg:g style="stroke:none;fill:#000000;">
                            <svg:path>
                              <xsl:variable name="s">
                                <xsl:value-of select="translate(@d,' ','')"/>
                              </xsl:variable>
                              <xsl:attribute name="d"><xsl:value-of select="translate($s,',',' ')"/></xsl:attribute>
                            </svg:path>
                          </svg:g>
                        </xsl:for-each>
                        <xsl:for-each select="svg/g">
                          <svg:line x1="{$x1}" y1="{$y1}" x2="{$x2}" y2="{$y2}" style="stroke-width: 0.088; stroke: black;"/>
                          <line xmlns="http://www.w3.org/2000/svg" x1="{$x1}" y1="{$y1}" x2="{$x2}" y2="{$y2}" stroke-width="0.088" stroke="black" fill="#000000"/>
                        </xsl:for-each>
                      </svg:svg>
                    </fo:instream-foreign-object>
                  </fo:block>
                </fo:flow>
              </fo:page-sequence>
            </fo:root>
          </xsl:template>
        </xsl:stylesheet>

    Please help me with this.

    Read the article

  • Ruby : UTF-8 IO

    - by subtenante
    I use Ruby 1.8.7. I am trying to parse some text files containing Greek sentences, encoded in UTF-8. (I can't paste sample files here, because they are subject to copyright. Really just some Greek text encoded in UTF-8.) I want, for each file, to parse the file, extract all the words, and make a list of each new word found in this file, all saved to one big index file. Here is my code:

        #!/usr/bin/ruby -KU

        def prepare_line(l)
          l.gsub(/^\s*[ST]\d+\s*:\s*|\s+$|\(\d+\)\s*/u, "")
        end

        def tokenize(l)
          l.split /['·.;!:\s]+/u
        end

        $dict = {}
        $cpt = 0
        $out = File.new 'out.txt', 'w'

        def lesson(file)
          $cpt = $cpt + 1
          file.readlines.each do |l|
            $out.puts l
            l = prepare_line l
            tokenize(l).each do |t|
              unless $dict[t]
                $dict[t] = $cpt
                $out.puts " #{t}\n"
              end
            end
          end
        end

        Dir.new('etc/').each do |filename|
          f = File.new("etc/#{filename}")
          unless File.directory? f
            lesson f
          end
        end

    Here is part of my output:

        ?@???†?†?????????? ?...[snip very long hangul/hanzi mishmash]... ????????†? ???N2 : ?e?te?? (2) µ???µa

    (Note that the puts l part seems to work fine, at the end of the given output line.) Any idea what is wrong with my code? (General comments about Ruby idioms I could use are very welcome, I'm really a beginner.)

    Read the article

  • Carriage Return\Line feed in Java

    - by Manu
    Guys, I have created a text file in a Unix environment using Java code. For writing the text file I am using java.io.FileWriter and BufferedWriter, and for the newline after each row I am using bw.newLine() (where bw is an object of BufferedWriter). I send that text file by attaching it in mail from the Unix environment itself (automated using Unix commands). My issue is: after I download the text file from mail onto a Windows system and open it, the data is not properly aligned; the newLine() character does not seem to be working. I want the same text file alignment as in the Unix environment when I open the text file in a Windows environment. How do I resolve this problem? Please help; thanks in advance. Pasting my Java code here for your reference (the Java code runs in a Unix environment):

        File f = new File(strFileGenLoc);
        BufferedWriter bw = new BufferedWriter(new FileWriter(f, false));
        rs = stmt.executeQuery("select * from jpdata");
        while ( rs.next() ) {
            bw.write(rs.getString(1)==null? "":rs.getString(1));
            bw.newLine();
        }

    Read the article
