Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Android: Where exactly is the Android 1.6 SDK download?

    - by user187532
    Hello friends, I want to install the Android 1.6 SDK. I already have an Android development setup with Eclipse and the Android 1.5 SDK. Every Google search for the Android 1.6 SDK download eventually leads to this link: http://developer.android.com/intl/zh-CN/sdk/index.html The page offers three SDK zip files, but nowhere does it say which SDK version they contain. Why is the Android site so confusing about the version of the SDK setup files? Where exactly can I download the Android 1.6 SDK? Could someone point me to it clearly? Are there any special steps I need to follow to install 1.6 over my existing setup? Thank you.

    Read the article

  • ImageMagick - File Naming

    - by Josh Crowder
    I am using the convert command to convert a PDF to multiple PNGs. I need the naming convention to be slide-##.png, but at the moment the files come out like slide-1.png. Because there are 20+ slides, when I loop through them to add them to the model the order comes up wrong: slide-1.png, slide-10.png, slide-11.png and so on. How can I force convert to use two-digit numbers like 01, 02, 03, or is there a better way to loop through them? This is the code I have at the moment:

      def convert_keynote_to_slides
        system('convert -size 640x300 ' + keynote.queued_for_write[:original].path +
               ' ~/rails/arcticfox/public/system/keynotes/slides/' +
               File.basename( self.keynote_file_name ) + '0%d.png')
        slide_basename = File.basename( self.keynote_file_name )
        files = Dir.entries('/Users/joshcrowder/rails/arcticfox/public/system/keynotes/slides')
        for file in files
          #puts file if file.include?(slide_basename + '-')
          self.slides.build("slide" => "#{file}") if file.include?(slide_basename)
        end
      end
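
    ImageMagick can do the zero-padding itself: the output filename accepts printf-style format specifiers, so asking for %02d instead of 0%d gives slide-00.png, slide-01.png, ... and plain string sorting keeps the order. A minimal sketch (the slides_dir variable stands in for the long output path above, so it is an assumption):

        # request two-digit frame numbers straight from convert
        system("convert -size 640x300 #{keynote.queued_for_write[:original].path} " \
               "#{slides_dir}/#{File.basename(keynote_file_name)}-%02d.png")

        # or sort the existing single-digit names numerically before looping
        files.sort_by { |f| f[/\d+/].to_i }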

    Read the article

  • How do I include full PartCover results with TeamCity 5?

    - by Jim Geurts
    Hi, I'm trying to get PartCover reports to generate correctly in TeamCity 5.0. When I click the Code Coverage tab in the build details, the reports are empty. I'm using the sln2008 build runner and my PartCover settings are as follows:
    Include Patterns: [*]*
    Report XSLT: C:\Program Files\PartCover .NET 2.3\xslt\Report By Assembly.xslt=>ByAssembly.html
    C:\Program Files\PartCover .NET 2.3\xslt\Report By Class.xslt=>ByClass.html
    Bonus points if you can describe how to include those reports (or just the important parts) with the email that TeamCity sends for successful/failed builds. I would like to continue using the sln2008 build runner, if possible, and not a different one.

    Read the article

  • File version maintenance via PHP FTP?

    - by Michael
    Currently I'm working on a "plugin" that will be installed on many different sites, and I was wondering about the best way to maintain the version of its files. Here's what I was thinking: keep a "master copy" of the plugin on a server, then connect via FTP to the target sites and upload the copy, overwriting whatever files they may have. The "plugin" has many different folders and files, so transferring one file at a time would be too tedious. Is there a way to copy an entire folder over at once? Or even better, is there a way to recurse through the folders and check for file differences before uploading, so we only push files that have actually changed instead of re-uploading identical ones?
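
    PHP's FTP extension has no built-in recursive upload, but a short helper can walk the local tree and mirror it. A minimal sketch, assuming $conn is an already-authenticated ftp_connect()/ftp_login() handle (the function and variable names are illustrative):

        // recursively upload a local directory to the remote server
        function upload_dir($conn, $localDir, $remoteDir)
        {
            @ftp_mkdir($conn, $remoteDir);                 // ignore "already exists"
            foreach (scandir($localDir) as $entry) {
                if ($entry === '.' || $entry === '..') continue;
                $local  = "$localDir/$entry";
                $remote = "$remoteDir/$entry";
                if (is_dir($local)) {
                    upload_dir($conn, $local, $remote);    // descend into subfolders
                } else {
                    ftp_put($conn, $remote, $local, FTP_BINARY);
                }
            }
        }

    To upload only changed files, you could compare sizes with ftp_size() or keep a local manifest of md5_file() hashes from the last push and skip entries whose hash has not changed.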

    Read the article

  • Minifying JavaScript in Visual Studio 2010 in release mode

    - by Arturo Molina
    I have an ASP.NET MVC 2 project in Visual Studio 2010. I want to use my plain JavaScript files in debug mode so I can understand what's going on when debugging, but I want to use a minified/compressed version in release mode. I was planning to create some extension methods to include the JS files in each page, something like:
    <%: Html.IncludeJS("/Content/foo.js") %>
    In that extension method I would determine whether I am in debug or release mode and pick the appropriate JS file. The disadvantage here is that I would end up manually compressing/minifying the JS every time I change something. Is there an automated way to compress/minify and include the JS file when compiling in release mode?
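
    For the include half of the problem, a helper along these lines works; this is only a sketch of the idea described in the question (the helper name and the ".min.js" naming convention are assumptions), and the minified copies themselves would still come from a build step such as a YUI Compressor or Microsoft Ajax Minifier MSBuild task that runs only for the Release configuration.

        public static class ScriptExtensions
        {
            // Emit a <script> tag, switching to the pre-minified copy
            // whenever the site is not running with debug="true".
            public static MvcHtmlString IncludeJS(this HtmlHelper html, string path)
            {
                if (!html.ViewContext.HttpContext.IsDebuggingEnabled)
                    path = path.Replace(".js", ".min.js");
                return MvcHtmlString.Create(
                    "<script src=\"" + path + "\" type=\"text/javascript\"></script>");
            }
        }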

    Read the article

  • Looking for Eclipse plugin: Lock source tabs in place

    - by Fredrik
    When working with Java code in Eclipse I typically juggle between 20-40 different files, but there are usually just two that I actively work with at a time (test and code). Going back to the code where I want to work after having debugged through 5-10 classes can be a pain. So what I would like is to be able to right-click on the tab of a class and choose to lock it in place. Those classes would then always be available as the first and second tabs furthest to the left in the editor. All other classes (and other files opened in the editor) would then fight over the remaining space in the editor like today. Is there a plugin like this? My google-fu might not be strong enough, because I find nothing.

    Read the article

  • Easy way to Populate a Dictionary<string,List<string>>

    - by zion
    Greetings gurus, my objective is to create a dictionary of lists; does a simpler technique exist? I prefer List(T) to IEnumerable(T), which is why I chose the Dictionary of Lists over ILookup or IGrouping. The code works, but it seems like a messy way of doing things:

      string[] files = Directory.GetFiles (@"C:\test");
      Dictionary<string,List<string>> DataX = new Dictionary<string,List<string>>();
      foreach (var group in files.GroupBy (file => Path.GetExtension (file)))
      {
          DataX.Add (group.Key, group.ToList());
      }
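
    The grouping and the dictionary construction can be collapsed into a single LINQ expression with ToDictionary; a minimal sketch of the same result (assuming the usual System.IO, System.Linq and System.Collections.Generic usings):

        string[] files = Directory.GetFiles(@"C:\test");

        // one extension -> list of matching file paths
        Dictionary<string, List<string>> dataX = files
            .GroupBy(file => Path.GetExtension(file))
            .ToDictionary(group => group.Key, group => group.ToList());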

    Read the article

  • How do I publish a fully trusted InfoPath form with code-behind to SharePoint?

    - by JanardhanReddy
    Hi all, I created an InfoPath form that has C# code behind it. I set the security option to "Full Trust" so it can access the InfoPath object model, and it should open in the browser. Finally, I published the form to a SharePoint site as an admin-approved form template. But when I try to open it, it does not open and gives this error: "InfoPath cannot create a new or blank form. InfoPath cannot open the form. To fix this problem, contact your system administrator." The error details give the following message: "The form template is trying to access files and settings on your computer. InfoPath cannot grant access to these files and settings because the form template is not fully trusted. For a form to run with full trust, it must be installed or digitally signed with a certificate." Please give me a solution.

    Read the article

  • How to structure Python package that contains Cython code

    - by Craig McQueen
    I'd like to make a Python package containing some Cython code. I've got the Cython code working nicely. However, now I want to know how best to package it. For most people, who just want to install the package, I'd like to include the .c file that Cython creates, and arrange for setup.py to compile it to produce the module. Then the user doesn't need Cython installed in order to install the package. But for people who may want to modify the package, I'd also like to provide the Cython .pyx files, and somehow also allow setup.py to build them using Cython (so those users would need Cython installed). How should I structure the files in the package to cater for both these scenarios? The Cython documentation gives a little guidance, but it doesn't say how to make a single setup.py that handles both the with/without-Cython cases.
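
    A commonly used arrangement is to ship both the .pyx and the generated .c file in the source distribution and let setup.py pick whichever it can build; a minimal sketch (the module and package names are placeholders):

        # setup.py: build from .pyx when Cython is available, else from the shipped .c
        from distutils.core import setup
        from distutils.extension import Extension

        try:
            from Cython.Distutils import build_ext
            source = "mypackage/fast.pyx"       # developers with Cython installed
            cmdclass = {"build_ext": build_ext}
        except ImportError:
            source = "mypackage/fast.c"         # plain users build the pre-generated C
            cmdclass = {}

        setup(
            name="mypackage",
            packages=["mypackage"],
            ext_modules=[Extension("mypackage.fast", [source])],
            cmdclass=cmdclass,
        )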

    Read the article

  • Non-existent file in limbo prevents push to remote branch (Bazaar VCS)

    - by das_weezul
    Hi! I use Bazaar VCS to version files locally on my notebook. When I'm in the office I merge the changes to a repository on a Windows share and also push all the files there (for backup reasons). My problem: the last push resulted in an error, because I had added a file with a very long filename (I have had that problem before ... Python doesn't like long filenames). So I removed the file (I didn't need it anyway) and forgot about the problem for a while, because committing still worked fine. The next time I wanted to push my new revision I got a new error:
    bzr: ERROR: [Error 3] Das System kann den angegebenen Pfad nicht finden: u'//path/to/remote/branch/.bzr/checkout/limbo/new-8/loooooooongfilename.xls'
    (translation: bzr: ERROR: [Error 3] The system cannot find the specified path)
    What I've tried:
    - Deleting the limbo folder -- the limbo folder doesn't exist
    - Creating the missing path with a dummy file -- Bazaar locks the branch -- unlock -- same problem as before
    - bzr check -- everything is fine -- no success
    - bzr reconcile -- no success
    Thanks for reading ;o)

    Read the article

  • Backing up a remote VPS?

    - by ajsie
    I am using a VPS at a hosting company. I can access it through SSH and I have set up WebDAV. I asked them if they back up the VPS and they told me they make a backup every day. But I wonder if I should also back up the important files on my VPS to my own local computer? It seems unsafe to upload all my important files and then delete them from my local machine without knowing whether they are backed up properly or not. If I should make backups, how and how often should I do them? What program could I use? Any best practices I should know about? Thanks!
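
    One simple approach is to pull the important directories down over SSH with rsync and run it from cron; a minimal sketch (the user, host and paths are illustrative):

        # mirror the web root and config down to the local machine
        rsync -avz --delete user@your-vps:/var/www/  ~/backups/vps/www/
        rsync -avz          user@your-vps:/etc/      ~/backups/vps/etc/

    How often depends on how fast the data changes; nightly via cron is a common starting point, and keeping a few dated copies (or using a tool such as rsnapshot or duplicity) protects against deleting something locally and then syncing the deletion.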

    Read the article

  • Single file changed: intrusion or corruption?

    - by Michaël Witrant
    rkhunter reported a single file change on a virtual server (netstat binary). It didn't report any other warning. The change was not the result of a package upgrade (I reinstalled it and the checksum is back as it was before). I'm wondering whether this is a file corruption or an intrusion. I guess an intrusion would have changed many other files watched by rkhunter (or none if the intruder had access to rkhunter's database). I disassembled both binaries with objdump -d and stored the diff here: https://gist.github.com/3972886 The full dump diff generated with objdump -s is here : https://gist.github.com/3972937 I guess a file corruption would have changed either large blocks or single bits, not small blocks like this. Do these changes look suspicious? How could I investigate more? The system is running Debian Squeeze.
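
    As a follow-up check (a sketch; these commands assume a Debian system with the debsums package available), the file can be re-verified against the package's own checksums, independently of rkhunter's database:

        dpkg -S /bin/netstat                       # confirm which package owns the binary (net-tools)
        debsums -c net-tools                       # list files in that package whose md5sum has changed
        apt-get install --reinstall net-tools      # restore the packaged copy

    If the reinstalled binary matches the package checksum but later changes again, that points more towards tampering than random corruption; comparing against a known-good offline copy of the file would be a sensible next step.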

    Read the article

  • Copy Structure To Another Program

    - by Steven
    Long story, long: I am adding a web interface (ASP.NET, VB) to a data acquisition system developed with LabVIEW which outputs raw data files. These raw data files are the binary representation of a LabVIEW cluster (essentially a structure). LabVIEW provides functions to instantiate a class or structure, or call a method, defined in a .NET DLL file. I plan to create a DLL containing a structure definition and a class with methods to transfer the structure. When the webpage requests data, it would call a LabVIEW executable with a filename parameter. The LabVIEW code would instantiate the structure, populate it from the data file, then call the method to transfer the data back to the website. Long story, short: how do you recommend I transfer (copy) an instance of a structure from one .NET program to a VB.NET program? Ideas considered: sockets, temp file, XML file, config file, web services, CSV, some type of serialization, shared memory.
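
    Of the ideas listed, serializing the structure to an XML file is probably the simplest to call from LabVIEW and read from VB.NET, since both sides only need to reference the shared DLL. A minimal sketch of what that DLL could contain (the type name and fields are placeholders, not the actual cluster layout):

        // shared assembly referenced by both the LabVIEW-called side and the web site
        public class AcquisitionRecord
        {
            public DateTime Timestamp;
            public double[] Samples;
        }

        public static class RecordTransfer
        {
            public static void Save(AcquisitionRecord r, string path)
            {
                var ser = new System.Xml.Serialization.XmlSerializer(typeof(AcquisitionRecord));
                using (var fs = System.IO.File.Create(path))
                    ser.Serialize(fs, r);
            }

            public static AcquisitionRecord Load(string path)
            {
                var ser = new System.Xml.Serialization.XmlSerializer(typeof(AcquisitionRecord));
                using (var fs = System.IO.File.OpenRead(path))
                    return (AcquisitionRecord)ser.Deserialize(fs);
            }
        }

    WCF or a plain socket would avoid the intermediate file if the two processes ever need to exchange data while both are running.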

    Read the article

  • What is ltmain.sh, and why does automake say it is missing? What is a good auto(make/conf/etc) generator?

    - by gersh
    I just want to develop a C app on Linux with the auto(make/conf/...) files generated automatically. I tried generating them with EDE and Anjuta, but neither seems to generate Makefile.am. So I tried running automake, and it says "ltmain.sh" isn't found. Is there an easy way to generate the basic build files for Linux C/C++ apps? What is the standard practice? Do most people write these files themselves?
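
    ltmain.sh is a support script that libtoolize copies into the project; automake complains when configure.ac enables libtool but that script has not been installed yet. A sketch of the usual bootstrap sequence (assuming autoconf, automake and libtool are installed, and a configure.ac/Makefile.am already exist):

        autoreconf --install --verbose   # runs aclocal, libtoolize, autoconf, automake as needed
        ./configure
        make

    In practice most projects keep hand-written configure.ac and Makefile.am files (they are short) and let autoreconf generate everything else.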

    Read the article

  • Understanding memory and CPU speed

    - by tipu
    Firstly, I am working on a Windows XP 64 machine with 4 GB of RAM and a 2.29 GHz quad-core CPU. I am indexing 220,000 lines of text that are more or less the same length. These are divided into 15 equally sized files. File 1/15 takes 1 minute to index. As the script indexes more files, it seems to take much longer, with file 15/15 taking 40 minutes. My understanding is that the more I put in memory, the faster the script is. The dictionary is indexed in a hash, so fetch operations should be O(1). I am not sure where the script would be hanging the CPU. I have the script here.

    Read the article

  • PHP/GnuPG Decryption -- Syntax error?

    - by NeedBeerStat
    I'm using PHP to invoke gpg, but I'm getting a pipe error. I thought that if I read the password in from a file, I could then pipe it to the command itself, but I keep getting:
    Syntax error: "|" unexpected
    Here's the code (note: the files are being iterated over in a foreach loop):

      foreach ($files as $k => $v) {
          $encrypted = $v;
          $filename  = explode('.', $v);
          $decrypted = $filename[0] . '.txt';
          shell_exec("echo $passphrase | gpg --no-tty --passphrase-fd 0 -o $decrypted -d $encrypted");
      }
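
    That error usually means the shell never saw a well-formed pipeline, which can happen when one of the interpolated values contains spaces or shell metacharacters. A minimal sketch of the same call with every value quoted (variable names follow the loop above; the diagnosis is an assumption worth verifying by echoing the built command):

        $cmd = sprintf(
            'echo %s | gpg --no-tty --passphrase-fd 0 -o %s -d %s',
            escapeshellarg($passphrase),
            escapeshellarg($decrypted),
            escapeshellarg($encrypted)
        );
        shell_exec($cmd);

    Echoing the passphrase also makes it visible in the process list, so gpg's --passphrase-file option, or proc_open() writing the passphrase to the child's stdin, is a safer alternative.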

    Read the article

  • Why am I getting 404 in Django?

    - by alex
    After installing this Django module: http://code.google.com/p/django-compress/wiki/Installation I am getting 404s on my static media files. This Django module is supposed to "compress" the JavaScript/CSS files, which I guess is why I'm getting the 404s. The problem is, I don't want this anymore. When I installed it, I ran "python setup.py install". How do I uninstall it? I just want to revert everything back to normal so I don't get any 404 errors.
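
    "python setup.py install" does not record an uninstaller, so reverting is a manual step: remove the app from INSTALLED_APPS (and any of its template tags you added), then delete the installed package from site-packages. A sketch, where the module name "compress" and the site-packages path are assumptions to be checked against your own install:

        # find where the installed module actually lives, then remove it
        python -c "import compress, os; print os.path.dirname(compress.__file__)"
        rm -rf /usr/lib/python2.6/site-packages/compress/

    The 404s themselves usually clear up once the templates go back to referencing the original CSS/JS files directly instead of the generated, compressed versions.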

    Read the article

  • Indy 10 FTP empty list

    - by Lobuno
    Hello! I have been receiving reports from some of my users that, when using idFTP.List() against some servers (MS FTP), the listing comes back empty (no files) when in reality there are (non-hidden) files in the current directory. Might this be a case of a missing parser? The funny thing is, when I use the program to get the list from MY server (MS FTP on Windows Server 2003) everything seems OK, but on some servers I've been hitting this problem. Using the latest Indy 10 on Delphi 2010. Any ideas?

    Read the article

  • PerlIO in Windows PowerShell and CMD.exe

    - by Evan Carroll
    Apparently, a Perl script I have produces two different output files depending on whether I run it under Windows PowerShell or cmd.exe. The script can be found at the bottom of this question. The file handle is opened with IO::File; I believe that PerlIO is doing some screwy stuff. It seems as if under cmd.exe a much more compact encoding is chosen (4.09 KB), compared to PowerShell, which generates a file nearly twice the size (8.19 KB). This script takes a shell script and generates a Windows batch file. It seems like the one generated under cmd.exe is just regular ASCII (1-byte characters), while the other one appears to be UTF-16 (first two bytes FF FE). Can someone verify and explain why PerlIO works differently under Windows PowerShell than under cmd.exe? Also, how do I explicitly get an ASCII-magic PerlIO filehandle using IO::File? Currently, only the file generated with cmd.exe is executable. The UTF-16 .bat (I think that's the encoding) is not executable by either PowerShell or cmd.exe. BTW, we're using Perl 5.12.1 for MSWin32.

      #!/usr/bin/env perl
      use strict;
      use warnings;
      use File::Spec;
      use IO::File;
      use IO::Dir;
      use feature ':5.10';

      my $bash_ftp_script = File::Spec->catfile( 'bin', 'dm-ftp-push' );
      my $fh = IO::File->new( $bash_ftp_script, 'r' ) or die $!;
      my @lines = grep $_ !~ /^#.*/, <$fh>;
      my $file = join '', @lines;
      $file =~ s/ \\\n/ /gm;
      $file =~ tr/'\t/"/d;
      $file =~ s/ +/ /g;
      $file =~ s/\b"|"\b/"/g;
      my @singleLnFile = grep /ncftp|echo/, split $/, $file;
      s/\$PWD\///g for @singleLnFile;
      my $dh = IO::Dir->new( '.' );
      my @files = grep /\.pl$/, $dh->read;
      say 'echo off';
      say "perl $_" for @files;
      say for @singleLnFile;
      1;
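
    For what it's worth, the difference is usually not PerlIO but the redirection: in Windows PowerShell the ">" operator goes through Out-File, whose default encoding is UTF-16LE ("Unicode"), while cmd.exe redirection writes the program's raw bytes. A sketch of a workaround, assuming the batch file is produced by redirecting the script's output (the script name here is illustrative):

        # force an ASCII batch file when generating it from PowerShell
        perl make-batch.pl | Out-File -Encoding ascii dm-ftp-push.bat

    Having the Perl script open and write the .bat file itself with IO::File, instead of printing to STDOUT and redirecting, sidesteps the issue entirely.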

    Read the article

  • setIncludesSubentities: in an NSFetchRequest is broken for entities across multiple persistent stores

    - by SG
    Prior art which doesn't quite address this: http://stackoverflow.com/questions/1774359/core-data-migration-error-message-model-does-not-contain-configuration-xyz
    I have narrowed this down to a specific issue. It takes a minute to set up, though; please bear with me. The gist of the issue is that a persistentStoreCoordinator (apparently) cannot preserve the part of an object graph where a managedObject is marked as a subentity of another when they are stored in different files. Here goes...
    1) I have 2 xcdatamodel files, each containing a single entity. At runtime, when the managed object model is constructed, I manually define one entity as subentity of another using setSubentities:. This is because defining subentities across multiple files in the editor is not supported yet. I then return the complete model with modelByMergingModels.

      //Works!
      [mainEntity setSubentities:canvasEntities];
      NSLog(@"confirm %@ is super for %@", [[[canvasEntities lastObject] superentity] name], [[canvasEntities lastObject] name]);
      //Output: "confirm Note is super for Browser"

    2) I have modified the persistentStoreCoordinator method so that it sets a different store for each entity. Technically, it uses configurations, and each entity has one and only one configuration defined.

      //Also works!
      for ( NSString *configName in [[HACanvasPluginManager shared].registeredCanvasTypes valueForKey:@"viewControllerClassName"] ) {
          storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:[configName stringByAppendingPathExtension:@"sqlite"]]];
          //NSLog(@"entities for configuration '%@': %@", configName, [[[self managedObjectModel] entitiesForConfiguration:configName] valueForKey:@"name"]);
          //Output: "entities for configuration 'HATextCanvasController': (Note)"
          //Output: "entities for configuration 'HAWebCanvasController': (Browser)"
          if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:configName URL:storeUrl options:options error:&error])
          //etc

    3) I have a fetchRequest set for the parent entity, with setIncludesSubentities: and setAffectedStores: just to be sure we get both 1) and 2) covered. When inserting objects of either entity, they both are added to the context and they both are fetched by the fetchedResultsController and displayed in the tableView as expected.

      // Create the fetch request for the entity.
      NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
      [fetchRequest setEntity:entity];
      [fetchRequest setIncludesSubentities:YES]; //NECESSARY to fetch all canvas types
      [fetchRequest setSortDescriptors:sortDescriptors];
      [fetchRequest setFetchBatchSize:20]; // Set the batch size to a suitable number.
      [fetchRequest setAffectedStores:[[managedObjectContext persistentStoreCoordinator] persistentStores]];
      [fetchRequest setReturnsObjectsAsFaults:NO];

    Here is where it starts misbehaving: after closing and relaunching the app, ONLY THE PARENT ENTITY is fetched. If I change the entity of the request using setEntity: to the entity for 'Note', all notes are fetched. If I change it to the entity for 'Browser', all the browsers are fetched. Let me reiterate that during the run in which an object is first inserted into the context, it will appear in the list. It is only after save and relaunch that a fetch request fails to traverse the hierarchy. Therefore, I can only conclude that it is the storage of the inheritance that is the problem.
    Let's recap why:
    - Both entities can be created, inserted into the context, and viewed, so the model is working
    - Both entities can be fetched with a single request, so the inheritance is working
    - I can confirm that the files are being stored separately and objects are going into their appropriate stores, so saving is working
    - Launching the app with either entity set for the request works, so retrieval from the store is working
    - This also means that traversing different stores with the request is working
    - By using a single store instead of multiple, the problem goes away completely, so creating, storing, fetching, viewing etc. are working correctly
    This leaves only one culprit (to my mind): the inheritance I'm setting with setSubentities: is effective only for objects created during the session. Either objects/entities are being stored stripped of the inheritance info, or entity inheritance as defined programmatically only applies to new instances, or both. Either of these is unacceptable. Either it's a bug or I am way, way off course. I have been at this every which way for two days; any insight is greatly appreciated. The current workaround - just using a single store - works completely, except it won't be future-proof in the event that I remove one of the models from the app etc. It also boggles the mind, because I can't see why you would have all this infrastructure for storing across multiple stores and for setting affected stores in fetch requests if it by core definition (of setSubentities:) doesn't work.

    Read the article

  • PHP - a different open_basedir for each virtual host

    - by Lopoc
    Hi, I've come across this problem: I have a server running Apache and PHP. We host many virtual hosts, but we've noticed that a potentially malicious user could use his web space to browse other users' files (via a simple PHP script) and even system files; this can happen because of PHP's permissions. A way to avoid it is to set the open_basedir variable in php.ini. This is very simple on a single-host system, but with virtual hosts there would be one basedir per host. How can I set this basedir for each user/host? Is there a way to let Apache derive the PHP restriction from the owner of the requested PHP file? E.g. /home/X_USER/index.php is owned by X_USER; when Apache serves index.php it knows the file's path and owner, and I'm simply looking for a way to set PHP's basedir to that path. Thanks in advance, Lopoc
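
    With mod_php, open_basedir can be set per virtual host from the Apache configuration rather than php.ini; a minimal sketch of one vhost (the names and paths are illustrative):

        <VirtualHost *:80>
            ServerName example-user.com
            DocumentRoot /home/X_USER/public_html
            # confine this site's PHP to its own home directory (plus /tmp for uploads/sessions)
            php_admin_value open_basedir "/home/X_USER/:/tmp/"
        </VirtualHost>

    Running PHP as the file's owner (the other half of the question) needs something like suPHP, suexec with CGI/FastCGI, or per-site php-fpm pools instead of plain mod_php.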

    Read the article

  • How to set up .htaccess to rewrite to a different folder

    - by Guy
    I'm moving my site to a new host, but I need my current server to continue handling requests (not all files can be moved to the new server). So I added a parked domain to my old server (old.mydomain.com) and I want all requests to it to be rewritten to the files from the old site. My old site (mydomain.com) was hosted internally in a folder (/public_html/mydomain/) and I want all requests to old.mydomain.com to be rewritten to the same folder. So if mydomain.com/blog was internally at /public_html/mydomain/blog, I now want old.mydomain.com/blog to also reach /public_html/mydomain/blog. Here is the .htaccess I'm trying to use:

      RewriteCond %{HTTP_HOST} ^old\.mydomain\.com/*
      RewriteRule ^(.*)$ mydomain/$1 [NC,L]

    but for some reason, as soon as I add the $1 in the rewrite rule, I get an internal error. Any ideas?
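
    The internal error here is commonly a rewrite loop: the rewritten URL (mydomain/...) still matches the rule, gets rewritten again, and Apache gives up after its internal recursion limit. A sketch of a guarded version (the domain and folder names follow the question; the loop diagnosis is an assumption worth confirming in the error log):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^old\.mydomain\.com$ [NC]
        # stop the rule from matching its own output and looping
        RewriteCond %{REQUEST_URI} !^/mydomain/
        RewriteRule ^(.*)$ /mydomain/$1 [L]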

    Read the article

  • Git: How do I push a project that was downloaded from source?

    - by JZ
    I worked with a graphic designer who did not clone from my GitHub account; he downloaded the project as a source archive rather than using the command "git clone". A month has gone by since he pulled the files, and now I want to do the following tasks:
    1. Create a new branch
    2. Push the graphic designer's project into that branch
    3. Merge his branch with master
    I've tried following the GitHub forking guide without much luck; when I attempt to push the files into a new branch I get an error:
    fatal: Not a git repository (or any of the parent directories): .git
    How do I do this?
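
    The "Not a git repository" error just means the commands are being run inside the designer's downloaded folder, which has no .git directory. One workflow sketch (the repository URL, paths and branch name are illustrative) is to work inside a real clone and copy his tree over it:

        git clone git@github.com:you/project.git
        cd project
        git checkout -b designer-work             # 1. new branch
        cp -R /path/to/designer/files/. .         # 2. overlay his files onto the clone
        git add -A
        git commit -m "Import designer's changes"
        git push origin designer-work
        git checkout master                       # 3. merge into master
        git merge designer-work
        git push origin master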

    Read the article

  • Best Design pattern for social media file transfer

    - by Onema
    Our system lets clients link their accounts with different social media sites such as YouTube, Vimeo, Facebook, MySpace and so on. One of the benefits we would like to give users is the ability to transfer, update and delete files they have uploaded to our sites and push them to the social media sites mentioned above. These files could be videos, images or audio. We started out thinking about a strategy pattern, as all of these sites share a common process (authentication, connection, using the API to transfer/edit/delete the file), but we soon realized that it may not work, as we may want to use some of the extended functionality that is specific to each service (e.g. associate a YouTube video with a channel, or upload images to a specific album on Facebook, and much, much more...). My question is, what would be the best structural design pattern to use for this scenario?
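
    One common compromise keeps the strategy-style common interface and moves the service-specific extras into small optional capability interfaces that concrete services implement only when they support them; a sketch of the idea in PHP (every interface and method name here is illustrative, not an established API):

        interface MediaService {
            public function authenticate($credentials);
            public function upload($localPath);           // returns a remote id
            public function update($remoteId, $localPath);
            public function delete($remoteId);
        }

        // optional capabilities, checked at runtime with instanceof
        interface SupportsAlbums {
            public function uploadToAlbum($localPath, $albumId);
        }

        interface SupportsChannels {
            public function assignToChannel($remoteId, $channelId);
        }

    Calling code works against MediaService for the shared flow and tests "instanceof SupportsAlbums" (or similar) before using the extras, which is essentially the strategy pattern combined with per-service adapters.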

    Read the article
