Search Results

Search found 7046 results on 282 pages for 'component downloads'.


  • HTTP Handler error when downloading files - SSL

    - by Chiefy
    OK, big problem, as this is affecting two projects on our new server. We have files that are downloaded by users via an HttpHandler. Since moving the site to the new server and enabling SSL, the downloads have stopped working and we get the error message "Unable to download DownloadDocument.ashx from site". DownloadDocument.ashx is the handler registered in web.config, and the button that goes there is a hyperlink with the id of the document as a querystring. I've read the article at http://support.microsoft.com/kb/316431 and a few other questions on this site, but nothing seems to be working. The problem only happens in IE, and everything works fine when I run it on the server over http instead of https. My current code for the request:

        public override void HandleRequest(HttpContext context)
        {
            Guid guid = new Guid(context.Request.QueryString["ID"]);
            DataTable dt = Documents.GetDocument(guid);
            if (dt != null)
            {
                context.Response.Cache.SetCacheability(HttpCacheability.Private);
                context.Response.AddHeader("content-disposition", string.Format("attachment; filename={0}", dt.Rows[0]["DocumentName"].ToString()));
                context.Response.AddHeader("Content-Transfer-Encoding", "binary");
                context.Response.AddHeader("Content-Length", ((byte[])dt.Rows[0]["Document"]).Length.ToString());
                context.Response.ContentType = string.Format("application/{0}", dt.Rows[0]["Extension"].ToString().Remove(0, 1));
                context.Response.Buffer = true;
                context.Response.BinaryWrite((byte[])dt.Rows[0]["Document"]);
                context.Response.Flush();
                context.Response.End();
            }
        }

    I've used the base handler from http://haacked.com/archive/2005/03/17/AnAbstractBoilerplateHttpHandler.aspx. Any ideas on what this might be and how we can fix it? Thanks in advance for all responses.
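    One commonly cited workaround for the situation KB 316431 describes (IE refusing to save attachments over HTTPS when the response is marked non-cacheable) is to make the response explicitly cacheable for a short time. This is a hedged sketch of that idea, not taken from the original post; the helper name, content type, and max-age value are assumptions:

        using System;
        using System.Web;

        public static class SslDownloadHelper
        {
            // Sketch: send an attachment over HTTPS without "no-cache" semantics,
            // so IE can write the temporary copy it needs for the save dialog.
            public static void Send(HttpContext context, byte[] data, string fileName)
            {
                HttpResponse response = context.Response;
                response.ClearHeaders();

                // Private + a small max-age avoids the Pragma/Cache-Control: no-cache
                // combination that KB 316431 identifies as the trigger.
                response.Cache.SetCacheability(HttpCacheability.Private);
                response.Cache.SetMaxAge(TimeSpan.FromMinutes(1));

                response.ContentType = "application/octet-stream";
                response.AddHeader("Content-Disposition", string.Format("attachment; filename=\"{0}\"", fileName));
                response.AddHeader("Content-Length", data.Length.ToString());

                response.BinaryWrite(data);
                response.End();
            }
        }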

    Read the article

  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there, for my bachelor's thesis I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, and thus suited for parallelization). The numbers (not found very often) will be sent to a master server, which collects them. Once enough numbers have been collected, the master server does the rest of the computation, which cannot easily be parallelized. Anyhow, on to the technicalities. I was thinking of using Boost::Asio for a socket client/server implementation for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as described on the Boost Getting Started page:

        bootstrap
        .\bjam

    It all compiled just fine. Then I tried to compile one of the tutorial examples, client.cpp, from Asio (found here.. edit: can't post link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp

    But I get this error:

        /out:client.exe
        client.obj
        LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'

    Anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.

    Read the article

  • Loading specific icon size into TIcon from stream (Delphi XE)

    - by moodforaday
    My application downloads and displays favicons for specific websites. I followed Bing's solution for detecting the image format from the stream, but have hit another snag. Assuming an actual icon image, the code goes like this:

        var
          icon : TIcon;
        begin
          icon := TIcon.Create;
          try
            icon.LoadFromStream( faviconStream );
            spFavicon.Glyph.Assign( icon );
          finally
            icon.Free;
          end;
        end;

    (spFavicon is a TRzGlyphStatus from Raize Components; its Glyph property is a TBitmap.) Now, this works, but sometimes the downloaded icon contains multiple images in different sizes, e.g. 32x32 in addition to the expected 16x16. For some reason the control's Glyph property picks the larger size. How can I load only the 16x16 size into the TIcon, or from the TIcon into the TBitmap? Test favicon: http://www.kpfa.org/favicon.ico On edit: if at all possible, I'd rather avoid saving the icon to a file first.

    Read the article

  • Are there Windows API binaries for Subversion or do I have to build SVN to call the API from Windows

    - by JeffH
    I want to call a Subversion API from a Visual Studio 2003 C++ project. I know there are threads here, here, here, and here that tell how to get started with C#/.NET on Windows (the consensus seems to be SharpSvn, which I've used easily and successfully on another project), but that's not what I want. I've read the chapter on using APIs in the red-bean book, which says:

        Subversion is primarily a set of C libraries, with header (.h) files that live in the
        subversion/include directory of the source tree. These headers are copied into your system
        locations (e.g., /usr/local/include) when you build and install Subversion itself from source.
        These headers represent the entirety of the functions and types meant to be accessible by
        users of the Subversion libraries.

    I'd like to use CollabNet Subversion, but there don't seem to be API binary downloads, and I'd just as soon not build the whole thing if I can avoid it. Considering another approach, I found RapidSVN's C++ API, but it doesn't appear to offer Windows API binaries either and seems to require building SVN (which I would be willing to do as a last choice, if RapidSVN's API is higher-level than the stock SVN offering). Does calling the API from C++ on Windows have to be this much more work compared to using SharpSvn under .NET, or is there something I haven't found that would help me achieve my goal?

    Read the article

  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat, in order to suit Eclipse better):

        -product
         |
         |-parent
         |-core
         |-opt
         |-all

    Parent contains an aggregate project with core, opt and all as modules. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies. I am now trying to produce the following artifacts:

        product-core.jar
        product-core-src.jar
        product-core-with-dependencies.jar
        product-opt.jar
        product-opt-src.jar
        product-opt-with-dependencies.jar
        product-all.jar
        product-all-src.jar
        product-all-with-dependencies.jar

    Most of them are fairly straightforward to produce. I do have some trouble with the aggregating artifacts, though. I have managed to make product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive dependencies, and this works fine. The same technique also lets me make product-all-with-dependencies.jar. However, I recently found out that you can use the source:aggregate goal in the source plugin to aggregate the sources of the entire aggregate project. The same is true for the javadoc plugin, which also aggregates through the parent project. So I am torn between my 'all' module approach and ditching the 'all' module to just use the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or to aggregate javadoc in the 'all' project? Or should I just keep both? Thanks

    Read the article

  • strange memory error when deleting object from Core Data

    - by llloydxmas
    I have an application that downloads an XML file, parses it, and creates Core Data objects while doing so. In the parse code I have a function called 'emptyDataContext' that removes all items from Core Data before creating replacements from the XML file. The method looks like this:

        -(void) emptyDataContext
        {
            NSMutableArray *mutableFetchResults = [CoreDataHelper getObjectsFromContext:@"Condition" :@"road" :NO :managedObjectContext];
            NSFetchRequest *allCon = [[NSFetchRequest alloc] init];
            [allCon setEntity:[NSEntityDescription entityForName:@"Condition" inManagedObjectContext:managedObjectContext]];
            NSError *error = nil;
            NSArray *conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
            [allCon release];

            for (NSManagedObject *condition in conditions) {
                [managedObjectContext deleteObject:condition];
            }
        }

    The first time this runs it deletes all objects and functions as it should, creating new objects from the XML file. I created an 'update' button that starts the exact same retrieve-and-parse process as the first run. All is well until it is time to delete the Core Data objects again: the deleteObject call raises an EXC_BAD_ACCESS error each time, and only on the second pass. See the image of the debugger window as it appears when stepping through the deletion for loop on the second iteration: conditions is the fetched array of 7 objects, and condition should be an individual condition. As you can see, 'condition' does not match any of the objects in the 'conditions' array, which is surely why I'm getting the memory access errors. I'm just not sure why this fetch (or the for loop) is returning a wrong reference. All the code that successfully performs this function on the first iteration is used in the second, but with very different results. Thanks in advance for the help!

    Read the article

  • Handling Application Logic in Multiple AsyncTask onPostExecute()s

    - by stormin986
    I have three simultaneous instances of an AsyncTask downloading three files. When two particular ones finish, at the end of onPostExecute() I check a flag set by each, and if both are true, I call startActivity() for the next Activity. I am currently seeing the activity launched twice, or something that resembles that behavior. Since the screen does that 'swipe left' kind of transition to the next activity, it sometimes does it twice (and when I hit back, it goes back to the same activity). So two instances of an activity that SHOULD only be started once are being put on the Activity stack. The only way I can see this being possible is if both AsyncTasks' onPostExecute() methods executed SO close together that they were virtually running the same lines at the same time, since I set the 'itemXdownloaded' flag to true right before I check both flags and call startActivity(). But this happens often enough that it's hard for me to believe both downloads are finishing at precisely the same time and having their onPostExecute()s run that close together. Any thoughts on what could be going on here? The general gist of the code (details removed; ignore any syntax errors I may have edited in):

        // In onPostExecute()
        switch (downloadID) {
            case DL1: dl1complete = true; break;
            case DL2: dl2complete = true; break;
            case DL3: dl3complete = true; break;
        }

        // If 1 and 2 are done, move on (DL3 still going in background)
        if ( downloadID != DL3 && dl1complete && dl2complete) {
            ParentClass.this.startActivity(new Intent(ParentClass.this, NextActivity.class));
        }

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance-tune the UI of a company web application. The application will only ever be accessed by staff, so the connection speed between server and client will always be considerably better than over the internet. I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to highlight areas worth investigating. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

        Network Utilization
          - Combine external CSS (Red warning)
          - Combine external JavaScript (Red warning)
          - Enable gzip compression (Red warning)
          - Leverage browser caching (Red warning)
          - Leverage proxy caching (Amber warning)
          - Minimise cookie size (Amber warning)
          - Parallelize downloads across hostnames (Amber warning)
          - Serve static content from a cookieless domain (Amber warning)

        Web Page Performance
          - Remove unused CSS rules (Amber warning)
          - Use normal CSS property names instead of vendor-prefixed ones (Amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this, but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance-tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • How do you fix loading plugins in eclipse 3.5.1 on linux?

    - by Jay R.
    I have two Linux boxes, both Fedora 11 x64. On one, I downloaded eclipse-java-galileo-SR1-linux-gtk-x86_64.tar.gz, unpacked it to /opt/eclipse-3.5.1/, and used the Install New Software... item to install the SVN team provider and the Polarion SVN connectors. Everything works. On the second, I copied the same Eclipse tarball over and tried to follow the same steps. When I get to installing the SVN team provider, Eclipse downloads it, claims to install it, and asks to restart. I restart and there is no SVN support. The software installer knows it's there, because I can't reinstall it without uninstalling it first. So the questions:

        - Why isn't the plugin/feature loading for the SVN team support?
        - Is there a checkbox that I forgot about that enables the plugin?
        - Is there a command line option that will force-reload all of the features on the disk?

    I've tried to install other things like FindBugs, but I get the same result. There are no messages in the log file indicating an exception or anything like that.

    Read the article

  • Forcing IE to prompt the download box

    - by Bruno Costa
    Hello, I'm having a problem with some reports in the application I'm maintaining. I have a button that does a postback to the server, does some processing, then returns to the client and opens a popup to download the report:

        private void grid_ItemCommand(object source, System.Web.UI.WebControls.DataGridCommandEventArgs e)
        {
            ...
            ClientScript.RegisterClientScriptBlock(this.GetType(), "xxx",
                "<script>javascript:window.location('xx.aspx?m=x','xxx','width=750,height=350,directories=no,location=no,menubar=no,scrollbars,status=no,toolbar=no,resizable=yes,left=50,top=50');</script>");
        }

    Then in xxx.aspx I have the code:

        Response.ClearContent();
        Response.ClearHeaders();
        Response.TransmitFile(tempFileName);
        Response.Flush();
        Response.Close();
        File.Delete(tempFileName);
        Response.End();

    This works fine if the IE option "Automatic prompting for file downloads" is enabled. But by default that option is disabled, and I need to force the download prompt to appear anyway. Can I do anything without changing a lot of code? Thanks.
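    One thing often checked in this situation (my own assumption, not something from the original post) is whether the response carries an explicit Content-Disposition: attachment header and content type; without them the browser has to guess what to do with the .aspx response. A hedged sketch with placeholder names:

        using System;
        using System.IO;
        using System.Web;

        // Hypothetical download page standing in for xxx.aspx.
        public partial class ReportDownloadPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string tempFileName = Server.MapPath("~/App_Data/report.tmp"); // placeholder path

                Response.ClearContent();
                Response.ClearHeaders();

                // Tell the browser this is an attachment with a real file name,
                // rather than letting it guess from the .aspx URL.
                Response.ContentType = "application/octet-stream";
                Response.AddHeader("Content-Disposition", "attachment; filename=\"report.pdf\"");
                Response.AddHeader("Content-Length", new FileInfo(tempFileName).Length.ToString());

                Response.TransmitFile(tempFileName);
                Response.Flush();
                Response.End(); // clean up the temp file elsewhere; code after End() will not run
            }
        }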

    Read the article

  • JavaScript: how to create a JS event that requires 2 separate JS files to be loaded first while down

    - by Teddyk
    I want to perform asynchronous JavaScript downloads of two files that have dependencies attached to them.

        // async download of jquery and gmaps
        function addScript(url) {
            var script = document.createElement('script');
            script.src = url;
            document.getElementsByTagName('head')[0].appendChild(script);
        }
        addScript('http://google.com/gmaps.js');
        addScript('http://jquery.com/jquery.js');

        // define some function dependencies
        function requiresJQuery() { ... }
        function requiresGmaps() { ... }
        function requiresBothJQueryGmaps() { ... }

        // do some work that has no dependencies on either JQuery or Google maps
        ...

        // QUESTION - pseudo code below
        // now call a function that requires Gmaps to be loaded
        if (GmapsIsLoaded) { requiresGmaps(); }

        // QUESTION - pseudo code below
        // then do something that requires both JQuery & Gmaps (or wait until they are loaded)
        if (JQueryAndGmapsIsLoaded) { requiresBothJQueryGmaps(); }

    Question: how can I create an event to indicate when:

        - jQuery is loaded?
        - Google Maps is loaded?
        - jQuery and Google Maps are both loaded?

    Read the article

  • Replace string with incremented value

    - by Andrei
    Hello, I'm trying to write a CSS parser to automatically dispatch URLs in background images to different subdomains in order to parallelize downloads. Basically, I want to replace things like

        url(/assets/some-background-image.png)

    with

        url(http://assets[increment].domain.com/assets/some-background-image.png)

    I'm using this inside a class that I eventually want to evolve into doing various CSS parsing tasks. Here are the relevant parts of the class:

        private function parallelizeDownloads() {
            static $counter = 1;
            $newURL = "url(http://assets".$counter.".domain.com";
            // The counter needs to be reset when it reaches 4,
            // in order to limit the rewrite to 4 subdomains.
            if ($counter == 4) {
                $counter = 1;
            }
            $counter++;
            return $newURL;
        }

        public function replaceURLs() {
            // This is mostly nonsense, but I know the code I'm looking for
            // looks somewhat like this. Note: $this->css contains the CSS string.
            preg_match("/url/i", $this->css, $match);
            foreach ($match as $URL) {
                $newURL = self::parallelizeDownloads();
                $this->css = str_replace($match, $newURL, $this->css);
            }
        }

    Read the article

  • webclient download problem!!!

    - by user472018
    Hello all, sorry if this problem has been discussed before. I want to download an image from a URL using the System.Net.WebClient class. When I try to download some images (e.g. the Google logo) it works without any errors, but other images do produce errors, and I don't understand why. How can I fix this problem? My code is:

        WebClient client = new WebClient();
        try
        {
            // Downloads the file from the given url to the given destination
            client.DownloadFile(urltxt.Text, filetxt.Text);
            return true;
        }
        catch (WebException w)
        {
            MessageBox.Show(w.ToString());
            return false;
        }
        catch (System.Security.SecurityException)
        {
            MessageBox.Show("securityexception");
            return false;
        }
        catch (Exception)
        {
            MessageBox.Show("exception");
            return false;
        }

    The error is:

        System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive.
        ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
        ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host.

    Thanks for your help.
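    Since some servers reset connections from clients that send no User-Agent header, one hedged experiment (an assumption on my part, not from the post) is to add a browser-like User-Agent to the WebClient before downloading:

        using System.Net;
        using System.Windows.Forms;

        // Sketch: same download, but identifying as a browser. URL/file names are placeholders.
        class ImageDownloader
        {
            public static bool Download(string url, string destination)
            {
                using (WebClient client = new WebClient())
                {
                    // Some hosts close the connection when no User-Agent is sent;
                    // sending one is a cheap thing to test before digging deeper.
                    client.Headers[HttpRequestHeader.UserAgent] =
                        "Mozilla/5.0 (compatible; ImageFetcher/1.0)";
                    try
                    {
                        client.DownloadFile(url, destination);
                        return true;
                    }
                    catch (WebException ex)
                    {
                        MessageBox.Show(ex.ToString());
                        return false;
                    }
                }
            }
        }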

    Read the article

  • How to handle building and parsing HTTP URL's / URI's / paths in Perl

    - by Robert S. Barnes
    I have a wget-like script which downloads a page and then retrieves all the files linked in img tags on that page. Given the URL of the original page and the link extracted from the img tag on that page, I need to build the URL for the image file I want to retrieve. Currently I use a function I wrote:

        sub build_url {
            my ( $base, $path ) = @_;

            # if the path is absolute just prepend the domain to it
            if ($path =~ /^\//) {
                ($base) = $base =~ /^(?:http:\/\/)?(\w+(?:\.\w+)+)/;
                return "$base$path";
            }

            my @base = split '/', $base;
            my @path = split '/', $path;

            # remove a trailing filename
            pop @base if $base =~ /[[:alnum:]]+\/[\w\d]+\.[\w]+$/;

            # check for relative paths
            my $relcount = $path =~ /(\.\.\/)/g;
            while ( $relcount-- ) {
                pop @base;
                shift @path;
            }

            return join '/', @base, @path;
        }

    The thing is, I'm surely not the first person solving this problem, and in fact it's such a general problem that I assume there must be some better, more standard way of dealing with it, using either a core module or something from CPAN - although a core module is preferable. I was thinking about File::Spec but wasn't sure if it has all the functionality I would need.

    Read the article

  • Integer Extensions - 1st, 2nd, 3rd etc [closed]

    - by David Schiefer
    Possible duplicate: NSNumberFormatter and ‘th’ ‘st’ ‘nd’ ‘rd’ (ordinal) number endings. Hello, I'm building an application that downloads player ranks and displays them. So, say for example you're 3rd out of all the players: I added a condition that displays it as 3rd, not 3th, and I did the same for 2nd and 1st. When getting to higher ranks though, such as 2883rd, it'll display 2883th (for obvious reasons). My question is, how can I reformat the number to XXX1st, XXX2nd, XXX3rd etc.? To show what I mean, here's how I format my number to add "rd" if it's 3:

        if ([[container stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]] isEqualToString:@"3"]) {
            NSString *badge = [NSString stringWithFormat:@"%@rd",
                [container stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]];
            NSString *scoreText = [NSString stringWithFormat:@"ROC Server Rank: %@rd",
                [container stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]];
            profile.badgeValue = badge;
            rank.text = scoreText;
        }

    I can't do this for every number up to 2000 (there are 2000 ranks in total) - what can I do to solve this problem?
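    For illustration only - and in C# rather than the Objective-C used above - the general rule behind ordinal suffixes is: values ending in 11-13 take "th", otherwise the last digit picks "st"/"nd"/"rd"/"th". A minimal sketch of that rule:

        using System;

        static class Ordinal
        {
            // Returns "1st", "2nd", "3rd", "4th", ..., "2883rd", handling 11-13 specially.
            public static string Format(int rank)
            {
                int lastTwo = rank % 100;
                int lastOne = rank % 10;

                string suffix;
                if (lastTwo >= 11 && lastTwo <= 13)
                    suffix = "th";          // 11th, 12th, 13th, 111th, ...
                else if (lastOne == 1)
                    suffix = "st";
                else if (lastOne == 2)
                    suffix = "nd";
                else if (lastOne == 3)
                    suffix = "rd";
                else
                    suffix = "th";

                return rank + suffix;
            }

            static void Main()
            {
                Console.WriteLine(Format(3));     // 3rd
                Console.WriteLine(Format(12));    // 12th
                Console.WriteLine(Format(2883));  // 2883rd
            }
        }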

    Read the article

  • How to display video on html page in firefox

    - by Manish
    I did look at this question, but the asker didn't get any reply. Still, I'm giving it a try. I want to embed a video file in an HTML page. The code works fine in IE but doesn't work in Firefox. The code:

        <object id="WMPlay" width="640" height="480"
                classid="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6"
                codebase="http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,70"
                standby="Loading Microsoft Windows Media Player components..."
                type="application/x-oleobject">
            <param name="URL" value="XYZ.wma" />
            <param name="AutoStart" value="false" />
            <embed name="WMplay" width="640" height="480"
                   type="application/x-mplayer2"
                   pluginspage="http://www.microsoft.com/Windows/Downloads/Contents/Products/MediaPlayer/"
                   src="XYZ.wma"
                   allowchangedisplaysize="True" showcontrols="1" autostart="false"
                   showdisplay="1" showstatusbar="1">
            </embed>
        </object>

    Can someone please tell me what I am missing? Or a better solution - something browser-independent... :)

    Read the article

  • Images from url to listview

    - by Andres
    I have a ListView in which I show video results from YouTube. Everything works fine, but one thing I noticed is that it seems a bit slow, and it might be due to my code. Are there any suggestions on how I can make this better? Maybe loading the images directly from the URL instead of using a WebClient? I am adding the ListView items in a loop from the video feeds returned by a query using the YouTube API. The piece of code which I think is slowing it down is this:

        Feed<Video> videoFeed = request.Get<Video>(query);
        int i = 0;
        foreach (Video entry in videoFeed.Entries)
        {
            string[] info = printVideoEntry(entry).Split(',');

            WebClient wc = new WebClient();
            wc.DownloadFile(@"http://img.youtube.com/vi/" + info[0].ToString() + "/hqdefault.jpg", info[0].ToString() + ".jpg");

            string[] row1 = { "", info[0].ToString(), info[1].ToString() };
            ListViewItem item = new ListViewItem(row1, i);
            YoutubeList.Items.Add(item);

            imageListSmall.Images.Add(Bitmap.FromFile(info[0].ToString() + @".jpg"));
            imageListLarge.Images.Add(Bitmap.FromFile(info[0].ToString() + @".jpg"));
        }

        public static string printVideoEntry(Video video)
        {
            return video.VideoId + "," + video.Title;
        }

    As you can see, I use a WebClient to download the images so I can then use them in my ListView. It works, but what I'm concerned about is speed. Any suggestions? Maybe a different control altogether?
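    As a sketch of the "load directly from the URL" idea mentioned above (an illustration under that assumption, not the YouTube API's own mechanism), the thumbnail can be downloaded into memory and handed to the ImageLists without the intermediate .jpg on disk:

        using System.Drawing;
        using System.IO;
        using System.Net;

        static class ThumbnailLoader
        {
            // Downloads a video thumbnail straight into a Bitmap, skipping the temp file.
            public static Image DownloadThumbnail(string videoId)
            {
                using (WebClient wc = new WebClient())
                {
                    byte[] bytes = wc.DownloadData("http://img.youtube.com/vi/" + videoId + "/hqdefault.jpg");
                    using (MemoryStream ms = new MemoryStream(bytes))
                    {
                        // Copy into a new Bitmap so the stream can be disposed safely;
                        // GDI+ otherwise keeps the source stream alive with the Image.
                        return new Bitmap(Image.FromStream(ms));
                    }
                }
            }
        }

    In the loop, imageListSmall.Images.Add(ThumbnailLoader.DownloadThumbnail(info[0])) would then replace the DownloadFile/FromFile pair; moving the downloads off the UI thread would be the next step if the list still stalls while loading.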

    Read the article

  • Compiling my Boost/NTL program with c++ on Linux.

    - by Martin Lauridsen
    Hi SO, I wrote a client program and a server program that use the NTL library and Boost::Asio to do client/server communication for an integer-factorization application in C++. Both sides consist of several headers and .cpp files, and both projects compile fine individually on Windows in Visual Studio. All I did was add the include paths of NTL and Boost to both projects:

        Additional include paths: "D:\Downloads\WinNTL-5_5_2\include";D:\boost_1_42_0

    Furthermore, for both projects, I added the two library paths:

        Additional library directories: D:\boost_1_42_0\stage\lib;"D:\Documents\Visual Studio 2008\Projects\ntl\Debug"

    and added ntl.lib under Additional dependencies. As said, it compiles fine on Windows. But when I put the code on a Linux machine provided by the university, I try to compile with the following command:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include \
            -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include \
            client_protocol.cpp mpqs_client.cpp mpqs_sieve.cpp mpqs_helper.cpp \
            -o mpqs_helper \
            -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl \
            -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm \
            -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -static

    Upon doing this, I get a huge error, which I posted here. Any idea how to fix this, please?

    Read the article

  • Downloading a webpage in C# example

    - by Chris
    I am trying to understand some example code on this web page (http://www.csharp-station.com/HowTo/HttpWebFetch.aspx) that downloads a file from the internet. The piece of code quoted below loops, getting chunks of data and appending them to a string until all the data has been downloaded. As I understand it, "count" contains the size of the downloaded chunk, and the loop runs until count is 0 (an empty chunk of data is downloaded). My question is: isn't it possible that count could be 0 without the file being completely downloaded? Say the network connection is interrupted; the stream may not have any data to read on a pass of the loop, count would be 0, and the download would end prematurely. Or does resStream.Read block the program until it gets data? Is this the correct way to save a stream?

        int count = 0;
        do
        {
            // fill the buffer with data
            count = resStream.Read(buf, 0, buf.Length);

            // make sure we read some data
            if (count != 0)
            {
                // translate from bytes to ASCII text
                tempString = Encoding.ASCII.GetString(buf, 0, count);

                // continue building the string
                sb.Append(tempString);
            }
        } while (count > 0); // any more data to read?
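    As far as I know, Stream.Read blocks until at least one byte is available or the stream ends, and a dropped connection surfaces as an exception rather than a quiet 0, so count == 0 really does mean end of data. A self-contained sketch of the same loop with explicit disposal (the URL is just a placeholder):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class PageFetcher
        {
            // Same read loop as above, wrapped in using blocks so the response
            // and stream are closed even if an exception interrupts the download.
            static string Fetch(string url)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                StringBuilder sb = new StringBuilder();
                byte[] buf = new byte[8192];

                using (WebResponse response = request.GetResponse())
                using (Stream resStream = response.GetResponseStream())
                {
                    int count;
                    while ((count = resStream.Read(buf, 0, buf.Length)) > 0)
                    {
                        sb.Append(Encoding.ASCII.GetString(buf, 0, count));
                    }
                }
                return sb.ToString();
            }

            static void Main()
            {
                Console.WriteLine(Fetch("http://www.csharp-station.com/").Length);
            }
        }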

    Read the article

  • How to rename many files url escaped (%XX) to human readable form

    - by F. Hauri
    I have downloaded a lot of files into one directory, but many of them are stored with URL-escaped filenames containing percent signs followed by two hexadecimal chars, like:

        ls -ltr $HOME/Downloads/
        -rw-r--r-- 2 user user 13171425 24 nov 10:07 Swisscom%20Mobile%20Unlimited%20Kurzanleitung-%282011-05-12%29.pdf
        -rw-r--r-- 2 user user 1525794 24 nov 10:08 31010ENY-HUAWEI%20E173u-1%20HSPA%20USB%20Stick%20Quick%20Start-%28V100R001_01%2CEnglish%2CIndia-Reliance%2CC%2Ccolor%29.pdf
        ...

    All these names match the same form with exactly 3 parts: the name of the object, then "-( revision and/or date ... )" which is useless, then the extension. My goal is to have one command that renames all these files to obtain:

        -rw-r--r-- 2 user user 13171425 24 nov 10:07 Swisscom_Mobile_Unlimited_Kurzanleitung.pdf
        -rw-r--r-- 2 user user 1525794 24 nov 10:08 31010ENY-HUAWEI_E173u-1_HSPA_USB_Stick_Quick_Start.pdf

    I've successfully done the job in pure bash with:

        urlunescape() {
            local srce="$1" done=false part1 newname ext
            while ! $done ; do
                part1="${srce%%%*}"
                newname="$part1\\x${srce:${#part1}+1:2}${srce:${#part1}+3}"
                [ "$part1" == "$srce" ] && done=true || srce="$newname"
            done
            newname="$(echo -e $srce)"
            ext=${newname##*.}
            newname="${newname%-(*}"
            echo ${newname// /_}.$ext
        }

        for file in *; do
            mv -i "$file" "$(urlunescape "$file")"
        done

    or using sed, tr, bash ... and sed:

        for file in *; do
            echo -e $( echo $file | sed 's/%\(..\)/\\x\1/g' ) | sed 's/-(.*\.\([^\.]*\)$/.\1/' | tr \ \\n _\\0 | xargs -0 mv -i "$file"
        done

    Both produce the renamed listing shown above. But I'm sure there must be a simpler and/or shorter way to do this.

    Read the article

  • Installing Plugins from Cloud p2 repository in Eclipse IDE

    - by user1495036
    I have recently been reading a lot about p2 for a requirement of mine. Most of the p2 documentation online is about p2 for RCP; my requirement is for a plugin repo. I have a plugin that is used within the Eclipse IDE. I don't want to change the repo location, but based on the Eclipse version, when the user runs Install New Software or Check for Updates it needs to download the corresponding plugins. My repo currently contains all the plugins for all the versions, but at the moment I need to give the user a different URL for each version. For example, I am using Eclipse 3.7 (Indigo) and install the plugin through Install New Software by adding the p2 repo URL. Now the user decides, for some requirement, to move to Eclipse 3.6; I want him to connect to the same p2 repo URL and download the plugins built for Eclipse 3.6. This is definitely possible using p2 Discovery, and I could categorize the downloads using a composite repository, but I don't want to do either of those. I just want to know whether there is any API I can hook into, so that before the URL is processed and updates are found, I can check the Eclipse version and redirect, based on the version, to an internal URL. This is possible in RCP; I want to know if I can do it in the Eclipse p2 UI. All the p2 UI classes look to be internal. Any directions would be appreciated. Malai

    Read the article

  • How to provide i18n service for developer and end user

    - by user247245
    Many Android applications have quite poor i18n support, and for an understandable reason: it adds a lot of work for the developer. From both an intuitive and a cultural point of view, it would be a good thing if end users could translate the apps themselves and share the translation over the air, without reinstalling the app itself. The concept is like Wikipedia: some add content easily, others only use what's there. It's of course important that the service is as easy as possible to use, both for app developers and for people willing to contribute translations. To keep it simple, this is the solution I'm considering.

    Developer perspective:
        - The developer uses a customized setContentView when opening activities/layouts, which searches for translations of xml entries (below).
        - The customized version is provided as a free downloadable library/class..., turning the i18n feature into more or less a one-liner.

    User perspective:
        - The user downloads the app without any translation.
        - As the app launches, it checks the locale running on the phone and looks for a translated xml file in shared space on the SD card.
        - If there is no transcribed xml, or it is old (above), it tries to download a new one from the internet service (async). This is all done by the library above; no need for intents.

    Translator perspective:
        - A separate app provides translations for any app using the i18n service above. (Could be just a webapp), with some form of QA on translators/input.

    QUESTION: Now, for this to work efficiently, it has to be as easy as possible for the developer to even bother, and the most fluent solution would be a customized version of setContentView that simply loads the translated values from an external xml instead of the ones in the apk. Is this possible at all, and if not, what are your suggested solutions? (And of course, Happy New Year, feliz ano novo, blwyddyn newydd dda, Gott Nytt År, kontan ane nouvo, szczesliwego nowego roku ...) Regards, /T

    Read the article

  • Updating iOS application content which include images

    - by azamsharp
    I am working on a vegetable gardening application. Apart from the vegetable name and description, I also have a vegetable image. Currently I have all the images in the Supporting Files group of the Xcode project. Later on, though, I want to update the application's content dynamically without making the user download a new version. When the user updates the application or downloads new data from the server, that data will include the images. Can I store those images in the Supporting Files folder, or somewhere they can be referenced by just a name? RELATED QUESTION: I will also allow the user to take pictures of their vegetables and then write notes about them, like "just planted", "about to harvest" etc. What is the recommended approach for storing those pictures/photos? I can always store them in the user's photo library, keep the reference in the local database, and then fetch and display the picture using the reference. The problem with that approach might be that if the user accidentally deletes the picture from the library, it will no longer be displayed in my application. The only other way I see is to store the picture in the app's local database as a BLOB.

    Read the article

  • C# threading solution for long queries

    - by Eddie
    Scenario: we have an application that records incidents. An external database needs to be queried when an incident is approved by a supervisor. The queries to this external database sometimes take a while to run, and that lag is experienced through the browser.

    Possible solution: I want to use threading to eliminate the apparent hang in the browser. I have used the Thread class before and heard about ThreadPool, but I just found BackgroundWorker in this post. MSDN states:

        The BackgroundWorker class allows you to run an operation on a separate, dedicated thread.
        Time-consuming operations like downloads and database transactions can cause your user
        interface (UI) to seem as though it has stopped responding while they are running. When you
        want a responsive UI and you are faced with long delays associated with such operations, the
        BackgroundWorker class provides a convenient solution.

    Is BackgroundWorker the way to go when handling long-running queries? What happens when 2 or more BackgroundWorker processes are run simultaneously - are they handled like a pool?
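    For reference, a minimal BackgroundWorker sketch of the pattern the MSDN quote describes (a desktop-style example with placeholder names; whether it fits an ASP.NET page lifecycle is exactly the open question here):

        using System.ComponentModel;

        class IncidentApprover
        {
            private readonly BackgroundWorker worker = new BackgroundWorker();

            public IncidentApprover()
            {
                // DoWork runs on a thread-pool thread; RunWorkerCompleted is raised
                // back on the thread that started the worker (the UI thread in WinForms).
                worker.DoWork += (sender, e) =>
                {
                    int incidentId = (int)e.Argument;
                    e.Result = QueryExternalDatabase(incidentId); // the slow part
                };
                worker.RunWorkerCompleted += (sender, e) =>
                {
                    if (e.Error == null)
                    {
                        // update the incident record / UI with e.Result here
                    }
                };
            }

            public void ApproveAsync(int incidentId)
            {
                worker.RunWorkerAsync(incidentId);
            }

            private static string QueryExternalDatabase(int incidentId)
            {
                // placeholder for the long-running external query
                return "result for incident " + incidentId;
            }
        }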

    Read the article

  • How to diagnose and fix such an "on-site" crash of a .NET application?

    - by Dmitriy Matveev
    Hello! I'm working on an application which has an auto-update function. The implemented idea is simple:

        - There is a "starter" application which is installed to "Program Files/whatever/...". It is the application the user actually launches.
        - Each time the "starter" application is executed, it checks the server for updates, downloads them to "%APPDATA%/some/...", and then starts the real application from that folder.

    The above approach works on my development machine (running Vista) and on some other machines under XP, but on one machine running Windows 7 it isn't working. When the "starter" executes the real application, it crashes with an unhelpful failure (Signature = System.UnauthorizedAccess). When the real application is started manually from the %APPDATA%/some/ folder, everything works fine. I've tried setting the same working directory in ProcessStartInfo, so the "starter" also executes the real application in that folder, but this hasn't helped. How can I diagnose and/or fix this issue?
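    For reference, a minimal sketch of the launch step described above (the folder and executable names are placeholders; setting WorkingDirectory is the experiment already mentioned, shown here only to make the setup concrete):

        using System;
        using System.Diagnostics;
        using System.IO;

        class Starter
        {
            static void Main()
            {
                // Placeholder location standing in for "%APPDATA%/some/...".
                string appDir = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    "some");
                string exePath = Path.Combine(appDir, "RealApp.exe"); // hypothetical name

                var psi = new ProcessStartInfo(exePath)
                {
                    WorkingDirectory = appDir, // run the real app from its own folder
                    UseShellExecute = true     // let the shell perform the launch
                };
                Process.Start(psi);
            }
        }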

    Read the article
