Search Results

Search found 12625 results on 505 pages for 'ajax upload'.


  • NAS device for distributed team

    - by user5959
    We are a distributed team spread across 5 locations. We have a shared drive (1 TB of data) at our former location that we currently access via a Hamachi VPN. The shared drive is a network folder on a Windows Server located at one of our locations, and the current connection speed is terrible: the upload speed at the drive's location is very slow. We are looking for a NAS device that we can host at another location with better upload speed, and that all of us can access. The device should have these features:
    - Minimal maintenance, as we do not have dedicated IT resources.
    - Access to the data on the device from multiple locations.
    - Ability to create a network drive (Map Network Drive on Windows computers).
    - Uploads from arbitrary client computers without having to install software (right now, we use LogMeIn Rescue's file manager).
    - Ability to handle slow or dropped connections when transferring files (maximum file size 1.5 GB).

  • Is there a way to set a handler function for when a set of events has happened in JavaScript?

    - by allyourcode
    E.g. I have two concurrent AJAX requests, and I need the results from both to compute a third result. I'm using the Prototype library, so it might look something like this:

        var r1 = new Ajax.Request(url1, ...);
        var r2 = new Ajax.Request(url2, ...);

        function on_both_requests_complete(resp1, resp2) {
            ...
        }

    One way would be to use polling, but I'm thinking there must be a better way.
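
    One answer that fits this question (a minimal sketch, reusing the url1, url2, and on_both_requests_complete names from the snippet above) is a completion counter: each request's onSuccess handler stores its response and decrements a shared counter, and the combined handler fires when the counter reaches zero. No polling is needed.

        var results = {};
        var pending = 2;

        function requestDone() {
            pending--;
            if (pending === 0) {
                // both responses have arrived; compute the third result
                on_both_requests_complete(results.r1, results.r2);
            }
        }

        new Ajax.Request(url1, {
            onSuccess: function (resp) { results.r1 = resp; requestDone(); }
        });
        new Ajax.Request(url2, {
            onSuccess: function (resp) { results.r2 = resp; requestDone(); }
        });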

  • jQuery toggle() with unknown initial state

    - by Jason Morhardt
    I have a project that I am working on that uses a little image to mark a record as a favorite on multiple rows in a table. The data gets pulled from a DB, and the image is based on whether or not that item is a favorite: one image for a favorite, a different image if not. I want the user to be able to toggle the image and make it a favorite or not. Here's my code:

        $(function () {
            $('.FavoriteToggle').toggle(
                function () {
                    $(this).find("img").attr({src: "../../images/icons/favorite.png"});
                    var ListText = $(this).find('.FavoriteToggleIcon').attr("title");
                    var ListID = ListText.match(/\d+/);
                    $.ajax({
                        url: "include/AJAX.inc.php",
                        type: "GET",
                        data: "action=favorite&ItemType=0&ItemID=" + ListID,
                        success: function () {}
                    });
                },
                function () {
                    $(this).find("img").attr({src: "../../images/icons/favorite_not.png"});
                    var ListText = $(this).find('.FavoriteToggleIcon').attr("title");
                    var ListID = ListText.match(/\d+/);
                    $.ajax({
                        url: "include/AJAX.inc.php",
                        type: "GET",
                        data: "action=favorite&ItemType=0&ItemID=" + ListID,
                        success: function () {}
                    });
                }
            );
        });

    This works great if the initial state is not a favorite, but you have to double-click to get the image to change if it IS a favorite initially. This causes the AJAX to fire twice, essentially making the record a favorite and then not a favorite before the image responds. The user thinks he's made it a favorite because the image changed, but in fact it's not. Can anybody help?
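
    One common fix (a minimal sketch, assuming the favorite state can be inferred from the image src currently showing) is to drop toggle(), whose two handlers always fire in a fixed order regardless of the row's state, and use a single click handler that branches on the current state instead:

        $(function () {
            $('.FavoriteToggle').click(function () {
                var img = $(this).find("img");
                // infer the current state from the image being displayed
                var isFavorite = img.attr("src").indexOf("favorite_not") === -1;
                img.attr("src", isFavorite
                    ? "../../images/icons/favorite_not.png"
                    : "../../images/icons/favorite.png");
                var ListID = $(this).find('.FavoriteToggleIcon').attr("title").match(/\d+/);
                $.ajax({
                    url: "include/AJAX.inc.php",
                    type: "GET",
                    data: "action=favorite&ItemType=0&ItemID=" + ListID
                });
                return false;
            });
        });

    This way the first click always does the right thing, whatever the initial state of the row.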

  • Using jQuery, how do you mimic the form serialization for a select with multiple options selected in

    - by CarolinaJay65
    Below is my $.ajax call. How do I put a select's (multiple) selected values in the data section?

        $.ajax({
            type: "post",
            url: "http://myServer",
            dataType: "text",
            data: {
                'service': 'myService',
                'program': 'myProgram',
                'start': start,
                'end': end
            },
            success: function (request) {
                result.innerHTML = request;
            } // End success
        }); // End ajax method

    EDIT: I should have included that I understand how to loop through the select's selected options with this code:

        $('#userid option').each(function(i) {
            if (this.selected == true) {

    but how do I fit that into my data: section?
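
    One answer (a minimal sketch; the #userid id is taken from the edit above) is to skip the loop entirely: for a <select multiple>, jQuery's .val() returns an array of the selected values, and an array can be passed straight into the data object:

        $.ajax({
            type: "post",
            url: "http://myServer",
            dataType: "text",
            // jQuery 1.4+: serialize the array as userid=1&userid=2&...
            traditional: true,
            data: {
                'service': 'myService',
                'program': 'myProgram',
                'start': start,
                'end': end,
                'userid': $('#userid').val() // array of selected values
            },
            success: function (request) {
                result.innerHTML = request;
            }
        });

    Whether you want the traditional: true setting depends on the parameter format your server expects.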

  • How can I tell when an FTP is complete?

    - by identry
    I have a cron job that processes files that my clients upload via FTP to my FreeBSD server. The cron job runs once an hour, and normally processing each file only takes a few seconds. The cron job looks in the client's upload directory and moves any new files to a tmp directory. It then processes the file(s) and moves them into a final directory, where they are available to the public through a website. The problem is, every once in a while the cron job runs just as a new file is being uploaded. It moves the half-uploaded file to the tmp directory, tries to process it, and fails, of course. Question: how can I determine whether the uploaded file is complete? The only thing I can think of is checking the file size to see if it's changing, but that seems like a kludge. Is there some sort of flag or something that is set when the upload is complete?
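
    FTP itself exposes no "upload complete" flag to the file system, so the usual workarounds are either to have clients upload to a temporary name and rename on completion, or to treat a file as complete only once its size stops changing between two polls. A minimal sketch of the second idea (written in Node.js purely for illustration, with a hypothetical path; the same logic ports to whatever language the cron job uses):

        // Treat a file as complete only if its size is unchanged
        // across two polls a few seconds apart.
        var fs = require('fs');

        function isUploadComplete(path, waitMs, callback) {
            var sizeBefore = fs.statSync(path).size;
            setTimeout(function () {
                var sizeAfter = fs.statSync(path).size;
                callback(sizeBefore === sizeAfter);
            }, waitMs);
        }

        isUploadComplete('/home/client/upload/data.csv', 5000, function (done) {
            if (done) {
                // safe to move the file to the tmp directory and process it
            }
        });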

  • Serializing to JSON in jQuery

    - by Herb Caudill
    I know how to serialize an object to JSON in ASP.NET Ajax, but I'm trying to do things on the client in a less Microsoft-specific way. I'm using jQuery. Is there a "standard" way to do this? My specific situation: I have an array defined something like this:

        var countries = new Array();
        countries[0] = 'ga';
        countries[1] = 'cd';
        ...

    and I need to turn this into a string to pass to $.ajax() like this:

        $.ajax({
            type: "POST",
            url: "Concessions.aspx/GetConcessions",
            data: "{'countries':['ga','cd']}",
            ...

    Edit (clarification): I realize there are a number of JSON libraries out there, but I'd like to avoid introducing a new dependency (if I'm going to do that, I might as well use ASP.NET Ajax's built-in JSON serializer).
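
    A dependency-free answer (a minimal sketch; note that native JSON support is missing in older browsers, which is exactly the gap Crockford's json2.js fills with the same API) is the built-in JSON.stringify():

        var countries = ['ga', 'cd'];

        $.ajax({
            type: "POST",
            url: "Concessions.aspx/GetConcessions",
            contentType: "application/json; charset=utf-8",
            // JSON.stringify produces {"countries":["ga","cd"]}
            data: JSON.stringify({ countries: countries }),
            success: function (response) {
                // handle the result
            }
        });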

  • [jquery] Different function for same class on 'click' / 'dblclick'

    - by Shishant
    Hello. These are my two functions. On single click it works fine, but on dblclick both functions execute. Any idea? I tried using live instead of delegate, but still both functions execute on dblclick.

        // Change status on click
        $(".todoBox").delegate("li", "click", function () {
            var id = $(this).attr("id");
            $.ajax({
                // ajax stuff
            });
            return false;
        });

        // Double click to delete
        $(".todoBox").delegate("li", "dblclick", function () {
            var id = $(this).attr("id");
            $.ajax({
                // ajax stuff
            });
            return false;
        });
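
    This is expected browser behaviour: a dblclick is always preceded by two click events, so the click handler has to wait long enough to be sure no second click is coming. A minimal sketch using a timer (the 300 ms delay is an arbitrary assumption, tune it to taste):

        var clickTimer = null;

        $(".todoBox").delegate("li", "click", function () {
            var id = $(this).attr("id");
            clearTimeout(clickTimer);
            // wait to see whether a second click turns this into a dblclick
            clickTimer = setTimeout(function () {
                $.ajax({
                    // change-status ajax stuff, using id
                });
            }, 300);
            return false;
        });

        $(".todoBox").delegate("li", "dblclick", function () {
            var id = $(this).attr("id");
            // cancel the pending single-click action
            clearTimeout(clickTimer);
            $.ajax({
                // delete ajax stuff
            });
            return false;
        });

    The tradeoff is that the single-click action is delayed by the timeout.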

  • Metro: Promises

    - by Stephen.Walther
    The goal of this blog entry is to describe the Promise class in the WinJS library. You can use promises whenever you need to perform an asynchronous operation, such as retrieving data from a remote website or a file from the file system. Promises are used extensively in the WinJS library.

    Asynchronous Programming

    Some code executes immediately; some code requires time to complete or might never complete at all. For example, retrieving the value of a local variable is an immediate operation. Retrieving data from a remote website takes longer, or might not complete at all. When an operation might take a long time to complete, you should write your code so that it executes asynchronously. Instead of waiting for an operation to complete, you should start the operation and then do something else until you receive a signal that the operation is complete.

    An analogy: some telephone customer service lines require you to wait on hold, listening to really bad music, until a customer service representative is available. This is synchronous programming and very wasteful of your time. Some newer customer service lines enable you to enter your telephone number so a representative can call you back when one becomes available. This approach is much less wasteful of your time, because you can do useful things while waiting for the callback.

    There are several patterns that you can use to write code which executes asynchronously. The most popular pattern in JavaScript is the callback pattern. When you call a function which might take a long time to return a result, you pass a callback function to it. For example, the following code (which uses jQuery) includes a function named getFlickrPhotos which returns photos from the Flickr website matching a set of tags (such as "dog" and "funny"):

        function getFlickrPhotos(tags, callback) {
            $.getJSON(
                "http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?",
                { tags: tags, tagmode: "all", format: "json" },
                function (data) {
                    if (callback) {
                        callback(data.items);
                    }
                }
            );
        }

        getFlickrPhotos("funny, dogs", function (data) {
            $.each(data, function (index, item) {
                console.log(item);
            });
        });

    The getFlickrPhotos() function includes a callback parameter. When you call getFlickrPhotos(), you pass a function to the callback parameter which gets executed when getFlickrPhotos() finishes retrieving the list of photos from the Flickr web service. In the code above, the callback function simply iterates through the results and writes each result to the console. Using callbacks is a natural way to perform asynchronous programming with JavaScript. Instead of waiting for an operation to complete, sitting there and listening to really bad music, you can get a callback when the operation is complete.

    Using Promises

    The CommonJS website defines a promise like this (http://wiki.commonjs.org/wiki/Promises): "Promises provide a well-defined interface for interacting with an object that represents the result of an action that is performed asynchronously, and may or may not be finished at any given point in time. By utilizing a standard interface, different components can return promises for asynchronous actions and consumers can utilize the promises in a predictable manner."

    A promise provides a standard pattern for specifying callbacks. In the WinJS library, when you create a promise, you can specify three callbacks: a complete callback, a failure callback, and a progress callback. Promises are used throughout the WinJS library: the methods in the animation library, the control library, and the binding library all use promises. For example, the xhr() method included in the WinJS base library returns a promise. The xhr() method wraps calls to the standard XmlHttpRequest object in a promise. The following code illustrates how you can use the xhr() method to perform an Ajax request which retrieves a file named photos.txt:

        var options = { url: "/data/photos.txt" };

        WinJS.xhr(options).then(
            function (xmlHttpRequest) {
                console.log("success");
                var data = JSON.parse(xmlHttpRequest.responseText);
                console.log(data);
            },
            function (xmlHttpRequest) {
                console.log("fail");
            },
            function (xmlHttpRequest) {
                console.log("progress");
            }
        );

    The WinJS.xhr() method returns a promise. The Promise class includes a then() method which accepts three callback functions: a complete callback, an error callback, and a progress callback:

        Promise.then(completeCallback, errorCallback, progressCallback)

    In the code above, three anonymous functions are passed to the then() method. The three callbacks simply write a message to the JavaScript Console; the complete callback also dumps all of the data retrieved from the photos.txt file.

    Creating Promises

    You can create your own promises by creating a new instance of the Promise class. The constructor for the Promise class requires a function which accepts three parameters: a complete, an error, and a progress function parameter. For example, the code below illustrates how you can create a method named wait10Seconds() which returns a promise. The progress function is called every second, and the complete function is not called until 10 seconds have passed:

        (function () {
            "use strict";

            var app = WinJS.Application;

            function wait10Seconds() {
                return new WinJS.Promise(function (complete, error, progress) {
                    var seconds = 0;
                    var intervalId = window.setInterval(function () {
                        seconds++;
                        progress(seconds);
                        if (seconds > 9) {
                            window.clearInterval(intervalId);
                            complete();
                        }
                    }, 1000);
                });
            }

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    wait10Seconds().then(
                        function () { console.log("complete") },
                        function () { console.log("error") },
                        function (seconds) { console.log("progress:" + seconds) }
                    );
                }
            }

            app.start();
        })();

    All of the work happens in the constructor function for the promise. The window.setInterval() method is used to execute code every second. Every second, the progress() callback is called. After 10 seconds have passed, the clearInterval() method is called and the complete() callback is invoked. When you execute the code above, you can see the output in the Visual Studio JavaScript Console.

    Creating a Timeout Promise

    In the previous section, we created a custom promise which uses the window.setInterval() method to complete the promise after 10 seconds. We really did not need to create a custom promise, because the Promise class already includes a static method for returning promises which complete after a certain interval. The code below illustrates how you can use the timeout() method, which returns a promise that completes after a certain number of milliseconds:

        WinJS.Promise.timeout(3000).then(
            function () { console.log("complete") },
            function () { console.log("error") },
            function () { console.log("progress") }
        );

    In the code above, the promise completes after 3 seconds (3000 milliseconds). The promise returned by the timeout() method does not support progress events, so the only message written to the console is the message "complete" after 3 seconds.

    Canceling Promises

    Some promises, but not all, support cancellation. When you cancel a promise, the promise's error callback is executed. For example, the following code uses the WinJS.xhr() method to perform an Ajax request; however, immediately after the Ajax request is made, the request is cancelled:

        // Specify Ajax request options
        var options = { url: "/data/photos.txt" };

        // Make the Ajax request
        var request = WinJS.xhr(options).then(
            function (xmlHttpRequest) {
                console.log("success");
            },
            function (xmlHttpRequest) {
                console.log("fail");
            },
            function (xmlHttpRequest) {
                console.log("progress");
            }
        );

        // Cancel the Ajax request
        request.cancel();

    When you run the code above, the message "fail" is written to the Visual Studio JavaScript Console.

    Composing Promises

    You can build promises out of other promises; in other words, you can compose promises. There are two static methods of the Promise class which you can use to compose promises: the join() method and the any() method. When you join promises, a promise is complete when all of the joined promises are complete. When you use the any() method, a promise is complete when any of the promises complete. The following code illustrates how to use the join() method. A new promise is created out of two timeout promises, and the new promise does not complete until both of the timeout promises complete:

        WinJS.Promise.join([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)])
            .then(function () { console.log("complete"); });

    The message "complete" will not be written to the JavaScript Console until both promises passed to the join() method complete, which takes 5 seconds (5,000 milliseconds). The any() method, by contrast, completes when any promise passed to it completes:

        WinJS.Promise.any([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)])
            .then(function () { console.log("complete"); });

    The code above writes the message "complete" to the JavaScript Console after 1 second (1,000 milliseconds), immediately after the first promise completes and before the second promise completes.

    Summary

    The goal of this blog entry was to describe WinJS promises. First, we discussed how promises enable you to easily write code which performs asynchronous actions, and you learned how to use a promise when performing an Ajax request. Next, we discussed how you can create your own promises by creating a constructor function with complete, error, and progress parameters. Finally, you learned about several advanced methods of promises: the timeout() method for creating promises which complete after an interval of time, canceling promises, and composing promises from other promises.

  • Uploading.to Uploads Files to Multiple File Hosts Simultaneously

    - by Jason Fitzpatrick
    If you're looking to quickly share a file across a variety of file hosting services, Uploading.to makes it a cinch to share up to 10 files across 14 hosts. The upload process is simple: visit Uploading.to, select your files, check the hosts you want to share the file across (by default all 14 are checked), add a description to the collection, and hit the Upload button. Uploading.to will upload your file to the various hosts; during the process you'll see which hosts are confirmed and which have failed. We had 2 failures among the 14 hosts, which still left the file mirrored across a sizable 12-host spread, not bad at all. When you're ready to share the file, hit the Copy Link button at the bottom of the screen and share it with your friends. They'll be directed to Uploading.to and will be able to select from any of the hosts the file was successfully mirrored across. Uploading.to is a free service and requires no registration. Uploading.to [via Addictive Tips]

  • Ubuntu One Sync as a File Backup Solution?

    - by Jeff
    I was hoping to utilize Ubuntu One, and in particular the syncing feature within Ubuntu One, to provide offsite backup for some of my files. My intention was to mark any of my folders that have important files as 'folders to synchronize' to Ubuntu One. It works great in that whenever an important file is placed in the folder, the file is copied up to Ubuntu One (hence creating a backup). However, if any of these important files is lost or accidentally deleted from my computer, then due to the synchronization it is also immediately deleted from Ubuntu One. This approach does not work very well as a backup. On one hand I really like the automatic way the sync feature uploads any of my important files to Ubuntu One, but on the other hand, if I lose a file on my computer it will likely be taken off the cloud as well (via synchronization). What approach are others taking to back up their important files to Ubuntu One? I don't want to have to manually upload my important files to Ubuntu One and remember to upload other important files as they are created on my computer. Your thoughts and suggestions are greatly appreciated.

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

  • Minimum Baseline for Extended Support: Are You Ready?

    - by gadi.chen
    As you all know, Premier Support for 11i ended last year. Extended Support is available, but to use it you are advised to be at a minimum patch level, as described in note 883202.1. So how can you know whether you are at the minimum patch level? There are a few ways:
    - Patch Wizard
    - Contact Oracle Support
    - Upload to me

    Patch Wizard: easy to use and very intuitive; requires installing patch 9803629. Check MOS note 1178133.1.

    Oracle Support: you can log an SR through My Oracle Support, a.k.a. MOS.

    Upload to me: run the simple SQL statement below (or download it from here: patchinfo.sql) via SQL*Plus and upload/mail me the output, and I will mail you back, as soon as I can, a detailed report of the patches required to be installed in order to be supported.

    Gadi Chen
    Oracle Core Technology Consultant
    [email protected]

        -------------------------------- Start from Here ---------------------------------
        set pagesize 0 echo off feedback off trimspool on timing off
        col prod format a8
        col patchset format a15
        spool patchinfo.txt

        select instance_name, version from v$instance;

        select bug_number from ad_bugs;

        prompt EOS

        select decode(nvl(a.APPLICATION_short_name,'Not Found'),
                      'SQLAP','AP','SQLGL','GL','OFA','FA',
                      'Not Found','id '||to_char(fpi.application_id),
                      a.APPLICATION_short_name) prod,
               fpi.status,
               nvl(fpi.patch_level,'Unknown') patchset
          from fnd_application a, fnd_product_installations fpi
         where fpi.application_id = a.application_id(+)
           and fpi.status != 'N'
           and fpi.patch_level is not null
         order by 2,1;

        spool off;
        exit

  • Issue 57 - DotNetNuke Gallery Module and OWS Skin Objects

    June 2010. Welcome to Issue 57 of DNN Creative Magazine. In this issue we show you how to use the DotNetNuke core Gallery module. The Gallery module allows you to upload files and present them within albums. You can upload images as well as media files such as music and video files. The Gallery module has many features, such as multiple albums, bulk upload, categorization, slideshow, display templates, voting, downloads, watermarks and private galleries. This is a useful module for displaying images and media within your DotNetNuke portal, with options for customizing the display to suit your exact requirements. We walk you step by step through how to install, use and fully configure the DotNetNuke Gallery module. Following this, we continue the Open Web Studio tutorials; this month we demonstrate how to create a Skin Object from an OWS configuration. We show you how to create a menu and a feedback form using OWS, and how to display those OWS applications as Skin Objects within a DotNetNuke skin. To finish, we continue the series of articles on DotNetMushroom Rapid Application Developer (RAD), where we demonstrate some of the new features available in the latest version of DNM RAD, including: creating a new data source, creating a linked table, creating a direct query, and the new colour-coding editor. This issue comes complete with 9 videos:
    - Core Modules: DotNetNuke Gallery Module (7 videos - 57 mins)
    - Module Development Series: How to Create a Skin Object from an OWS Configuration (2 videos - 18 mins)
    - New Features in DNM 01.20.00
    View issue 57 to download all of the videos in one zip file. DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 57 issues we have created 587 videos!

  • Issue while uploading an image to a SharePoint 2010 picture library

    - by Gino Abraham
    I was trying to upload an image to my picture library using the SharePoint client object model. I used the code from the blog below to upload a file to my picture library: http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx The image got uploaded successfully, but when we took the relative URL to use in a different list, we were getting the empty-image symbol. After a lot of analysis we figured out that the issue was with the file we uploaded: an image file of JPEG quality had been uploaded to the application with a .gif extension. Try this: copy a JPG file from the net and save it to your file system. Change the extension of the file from .jpg to .gif. When you change the file extension, the image quality remains the same and it will still open in picture viewers. Upload the file to your picture library. Once uploaded, you will see the file listed as a thumbnail in your picture library. Click on the thumbnail image: it will open a page showing a larger image with file details. Now click either on the image or the file name hyperlink, and it will open an empty page with the default no-image symbol. I wasted a lot of time figuring out this issue, so I thought of sharing it here. Hope this helps someone.

  • Managing products on an ecommerce site [closed]

    - by John
    I've had a site that sells widgets for many years. I do not inventory my widgets, but the cost of adding them to the site and making sure the site is current is becoming cost prohibitive. Here are the facts:
    - I sell a single class of widget.
    - I have about 50,000 widgets on my site.
    - I have about 100 vendors that create and dropship the products when they get an order from me via email.
    - Each vendor carries from 50 to 5,000 types of widgets.
    - Vendors all have websites with images and descriptions of their products.
    - Each widget is produced in limited supply and usually sells out in 1-5 years.
    - Prices of the widgets often go up, sometimes more than 50%, before they sell out.
    - My vendors aren't very tech-sophisticated. They have websites with their products, but most can't supply an API or database dump. Their websites usually display retail prices to the public, but I log in or refer to a price list (usually Excel) for wholesale prices.
    As it stands now, I hire local people to add and describe each widget on our website. It usually takes a person 4 minutes to add a widget to the site, not including moving to a new vendor. I feel like the upload/edit process is as good as it can get via a form/website. The problem is that it is getting very expensive to upload and keep the widget inventory current, and I often get orders for something after it has sold out from the vendor, or the price is wrong. This seems like it would be a problem in many industries. Can anyone suggest the cheapest way to upload inventory and ensure prices are current from my vendors? I'm assuming it will involve outsourcing, but I would like ideas on how to set up the compensation model.

  • An online version of ClearTrace

    - by Bill Graziano
    When I visit clients for the first time and conduct a performance review, I introduce them to ClearTrace. It's still the best way I know to identify exactly which queries are consuming the most resources. The downside is that it needs to be downloaded, and a database created to store the results. I finally decided it would be easier if I could just upload a trace immediately. You can find the online version of ClearTrace at TraceTune.com. It provides a simple way to upload a trace file and see exactly which stored procedures or SQL statements consume the most CPU and disk. This is still a work in progress as I try to determine exactly which features from ClearTrace are important. I've also limited the file upload to 10MB in this beta release. That might not sound like much, but I get over 20,000 events using this stored procedure to generate the trace. If you're looking for something to do on a Friday, I'd suggest a little performance tuning. Generating 10MB of trace data doesn't take long at all, and in a short time you'll see exactly which SQL statements you need to tune first.

  • Best practice: handling images on a website

    - by Steve
    I am porting an old eCommerce site to MVC 3 and would like to take advantage of design improvements. The site currently has product images stored in 3 sizes: thumbnail, medium (for display in a list) and expanded for a zoomed look. Right now we have to upload 3 separate images that are sized exactly right, provide 3 different names that match what the site expects, etc.; it is a pain. I'd like to upload just 1 file, the large one, then let the site reduce it to the needed sizes, and I'd like the flexibility to change the thumbnail and list sizes depending on user preferences, form factor (e.g. mobile, iPad, desktop), etc., so I might need many copies of the same image. My question is: should the image be reduced and then saved several times upon upload, and if so, what is a good storage/naming convention? The other idea is to store just the single image but resize it programmatically before serving it to the client. Has anybody done this, and what are the tradeoffs besides a few more machine cycles? How do you pass a temporary image held in memory to the client (since there is no URL)?

  • Unable to rename file with C# FTP methods when current user directory is different from root

    - by Agata
    Hello everyone. Remark: due to a spam prevention mechanism, I was forced to replace the beginning of the URIs from ftp:// to ftp. I've got the following problem: I have to upload a file with the C# FTP methods and afterwards rename it. Easy, right? :) OK, let's say my FTP host is like this: ftp.contoso.com and after logging in, the current directory is set to users/name. So, what I'm trying to achieve is to log in, upload the file to the current directory as file.ext.tmp, and after the upload is successful, rename the file to file.ext. The whole difficulty is, as I guess, to properly set the request URI for the FtpWebRequest. MSDN states:

    "The URI may be relative or absolute. If the URI is of the form "ftp://contoso.com/%2fpath" (%2f is an escaped '/'), then the URI is absolute, and the current directory is /path. If, however, the URI is of the form "ftp://contoso.com/path", first the .NET Framework logs into the FTP server (using the user name and password set by the Credentials property), then the current directory is set to UserLoginDirectory/path."

    OK, so I upload the file with the following URI: ftp.contoso.com/file.ext.tmp. Great, the file lands where I wanted it to be: in the directory users/name. Now I want to rename the file, so I create a web request with the same URI, ftp.contoso.com/file.ext.tmp, and specify the rename-to parameter as file.ext, and this gives me a 550 error: file not found, no permissions, etc. I traced this in Microsoft Network Monitor and it gave me:

        Command: RNFR, Rename from
        CommandParameter: /file.ext.tmp
        Ftp: Response to Port 53724, '550 File /file.ext.tmp not found'

    as if it was looking for the file in the root directory, not in the current directory. I renamed the file manually using Total Commander, and the only difference was that the CommandParameter was without the first slash:

        CommandParameter: file.ext.tmp

    I'm able to successfully rename the file by supplying the following absolute URI: ftp.contoso.com/%2fusers/%2fname/file.ext.tmp but I don't like this approach, since I would have to know the name of the current user's directory. It could probably be done by using WebRequestMethods.Ftp.PrintWorkingDirectory, but that adds extra complexity (calling this method to retrieve the directory name, then combining the paths to form a proper URI). What I don't understand is why the URI ftp.contoso.com/file.ext.tmp is good for the upload and not for the rename. Am I missing something here? The project is set to .NET 4.0, coded in Visual Studio 2010.

  • Problem: adding feature in MOSS 2007

    - by Anoop
    Hi all. I have added a menu item in the 'Actions' menu of a document library as follows (using Features, as explained in the MSDN article "How to: Add Actions to the User Interface": http://msdn.microsoft.com/en-us/library/ms473643.aspx).

    feature.xml is as follows:

        <?xml version="1.0" encoding="utf-8" ?>
        <Feature Id="51DEF381-F853-4991-B8DC-EBFB485BACDA"
                 Title="Import From REP"
                 Description="This example shows how you can customize various areas inside Windows SharePoint Services."
                 Version="1.0.0.0"
                 Scope="Site"
                 xmlns="http://schemas.microsoft.com/sharepoint/">
            <ElementManifests>
                <ElementManifest Location="ImportFromREP.xml" />
            </ElementManifests>
        </Feature>

    ImportFromREP.xml is as follows:

        <?xml version="1.0" encoding="utf-8"?>
        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
            <!-- Document Library Toolbar Actions Menu Dropdown -->
            <CustomAction Id="ImportFromREP"
                          RegistrationType="List"
                          RegistrationId="101"
                          GroupId="ActionsMenu"
                          Rights="AddListItems"
                          Location="Microsoft.SharePoint.StandardMenu"
                          Sequence="1000"
                          Title="Import From REP">
                <UrlAction Url="/_layouts/ImportFromREP.aspx?ActionsMenu"/>
            </CustomAction>
        </Elements>

    I have successfully installed and activated the feature.

    Normal case: if I log in with a user having at least 'Contribute' permission on the document library, I am able to see the menu item 'Import From REP' (in the 'Actions' menu) in the root folder as well as in all subfolders.

    Problem case: if the user has only view rights on the document library but add/delete rights on a particular subfolder (say 'subfolder') inside the document library, then the menu item 'Import From REP' is not visible at the root folder level, and the 'Upload' menu is not visible either. That is fine, because the user does not have the AddListItems right on the root folder. But 'Import From REP' is also not visible in 'subfolder', while the 'Upload' menu IS visible there, because the user has add/delete rights on that subfolder. Did I specify a wrong value for the Rights attribute (Rights="AddListItems") in the XML file? If so, what should the correct value be? What should I do to simulate the behaviour of the 'Upload' menu (i.e. when the 'Upload' menu is visible, 'Import From REP' is visible, and when the 'Upload' menu is not visible, 'Import From REP' is also not visible)? Thanks in advance, Anoop

  • Sortable_element with RJS not working

    - by jaycode
    I have a list of images whose order users can rearrange. When a user uploads an image, I want the list to still be sortable. I am using an upload similar to the one described here: http://kpumuk.info/ruby-on-rails/in-place-file-upload-with-ruby-on-rails/ Please help. Here is the code for the upload in the view file:

        <% form_for [:admin, @new_image], :html => { :target => 'upload_frame', :multipart => true } do |f| %>
          <%= hidden_field_tag :update, 'product_images' %>
          <%= f.hidden_field :image_owner_id %>
          <%= f.hidden_field :image_owner_type %>
          <%= f.file_field :image_file %><br />
          or get image from this URL:
          <%= f.text_field :image_file_url %>
          <%= f.hidden_field :image_file_temp %><br />
          <%= f.submit "Upload Image" %>
        <% end %>

    And in the controller:

        def create
          @image = Image.new(params[:image])
          logger.debug "params are #{params.inspect}"
          if @image.save
            logger.debug "initiating javascript now"
            responds_to_parent do
              render :update do |page|
                logger.debug "javascript test #{sortable_element("product_images", :url => sort_admin_images_path, :handle => "handle", :constraint => false)}"
                page << "show_notification('Image Uploaded');"
                page.replace_html params[:update], :partial => '/admin/shared/editor/images', :locals => {:object => @image.image_owner, :updated_image => @image}
                page << sortable_element("product_images", :url => sort_admin_images_path, :handle => "handle", :constraint => false)
              end
            end
            # render :partial => '/admin/shared/editor/images', :locals => {:object => @image.image_owner, :updated_image => @image}
          else
            responds_to_parent do
              render :update do |page|
                page << "show_notification('Image Upload Error');"
              end
            end
          end
        end

    Or, to rephrase the question: running this:

        page.replace_html params[:update], :partial => '/admin/shared/editor/images', :locals => {:object => @image.image_owner, :updated_image => @image}
        page << sortable_element("product_images", :url => sort_admin_images_path, :handle => "handle", :constraint => false)

    will NOT add the sortable-list feature. Please help. Thank you.
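
    For reference, sortable_element is just a Rails helper that emits script.aculo.us JavaScript, and the generated Sortable.create call must be re-run after replace_html swaps in new list markup, because the old drag handlers die with the old DOM nodes. Roughly what the helper expands to (a sketch; the '/admin/images/sort' URL is a hypothetical stand-in for sort_admin_images_path):

        Sortable.create('product_images', {
            handle: 'handle',
            constraint: false,
            onUpdate: function (element) {
                // post the new order back to the server
                new Ajax.Request('/admin/images/sort', {
                    asynchronous: true,
                    parameters: Sortable.serialize(element)
                });
            }
        });

    So if the snippet above is still not making the list sortable, it is worth checking (e.g. in the browser console) whether this generated JavaScript actually runs after the partial is replaced, and whether the 'product_images' element exists at that moment.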

  • PHP & MySQL submit error message problem

    - by peakUC
    When I submit a new name and not a new avatar, I get the following avatar error message: "Please upload a .gif, .jpeg, .jpg or .png image!" I want to be able to send a new name only, without having to upload a new avatar each time I submit the form and without getting that avatar error message. Can someone help me fix this problem? Here is the PHP code:

        if (isset($_POST['submitted'])) {
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli, "SELECT users.* FROM users WHERE user_id=3");
            $first_name = mysqli_real_escape_string($mysqli, htmlentities(strip_tags($_POST['first_name'])));
            $user_id = '3';

            if (isset($_FILES["avatar"]["name"]) && $_FILES['avatar']['size'] <= 5242880) {
                if ($_FILES["avatar"]["type"] == "image/gif" ||
                    $_FILES["avatar"]["type"] == "image/jpeg" ||
                    $_FILES["avatar"]["type"] == "image/jpg" ||
                    $_FILES["avatar"]["type"] == "image/png" ||
                    $_FILES["avatar"]["type"] == "image/pjpeg") {
                    if (file_exists("../members/" . $user_id . "/images/" . $_FILES["avatar"]["name"])) {
                        echo '<p class="error">' . mysqli_real_escape_string($mysqli, htmlentities(strip_tags(basename($_FILES["avatar"]["name"])))) . ' already exists! ';
                    } else if ($_FILES["avatar"]["name"] == TRUE) {
                        move_uploaded_file($_FILES["avatar"]["tmp_name"], "../members/" . $user_id . "/images/" . mysqli_real_escape_string($mysqli, htmlentities(strip_tags(basename($_FILES["avatar"]["name"])))));
                        $avatar = mysqli_real_escape_string($mysqli, htmlentities(strip_tags(basename($_FILES["avatar"]["name"]))));
                    }
                } else if ($_FILES["avatar"]["type"] != "image/gif" ||
                           $_FILES["avatar"]["type"] != "image/jpeg" ||
                           $_FILES["avatar"]["type"] != "image/jpg" ||
                           $_FILES["avatar"]["type"] != "image/png" ||
                           $_FILES["avatar"]["type"] != "image/pjpeg") {
                    echo '<p class="error">Please upload a .gif, .jpeg, .jpg or .png image!</p>';
                }
            } else if ($_FILES['avatar']['size'] >= 5242880) {
                echo '<p class="error">Please upload a smaller pic!</p>';
            } else if ($_FILES["avatar"]["name"] == NULL) {
                $avatar = NULL;
            }

            if (isset($_FILES["avatar"]["name"]) && $_FILES['avatar']['size'] <= 5242880) {
                if (mysqli_num_rows($dbc) == 0) {
                    $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                    $dbc = mysqli_query($mysqli, "INSERT INTO users (user_id, first_name, avatar) VALUES ('$user_id', '$first_name', '$avatar')");
                }
                if ($dbc == TRUE) {
                    $dbc = mysqli_query($mysqli, "UPDATE users SET first_name = '$first_name', avatar = '$avatar' WHERE user_id = '$user_id'");
                    echo '<p class="changes-saved">Your changes have been saved!</p>';
                }
                if (!$dbc) {
                    print mysqli_error($mysqli);
                    return;
                }
            }
        }

  • .NET Impersonate not working!

    - by Jagd
    I have a webpage that allows a user to upload a file to a network share. When I run the webpage locally (within VS 2008) and try to upload the file, it works! However, when I deploy the website to the webserver and try to upload the file through the webpage, it doesn't work! The error being returned to me on the webserver says "Access to the path '\\05prd1\emp\test.txt' is denied." So, obviously, this is a permissions issue. The network share is configured to allow full access both to me (NT authentication) and to NETWORK SERVICE (which is .NET's default account and what we have set in our IIS application pool as the default user for this website). I have tried this with and without authentication, and neither one works on the webserver, yet both ways work on my local machine. The code that does the file upload is below. Please note that the code below includes impersonation, but as I said above, I've tried it with and without impersonation and it's made no difference.

        if (fuCourses.PostedFile != null && fuCourses.PostedFile.ContentLength > 0)
        {
            System.Security.Principal.WindowsImpersonationContext impCtx;
            impCtx = ((System.Security.Principal.WindowsIdentity)User.Identity).Impersonate();
            try
            {
                lblMsg.Visible = true;

                // The courses file to be uploaded
                HttpPostedFile file = fuCourses.PostedFile;
                string fName = file.FileName;
                string uploadPath = "\\\\05prd1\\emp\\";

                // Get the file name
                if (fName.Contains("\\"))
                {
                    fName = fName.Substring(fName.LastIndexOf("\\") + 1);
                }

                // Delete the courses file if it is already on \\05prd1\emp
                FileInfo fi = new FileInfo(uploadPath + fName);
                if (fi != null && fi.Exists)
                {
                    fi.Delete();
                }

                // Open a new file stream on \\05prd1\emp and read bytes into
                // it from the file upload
                FileStream fs = File.Create(uploadPath + fName, file.ContentLength);
                using (Stream stream = file.InputStream)
                {
                    byte[] b = new byte[4096];
                    int read;
                    while ((read = stream.Read(b, 0, b.Length)) > 0)
                    {
                        fs.Write(b, 0, read);
                    }
                }
                fs.Close();
                lblMsg.Text = "File Successfully Uploaded";
                lblMsg.ForeColor = System.Drawing.Color.Green;
            }
            catch (Exception ex)
            {
                lblMsg.Text = ex.Message;
                lblMsg.ForeColor = System.Drawing.Color.Red;
            }
            finally
            {
                impCtx.Undo();
            }
        }

    Any help on this would be very appreciated!

  • C#: My callback function gets called twice for every sent request

    - by Madi D.
    I've got a program that uploads/downloads files to an online server and has a callback that reports progress and logs it into a text file. The program is built with the following structure:

        public void Upload(string source, string destination)
        {
            // Object containing source and destination to pass to the threaded function
            KeyValuePair<string, string> file = new KeyValuePair<string, string>(source, destination);

            // Threading to make sure no blocking happens after calling the Upload function
            Thread t = new Thread(new ParameterizedThreadStart(amazonHandler.TUpload));
            t.Start(file);
        }

        private void TUpload(object fileInfo)
        {
            KeyValuePair<string, string> file = (KeyValuePair<string, string>)fileInfo;

            /* Some magic goes here: checking the file and authorizing the upload. */

            var ftiObject = new FtiObject()
            {
                FileNameOnHDD = file.Key,
                DestinationPath = file.Value,
                // Has more data used for calculations.
            };

            // Threading to make sure the progress callback gets called.
            Thread t = new Thread(new ParameterizedThreadStart(amazonHandler.UploadOP));
            t.Start(ftiObject);

            // Signal used to stop progress until UploadCompleted is called.
            uploadChunkDoneSignal.WaitOne();

            /* Some extra code */
        }

        private void UploadOP(object ftiSentObject)
        {
            FtiObject ftiObject = (FtiObject)ftiSentObject;

            /* Some code to create the uri and prepare the ftiObject. */

            // webClient.UploadFileAsync will open a thread that will upload
            // the file and report progress/completion using registered
            // callback functions.
            webClient.UploadFileAsync(uri, "PUT", ftiObject.FileNameOnHDD, ftiObject);
        }

    I have a callback registered to the WebClient's UploadProgressChanged event; however, it is getting called twice for every sent request:

        void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e)
        {
            FtiObject ftiObject = (FtiObject)e.UserState;
            Logger.log(ftiObject.FileNameOnHDD, (double)e.BytesSent, e.TotalBytesToSend);
        }

    Log output:

        Filename: C:\Text1.txt Uploaded: 1024 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded: 1024 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded: 2048 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded: 2048 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded: 3072 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded: 3072 TotalFileSize: 665241
        Etc...

    I am watching the network traffic using a monitor, and only 1 request is being sent. Somehow I can't figure out why the callback is being called twice. My suspicion was that the callback is getting fired by each opened thread (the main Upload and TUpload), but I don't know how to test whether that's the cause. Note: the reason behind the many /* */ comments is to indicate that the functions do more than just opening threads, and threading is being used to make sure no blocking occurs (there are a couple of "Signal.WaitOne()" calls around the code for synchronization).
