Search Results

Search found 2282 results on 92 pages for 'filesystem'.

Page 82 of 92

  • Django Admin: not seeing any app (permission problem?)

    - by Facundo
    I have a site with Django running some custom apps. I was not using the Django ORM, just the views and templates, but now I need to store some info, so I created some models in one app and enabled the Admin. The problem is that when I log in to the Admin it just says "You don't have permission to edit anything"; not even the Auth app shows up on the page. I'm using the same superuser created with syncdb. On the same server I have another site that is using the Admin just fine. I'm running Django 1.1.0 with Apache/2.2.10, mod_python/3.3.1 and Python/2.5.2, with psql (PostgreSQL) 8.1.11, all on Gentoo Linux 2.6.23. Any ideas where I can find a solution? Thanks a lot.

    UPDATE: It works from the development server. I bet this has something to do with some filesystem permission, but I just can't find it.

    UPDATE 2: vhost configuration file:

        <Location />
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE gpx.settings
            PythonDebug On
            PythonPath "['/var/django'] + sys.path"
        </Location>

    UPDATE 3: more info. /var/django/gpx/__init__.py exists and is empty. I run python manage.py from the /var/django/gpx directory. The site is GPX; one of the apps is contable and lives in /var/django/gpx/contable. The apache user is in the webdev group, and all these directories and files belong to that group and have rw permission.

    UPDATE 4: confirmed that the settings file is the same for apache and runserver (renamed it and both broke).

    UPDATE 5: /var/django/gpx/contable/__init__.py exists. This is the relevant part of urls.py:

        urlpatterns = patterns('',
            (r'^admin/', include(admin.site.urls)),
        )
        urlpatterns += patterns('gpx',
            (r'^$', 'menues.views.index'),
            (r'^adm/$', 'menues.views.admIndex'),
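
    A symptom like this (a superuser who can see nothing) often means the two environments are not actually talking to the same database. One way to confirm what each process sees is a throwaway debug view; the sketch below is hypothetical and not from the original post (the DATABASE_* names are the Django 1.1-era settings, and the view would be wired into urls.py temporarily):

        # debug_views.py -- hypothetical helper for comparing mod_python vs runserver
        from django.conf import settings
        from django.http import HttpResponse

        def show_db(request):
            # Report the database this particular process is connecting to
            return HttpResponse("engine=%s name=%s user=%s" % (
                settings.DATABASE_ENGINE, settings.DATABASE_NAME,
                settings.DATABASE_USER))

    Hitting this view under Apache and under runserver and comparing the output would rule the database mismatch in or out.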

    Read the article

  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [an updated list, 21 Aug 09]. Help me compile a list of all the advantages and disadvantages of building an application on the Google App Engine.

    Pros:
    1) No need to buy servers or server space (no maintenance).
    2) Makes solving the problem of scaling much easier.

    Cons:
    1) Locked into Google App Engine?
    2) Developers have read-only access to the filesystem on App Engine.
    3) App Engine can only execute code called from an HTTP request (except for scheduled background tasks).
    4) Users may upload arbitrary Python modules, but only if they are pure Python; C and Pyrex modules are not supported.
    5) App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
    6) Java applications may only use a subset (the JRE Class White List) of the classes from the JRE standard edition.
    7) Java applications cannot create new threads.

    Known issues: http://code.google.com/p/googleappengine/issues/list

    Hard limits:
    - Apps per developer: 10
    - Time per request: 30 sec
    - Files per app: 3,000
    - HTTP response size: 10 MB
    - Datastore item size: 1 MB
    - Application code size: 150 MB

    Pro or con? App Engine's infrastructure removes many of the system administration and development challenges of building applications that scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX-compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing web applications can't run on App Engine without modification, because they require a relational database.

    Read the article

  • How to enable and use HTTP PUT and DELETE with Apache2 and PHP?

    - by Andreas Jansson
    Hi, it should be so simple. I've followed every tutorial and forum I could find, yet I can't get it to work. I simply want to build a RESTful API in PHP on Apache2. In my VirtualHost directive I say:

        <Directory />
            AllowOverride All
            <Limit GET HEAD POST PUT DELETE OPTIONS>
                Order Allow,Deny
                Allow from all
            </Limit>
        </Directory>

    Yet for every PUT request I make to the server, I get "405 Method Not Allowed". Someone advocated using the Script directive, but since I use mod_php, as opposed to CGI, I don't see why that would work. People mention using WebDAV, but to me that seems like overkill. After all, I don't need DAV locking, a DAV filesystem, etc. All I want to do is pass the request on to a PHP script and handle everything myself. I only want to enable PUT and DELETE for the clean semantics. Thanks, Andreas
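
    One way to see whether Apache itself is rejecting the verb, independently of any PHP or browser behaviour, is a bare-bones client. This sketch is not from the original post; it assumes a Python 2 environment and a server on localhost (adjust host, port and path as needed):

        # put_test.py -- minimal sketch for exercising PUT against Apache
        import httplib

        conn = httplib.HTTPConnection("localhost", 80)
        conn.request("PUT", "/api/resource/1", "hello=world",
                     {"Content-Type": "application/x-www-form-urlencoded"})
        resp = conn.getresponse()
        # A 405 here means Apache is still blocking the verb before PHP runs
        print resp.status, resp.reason
        conn.close()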

    Read the article

  • dynamically horizontal scalable key value store

    - by Zubair
    Hi, is there a key-value store that will give me the following:

    - Allows me to simply add and remove nodes and will redistribute the data automatically
    - Allows me to remove nodes and still have 2 extra data nodes to provide redundancy
    - Allows me to store text or images up to 1 GB in size
    - Can store small-size data up to 100 TB of data
    - Fast (so will allow queries to be performed on top of it)
    - Makes all this transparent to the client
    - Works on Ubuntu/FreeBSD or Mac
    - Free or open source

    I basically want something I can use as a "single" system, and not have to worry about having memcached, a db, and several storage components; so yes, I do want a database "silver bullet", you could say. Thanks, Zubair

    Answers so far:

    - MogileFS on top of BackBlaze: as far as I can see this is just a filesystem, and after some research it only seems to be appropriate for large image files
    - Tokyo Tyrant: needs LightCloud. This doesn't auto-scale as you add new nodes. I did look into this, and it seems it is very fast for queries which fit onto a single node, though
    - Riak: this is one I am looking into myself, but I don't have any results yet
    - Amazon S3: is anyone using this as their sole persistence layer in production? From what I have seen it seems to be used for storage of images, as complex queries are too expensive
    - Cassandra (suggested by @shaman): definitely one I am looking into

    So far it seems that there is no database or key-value store that fulfills the criteria I mentioned; not even after offering a bounty of 100 points did the question get answered!

    Read the article

  • Saving data - does work on Simulator, not on Device

    - by Peter Hajas
    I use NSData to persist an image in my application. I save the image like so (in my app delegate):

        - (void)applicationWillTerminate:(UIApplication *)application {
            NSLog(@"Saving data");
            NSData *data = UIImagePNGRepresentation([[viewController.myViewController myImage] image]);

            // Write the file out!
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            NSString *path_to_file = [documentsDirectory stringByAppendingString:@"lastimage.png"];
            [data writeToFile:path_to_file atomically:YES];
            NSLog(@"Data saved.");
        }

    and then I load it back like so (in my view controller, under viewDidLoad):

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *path_to_file = [documentsDirectory stringByAppendingString:@"lastimage.png"];
        if ([[NSFileManager defaultManager] fileExistsAtPath:path_to_file]) {
            NSLog(@"Found saved file, loading in image");
            NSData *data = [NSData dataWithContentsOfFile:path_to_file];
            UIImage *temp = [[UIImage alloc] initWithData:data];
            myViewController.myImage.image = temp;
            NSLog(@"Finished loading in image");
        }

    This code works every time on the simulator, but on the device it can never seem to load the image back in. I'm pretty sure that it saves out, but I have no view of the filesystem. Am I doing something weird? Does the simulator have a different way to access its directories than the device? Thanks!

    Read the article

  • Getting error: "This webpage is not available" for my chrome app's options page

    - by Don Rhummy
    My CRX has the proper HTML page, options.html, in it, and the manifest declares it properly (it shows up as a link on the chrome://extensions page), but when I click that link, Chrome gives the error:

        This webpage is not available
        The webpage at chrome-extension://invalid/ might be temporarily down or it may have moved permanently to a new web address.

    It says "invalid", but the app runs perfectly well (all the content scripts run, and the background page created a database and saved to it). Why would it show as invalid? Why doesn't it have the extension's id? Here's the manifest:

        {
            "manifest_version": 2,
            "name": "MyAPP",
            "description": "My App",
            "version": "0.0.0.32",
            "minimum_chrome_version": "27",
            "offline_enabled": true,
            "options_page": "options.html",
            "icons": {
                "16": "images/icon16.png",
                "48": "images/icon48.png",
                "128": "images/icon128.png"
            },
            "app": {
                "background": {
                    "scripts": [ "scripts/background.js" ]
                }
            },
            "permissions": [
                "unlimitedStorage",
                "fullscreen",
                { "fileSystem": [ "write" ] },
                "background",
                "<all_urls>",
                "tabs"
            ]
        }

    Does it need to be declared in "web_accessible_resources"? Any idea what's wrong?

    UPDATE: Adding it to "web_accessible_resources" does not fix the issue. I added everything on that page too.

    UPDATE 2: It looks like it might be a Chrome bug for packaged apps. When I remove the "app" section in the manifest, it works! This is a bug, since the Chrome app documentation states that apps can have options pages: https://developer.chrome.com/apps/options.html

    Read the article

  • WinForms GDI+ Polygon Events and custom shaped panels

    - by JD
    I have converted postcode boundary polygons to point data (a point[] for each polygon) from GIS shapefiles. I want to show this in a C# Windows Forms application. I have managed to show it using the System.Drawing (GDI+) DrawPolygon() method:

        Graphics g = this.CreateGraphics();
        Pen pen = new Pen(Color.Black);
        Brush brush = new SolidBrush(Color.FromArgb(255, 255, 0));
        PointF[] ptr = { /* point data here */ };
        g.FillPolygon(brush, ptr);
        g.DrawPolygon(pen, ptr);

    Is it possible to add events to a drawn polygon? If so, how do I do this for individual polygons? For example, clicking on a postcode polygon should show a message box with information about that postcode.

    Secondly, would it be easier to make a custom control inheriting from the WinForms Panel? Is there a way to shape the border of a WinForms panel control using a set of points? Postcode objects are serialised and stored in the filesystem.
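
    Shapes drawn with GDI+ are just pixels, so clicks have to be resolved by hit-testing the mouse coordinates against each polygon's points (in C#, GraphicsPath.IsVisible can do this). To illustrate the underlying technique, here is the standard ray-casting point-in-polygon test as a Python sketch; it is not from the original post, and the postcode data is made up:

        # hit_test.py -- ray-casting point-in-polygon, a minimal sketch
        def point_in_polygon(x, y, pts):
            # Count how many polygon edges a horizontal ray from (x, y) crosses;
            # an odd count means the point is inside.
            inside = False
            j = len(pts) - 1
            for i in range(len(pts)):
                xi, yi = pts[i]
                xj, yj = pts[j]
                if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                    inside = not inside
                j = i
            return inside

        # Hypothetical usage: find which postcode polygon was clicked
        polygons = {"SW1": [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]}
        clicked = [code for code, pts in polygons.items()
                   if point_in_polygon(5.0, 5.0, pts)]
        print clicked  # ['SW1']

    Running this test inside the form's MouseClick handler, over each stored polygon, gives per-polygon "events" without needing a control per polygon.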

    Read the article

  • Using a version control system as a data backend

    - by JacobM
    I'm involved in a project that, among other things, involves storing edits and changes to a large hierarchical document (HTML-formatted text). We want to include versioning of textual changes and of structural changes. Currently we're maintaining the tree of document sections in a relational database, but as we start working on how to manage versioning of structural changes, it's clear that we're in danger of having to write a lot of the functionality that a version control system provides. We don't want to reinvent the wheel.

    Is it possible that we could use an existing version control system as the data store, at least for the document itself? Presumably we could do so by writing out new versions to the filesystem and keeping that directory under version control (programmatically doing commits and so forth), but it would be better if we could interact with the repository directly from code.

    The VCS we are most familiar with is Subversion, but I'm not thrilled with how Subversion represents changes to the directory structure; it would be nice if we could see that a particular revision included moving a section from Chapter 2 to Chapter 6, rather than just seeing a new version of the tree. This sounds more like the way a system like Mercurial handles changes to the structure.

    Any advice? Do VCSs have public APIs and so forth? The project is in Java (with Spring), if it matters.
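
    On the "programmatically doing commits" route, even without a native library binding, driving the command-line client from code is workable. The following sketch is illustrative only and not from the original question: it uses Python's subprocess to commit each new document version to a Mercurial working copy (a Java project would do the equivalent with ProcessBuilder, or use a binding such as SVNKit for Subversion):

        # vcs_store.py -- minimal sketch of committing document versions to a repo
        import os
        import subprocess

        def save_version(repo_dir, relpath, content, message):
            # Write one document section into the working copy, then commit it
            path = os.path.join(repo_dir, relpath)
            f = open(path, "w")
            f.write(content)
            f.close()
            subprocess.check_call(["hg", "add", relpath], cwd=repo_dir)
            subprocess.check_call(["hg", "commit", "-m", message], cwd=repo_dir)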

    Read the article

  • Overwriting lines in file in C

    - by KáGé
    Hi, I'm doing a project on filesystems for a university operating systems course. My C program should simulate a simple filesystem in a human-readable file, so the file is based on lines; a line is a "sector". I've learned that lines must all be of the same length to be overwritten in place, so I pad them with ASCII zeroes to the end of the line and leave a certain number of lines of ASCII zeroes that can be filled later.

    Now I'm making a test program to see if it works like I want it to, but it doesn't. The critical part of my code:

        file = fopen("irasproba_tesztfajl.txt", "r+"); /* previously loaded with 10 copies of the line I'll print later, in reverse order */

        /* this finds the 3rd line */
        int count = 0; /* how far have we gone yet? */
        char c;
        while (count != 2) {
            if ((c = fgetc(file)) == '\n')
                count++;
        }

        fflush(file);
        fprintf(file, "- . , M N B V C X Y Í U Á É L K J H G F D S A Ú O P O I U Z T R E W Q Ó Ü Ö 9 8 7 6 5 4 3 2 1 0\n");
        fflush(file);
        fclose(file);

    Now it does nothing; the file stays the same. What could be the problem? Thank you.

    Read the article

  • Should I learn VB.NET or C#?

    - by Ravi
    Background: I have decided to do my graduation project (yet to start) in .NET. Regarding it, I am a bit confused about which language I should learn: VB.NET or C#? From those who know them, I have learnt that both VB.NET and C# offer the same concepts; VB.NET is simpler in that it reads more like English statements, but C# is simple too if you already know C (which I do).

    Question: So, considering factors such as career prospects, newness, challenge, and benefit, which language should I choose? Please help me out, and do justify your answer (whatever reason you have).

    References (extra): a little information about the project I am doing. It is a database file system. The technologies I'll be using are SQL Server, WPF, etc. I just love the concept of a database file system. For those who want to know more about database file systems, here are the links:

    - DBFS (this one is really good; it serves as my primary reference)
    - Towards A Single Folder Filesystem
    - stackoverflow - What is a database file system?

    UPDATE 1: After some really well-explained answers (actually, all are good in their place), I have finally decided to go with C#. Thank you all. Still, you are requested to share your opinion (once the question is reopened, of course).

    UPDATE 2: Question reopened and made community wiki. Thank you all.

    Read the article

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities: for example, a Page class which has a property called Children of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs:

        http://mysite.com/Page1/
        http://mysite.com/Page1/SubPage/
        http://mysite.com/Page/ChildPage/GrandChildPage/

    You get the picture. So, I'd like every single Page object to have its own URL, equal to its parent's URL plus its own name. In addition to that, I also would like the ability to map a single Page to the / (root) URL. I would like to apply these rules:

    1) If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen
    2) If a URL can be handled by the virtual path provider, let that handle it
    3) If there is no other match, map the URL to the PageController class

    I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation about my first two points. I see the following possible solutions:

    - Map a route for each page individually. This requires me to go over the entire tree when the application starts, adding an exact-match route to the end of the route table.
    - Add a route with {*path} and write a custom IRouteHandler that handles it, but then I can't see how I could deal with the first two rules, since this handler would get to handle everything.

    So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!

    Read the article

  • Simulating O_NOFOLLOW (2): Is this other approach safe?

    - by Daniel Trebbien
    As a follow-up question to this one, I thought of another approach, which builds off of @caf's answer, for the case where I want to append to the file name and create it if it does not exist. Here is what I came up with:

    1) Create a temporary directory with mode 0700 in a system temporary directory on the same filesystem as name.
    2) Create an empty, temporary, regular file (temp_name) in the temporary directory (it only serves as a placeholder).
    3) Open name for reading only, just to create it if it does not exist. The OS may follow name if it is a symbolic link; I don't care at this point.
    4) Make a hard link to name at temp_name (overwriting the placeholder file). If the link call fails, then exit. (Maybe someone has come along and removed the file at name, who knows?)
    5) Use lstat on temp_name (now a hard link). If S_ISLNK(lst.st_mode), then exit.
    6) Open temp_name for writing, append (O_WRONLY | O_APPEND).
    7) Write everything out. Close the file descriptor.
    8) unlink the hard link.
    9) Remove the temporary directory.

    (All of this, by the way, is for an open source project that I am working on. You can view the source of my implementation of this approach here.)

    Is this procedure safe against symbolic link attacks? For example, is it possible for a malicious process to ensure that the inode for name represents a regular file for the duration of the lstat check, then make the inode a symbolic link with the temp_name hard link now pointing to the new symbolic link? I am assuming that a malicious process cannot affect temp_name.
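
    The steps above map almost one-to-one onto calls Python exposes in its os module, so here is a hedged sketch of the same procedure for illustration; it is not the project's actual implementation. Error handling is minimal, the temp directory is placed next to the target as an approximation of "same filesystem", and the placeholder file of step 2 is elided because link(2) refuses to overwrite an existing target anyway:

        # safe_append.py -- sketch of the hard-link-then-lstat append procedure
        import os
        import stat
        import tempfile

        def append_no_follow(name, data):
            # Step 1: private temp directory (mkdtemp creates it with mode 0700)
            tmp_dir = tempfile.mkdtemp(dir=os.path.dirname(name) or ".")
            temp_name = os.path.join(tmp_dir, "placeholder")
            try:
                # Step 3: create name if it does not exist (read-only open)
                os.close(os.open(name, os.O_RDONLY | os.O_CREAT, 0600))
                # Step 4: hard link; fails if name has been removed meanwhile
                os.link(name, temp_name)
                # Step 5: reject symbolic links
                if stat.S_ISLNK(os.lstat(temp_name).st_mode):
                    raise IOError("%s: symbolic link detected" % name)
                # Steps 6-7: append through the hard link
                fd = os.open(temp_name, os.O_WRONLY | os.O_APPEND)
                try:
                    os.write(fd, data)
                finally:
                    os.close(fd)
            finally:
                # Steps 8-9: remove the link and the temp directory
                if os.path.exists(temp_name):
                    os.unlink(temp_name)
                os.rmdir(tmp_dir)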

    Read the article

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64-bit) cluster pair running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first four and X: used as a backup destination.

    Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents (nothing), and this is where "whatever.ndf" should have been.

    Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong. Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep.

    The questions: (1) Why did the passive node think the whatever.ndf files were supposed to go to drive K:, when this drive didn't exist as a resource on the active node? (2) How can I get the cluster nodes re-synced so that failover can be accomplished? I don't know that there wasn't a K: drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

    Read the article

  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using Django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result != None:
                    return result

                result = ShortReview.objects.all().order_by("-created_at")

                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows objects of at most 1 MB. This one was bigger than 1 MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    Django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that."

    What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the Django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google's advice? A random number?
    - Or maybe there is a way to allow memcached to store bigger files?
    - Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All of those seem very low-level, and I'm wondering if an obvious solution exists...
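
    On the first option, the pagination threshold may not require hacking the framework at all: Django's Sitemap class takes its page size from a limit class attribute (documented in later Django releases, so verify it exists in the version in use). A hedged sketch:

        # sitemaps.py -- sketch: cap each sitemap page at 10,000 URLs instead of 50,000
        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7
            limit = 10000  # assumes this Django version honours Sitemap.limit

            def items(self):
                return ShortReview.objects.all().order_by("-created_at")

            def lastmod(self, obj):
                return obj.updated_at

    The rendered XML for each page could then be cached individually, or written out as static files as suggested above, since each page now stays well under the 1 MB cap.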

    Read the article

  • Simple way to embed an MP3 audio file in a (static) HTML file, starting to play at a specifc time?

    - by Marcel
    Hi all, I want to produce a simple, static HTML file that has one or more embedded MP3 files in it. The idea is to have a simple means of listening to specific parts of an MP3 file. On a single click on a visual element, the sound should start to play; however, not from the beginning of the file, but from a specified starting point in that file (and play to the end of the file). This should all work locally from the client's local filesystem; the HTML file and the MP3 files do not reside on a webserver.

    So, how can I play MP3 audio from a specific starting point? The solution I am looking for should be as simple as possible while working on most browsers, including IE, Firefox and Safari.

    Note: I know the <embed> tag as described here, but this seems not to work with my target browsers. Also, I have read about jPlayer and other JavaScript-based players, but since I have never written any JavaScript, I would prefer an HTML-only solution, if possible.

    Read the article

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now.

    Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub.

    What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? A cron job? A Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)
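
    On the cron-job idea, a sync script only needs to commit inside each module, then record the new submodule states in the umbrella repo and push. The sketch below is hypothetical (the paths, module list and commit messages are made up) and drives git from Python's subprocess; an equivalent shell script would work just as well:

        # sync_repos.py -- sketch: commit each module, then push the umbrella repo
        import subprocess

        UMBRELLA = "/home/me/code/umbrella"
        MODULES = [UMBRELLA + "/module-a",
                   UMBRELLA + "/module-b"]  # ...one path per submodule

        def run(args, cwd):
            # subprocess.call, not check_call: "nothing to commit" exits non-zero
            subprocess.call(args, cwd=cwd)

        for mod in MODULES:
            run(["git", "add", "-A"], cwd=mod)
            run(["git", "commit", "-m", "autocommit"], cwd=mod)

        # Record the new submodule revisions in the umbrella repo and push
        run(["git", "add", "-A"], cwd=UMBRELLA)
        run(["git", "commit", "-m", "sync submodules"], cwd=UMBRELLA)
        run(["git", "push", "origin", "master"], cwd=UMBRELLA)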

    Read the article

  • navigateToURL with GET parameters in local SWF

    - by Michael Brewer-Davis
    I'm running a Flex application locally (local-with-filesystem or local-trusted), and I'm trying to call navigateToURL to open a local page with GET parameters. Flash Player seems to be ignoring the parameters when opening the local page, though. I've been scouring the Flash security pages to find a documented prohibition against this, but haven't found anything. Pointers? How would you work around this issue?

    My Flex app:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <mx:Script>
                <![CDATA[
                    private function onClick(event:MouseEvent):void {
                        var request:URLRequest = new URLRequest("target.html");
                        request.data = new URLVariables();
                        request.data.text = "stackoverflow.com";
                        navigateToURL(request);
                    }
                ]]>
            </mx:Script>
            <mx:Button label="Go" click="onClick(event)" />
        </mx:Application>

    And my target.html:

        <html>
          <head>
            <script language="JavaScript">
            <!--
            function showURL() {
                alert(window.location.href);
            }
            //-->
            </script>
          </head>
          <body onload="showURL()" />
        </html>

    Read the article

  • check whether mmap'ed address is correct

    - by reddot
    I'm writing a high-load daemon that should run on FreeBSD 8.0 and on Linux as well. The main purpose of the daemon is to serve files that are requested by their identifier. The identifier is converted into a local filename/file size via a request to a db, and then I use sequential mmap() calls to pass file blocks with send().

    However, sometimes there is a mismatch between the file size in the db and the file size on the filesystem (the real size is less than the size in the db). In this situation I've sent all the real data blocks, and when the next data block is mapped, mmap returns no error, just a usual address (I've also checked the errno variable; it's equal to zero after mmap). But when the daemon tries to send this block it gets a segmentation fault. (This behaviour is reliably reproduced on FreeBSD 8.0 amd64.)

    I was using a safety check before open to ensure the size with a stat() call. However, real life shows me that a segfault can still be raised in rare situations. So, my question is: is there a way to check whether a pointer is accessible before dereferencing it? When I opened the core in gdb, gdb says that the given address is out of bounds. Perhaps there is another solution somebody can propose.

    Read the article

  • How to Capture a live stream from Windows Media Server 2008

    - by Hummad Hassan
    I want to capture a live stream from Windows Media Server to the filesystem on my PC. I have tried with my own media server with the following code, but when I checked the output file I found this in it. My code:

        FileStream fs = null;
        try
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test");
            CookieContainer ci = new CookieContainer(1000);
            req.Timeout = 60000;
            req.Method = "Get";
            req.KeepAlive = true;
            req.MaximumAutomaticRedirections = 99;
            req.UseDefaultCredentials = true;
            req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3";
            req.ReadWriteTimeout = 90000000;
            req.CookieContainer = ci;
            //req.MediaType = "video/x-ms-asf";
            req.AllowWriteStreamBuffering = true;
            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Stream resps = resp.GetResponseStream();
            fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite);
            byte[] buffer = new byte[1024];
            int bytesRead = 0;
            while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0)
            {
                fs.Write(buffer, 0, bytesRead);
            }
        }
        catch (Exception ex)
        {
        }
        finally
        {
            if (fs != null)
                fs.Close();
        }

    Read the article

  • vs10 not deploying all required files - then not over-writing updated files

    - by justSteve
    I'm in the habit of deploying to alternating folders (/inetpub/wwwroot/mySite and /inetpub/wwwroot/mySite2), so if something unexpected happens with the deploy I can quickly swap back to a previous version just by changing the path in IIS.

    So I was deploying an MVC2 web app to an empty folder, figuring that VS would send up all the files it needs. Not even close. Initially, it didn't even upload a couple of required NHibernate dlls. Later, after manually copying the files referenced in the thrown exceptions, I just copied all the files from the previous compile and then re-published over the top, expecting VS to overwrite the changed files. It failed at that too. No errors reported by VS; it just failed to overwrite a number of pre-existing (but changed/updated) files. It's hard to believe these kinds of errors (and the lack of feedback that errors were encountered) in a state-of-the-art tool like VS. Clearly, I'm doing something wrong.

    I'm using VisualSVN for source control and connect to my colocated server via a VPN-based mapped network drive (so I can use the File System publish method), both of which can complicate file properties. VS08 had more choices for which files it would send up: I found I needed 'All files in source' on an initial deployment, then 'Replace Matching' afterwards. If I chose 'delete all existing...' I'd be back to square one and would have to deploy with 'All files in source project folder' again. But VS10 doesn't have 'All files in source project folder'. I ended up manually copying the files, which seems not right in the extreme.

    Are these known issues others have to deal with? What's best practice for deploying a web app? Thanks.

    Read the article

  • How to store a file on a server(web container) through a Java EE web application?

    - by Yatendra Goel
    I have developed a Java EE web application. This application allows a user to upload a file with the help of a browser. Once the user has uploaded his file, the application first stores the uploaded file on the server (on which it is running) and then processes it. At present, I am storing the file on the server as follows:

        try {
            FormFile formFile = programForm.getTheFile(); // formFile represents the uploaded file
            String path = getServlet().getServletContext().getRealPath("") + "/" + formFile.getFileName();
            System.out.println(path);
            file = new File(path);
            outputStream = new FileOutputStream(file);
            outputStream.write(formFile.getFileData());
        }

    Now, the problem is that it runs fine on some servers, but on others getServlet().getServletContext().getRealPath("") returns null, so the final path that I get is null/filename and the file isn't stored on the server.

    When I checked the API for the ServletContext.getRealPath() method, I found the following:

        public java.lang.String getRealPath(java.lang.String path)

    "Returns a String containing the real path for a given virtual path. For example, the path "/index.html" returns the absolute file path on the server's filesystem that would be served by a request for "http://host/contextPath/index.html", where contextPath is the context path of this ServletContext. The real path returned will be in a form appropriate to the computer and operating system on which the servlet container is running, including the proper path separators. This method returns null if the servlet container cannot translate the virtual path to a real path for any reason (such as when the content is being made available from a .war archive)."

    So, is there any other way by which I can store files on those servers that return null for getServlet().getServletContext().getRealPath("")?

    Read the article

  • How to cache pages using background jobs ?

    - by Alexandre
    Definitions: resource = a collection of database records; regeneration = processing these records and outputting the corresponding HTML.

    Current flow:

    1) Receive client request
    2) Check for resource in cache
    3) If not in cache, or cache expired, regenerate
    4) Return result

    The problem is that the regeneration step can tie up a single server process for 10-15 seconds. If a couple of users request the same resource, that could result in a couple of processes regenerating the exact same resource simultaneously, each taking up 10-15 seconds.

    Wouldn't it be preferable to have the frontend signal some background process, saying "Hey, regenerate this resource for me"? But then what would it display to the user? "Rebuilding" is not acceptable. All resources would have to be in cache ahead of time. This could be a problem, as the database would almost be duplicated on the filesystem (it's too big to fit in memory). Is there a way to avoid this? Not ideal, but it seems like the only way out.

    But then there's one more problem: how to keep two processes from requesting the regeneration of the same resource at the same time? The background process could be regenerating the resource when a frontend asks for the regeneration of the same resource.

    I'm using PHP and the Zend Framework, just in case someone wants to offer a platform-specific solution. Not that it matters, though; I think this problem applies to any language/framework. Thanks!
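
    The "two processes regenerate at once" part is commonly handled with a short-lived mutex in the cache itself, because memcached's add operation is atomic and succeeds for only one caller. The sketch below is a hypothetical illustration in Python (in the post's PHP/Zend stack, Memcached::add would play the same role); the key names and timings are made up:

        # regen_lock.py -- sketch of a cache-based lock around regeneration
        import memcache

        mc = memcache.Client(["127.0.0.1:11211"])

        def regenerate_once(resource_id, regenerate, ttl=3600):
            lock_key = "lock:%s" % resource_id
            # add() only succeeds if the key doesn't exist: one winner per resource
            if mc.add(lock_key, 1, time=30):
                try:
                    html = regenerate(resource_id)  # the 10-15 second step
                    mc.set("page:%s" % resource_id, html, time=ttl)
                finally:
                    mc.delete(lock_key)
                return html
            # Someone else is already regenerating; serve the stale copy meanwhile
            return mc.get("page:%s" % resource_id)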

    Read the article

  • Referencing object's identity before submitting changes in LINQ

    - by Axarydax
    Hi, is there a way of knowing the ID (identity column) of a record inserted via InsertOnSubmit beforehand, i.e., before calling the data context's SubmitChanges?

    Imagine I'm populating some kind of hierarchy in the database, but I wouldn't want to submit changes on each recursive call for each child node (e.g., if I had a Directories table and a Files table and were recreating my filesystem structure in the database). I'd like to do it this way: create a Directory object, set its name and attributes, InsertOnSubmit it into the DataContext.Directories collection, then reference Directory.ID in its child Files.

    Currently I need to call SubmitChanges to insert the 'directory' into the database before the database mapping fills its ID column. But this creates a lot of transactions and accesses to the database, and I imagine that if I did the inserting in a batch, the performance would be better.

    What I'd like to do is somehow use Directory.ID before committing changes: create all my File and Directory objects in advance and then do one big submit that puts all the stuff into the database. I'm also open to solving this problem via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.

    Read the article

  • How to Capture a live stream from Windows Media Server 2008 using c#.net

    - by Hummad Hassan
    I want to capture a live stream from Windows Media Server to the filesystem on my PC. I have tried with my own media server with the following code, but when I checked the output file I found this in it. Please help me with this. Thanks.

        [Reference]
        Ref1=http://mywindowsmediaserver/test?MSWMExt=.asf
        Ref2=http://mywindowsmediaserver/test?MSWMExt=.asf

    The code:

        FileStream fs = null;
        try
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test");
            CookieContainer ci = new CookieContainer(1000);
            req.Timeout = 60000;
            req.Method = "Get";
            req.KeepAlive = true;
            req.MaximumAutomaticRedirections = 99;
            req.UseDefaultCredentials = true;
            req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3";
            req.ReadWriteTimeout = 90000000;
            req.CookieContainer = ci;
            //req.MediaType = "video/x-ms-asf";
            req.AllowWriteStreamBuffering = true;
            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Stream resps = resp.GetResponseStream();
            fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite);
            byte[] buffer = new byte[1024];
            int bytesRead = 0;
            while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0)
            {
                fs.Write(buffer, 0, bytesRead);
            }
        }
        catch (Exception ex)
        {
        }
        finally
        {
            if (fs != null)
                fs.Close();
        }

    Read the article

  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere around 256^32 subdirectories, with 256 files at each end, where each file is only a few bytes long. I did this so I would have fast access to these files (since I'm not searching, I'm just directly accessing them via a known file path).

    I have a Python script that builds this filesystem and reads and writes those files. The problem is that when I reach more than 1 GB of total file size, the read and write methods become extremely slow.

    Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        import shutil

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing linux console commands, but that was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than directly accessing the file. I think the cause of the slowdown is that there are tons of files. Performing this function 1000 times sometimes takes a minute now, whereas before (when there were only a few files) 1000 calls took less than a second. How do you suggest I fix this?
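
    For what it's worth, the copy-to-/tmp round trip adds two extra whole-file copies per update; a direct read-modify-write touches only the target path. This is a hedged sketch of that variant alone, and it does not address the deeper cost of the huge directory fan-out:

        # add_in_file.py -- the same update without the /tmp round trip (sketch)
        import os

        def add_in_file(path, score_to_add):
            num = score_to_add
            if os.path.exists(path):
                fp = open(path, 'r')
                try:
                    text = fp.read().strip()
                    if text:
                        num += int(text)  # add the stored integer, if any
                finally:
                    fp.close()
            fp = open(path, 'w')
            try:
                fp.write(str(num))
            finally:
                fp.close()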

    Read the article
