Search Results

Search found 77950 results on 3118 pages for 'large file upload'.

Page 146 of 3118

  • Most reliable method for uploading files in PHP w/ progress bar

    - by vanneto
    Hello everyone, I am interested in finding the most reliable method for uploading files in PHP. I need a progress bar with the upload. I have tried SWFUpload, but it randomly issues an I/O Error: even when the same file is uploaded, sometimes there is an error and sometimes there is not. I have configured all the necessary INI/MySQL/Apache directives to accept large file uploads. So I am looking for alternatives, as a Flash-based solution has not worked. Would Java be more reliable? I have also looked into PHP with APC. I definitely cannot afford these random errors, so any help on reliable software or suggestions on how to minimize them would be appreciated. Thank you.
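    Since the asker raises Java as an option: below is a minimal sketch of server-side progress tracking using the Servlet API and Apache Commons FileUpload (its ProgressListener hook is real; the session attribute name, the /tmp destination, and the implied polling endpoint are assumptions for illustration, not anything from the post).

        import java.io.IOException;
        import java.util.List;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.commons.fileupload.FileItem;
        import org.apache.commons.fileupload.disk.DiskFileItemFactory;
        import org.apache.commons.fileupload.servlet.ServletFileUpload;

        public class UploadServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
                // Publish progress into the session; a separate AJAX endpoint can poll this
                // attribute to drive the progress bar in the browser.
                upload.setProgressListener((bytesRead, contentLength, itemIndex) ->
                        req.getSession().setAttribute("uploadProgress",
                                contentLength > 0 ? (int) (100 * bytesRead / contentLength) : -1));
                try {
                    List<FileItem> items = upload.parseRequest(req);
                    for (FileItem item : items) {
                        if (!item.isFormField()) {
                            // Store the uploaded file; destination path is illustrative only.
                            item.write(new java.io.File("/tmp/" + item.getName()));
                        }
                    }
                } catch (Exception e) {
                    throw new ServletException("Upload failed", e);
                }
            }
        }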

    Read the article

  • Paperclip gives a "can't dump File" error when uploading video in Rails

    - by user3510728
    When I try to upload a video using Paperclip, I get the error message "can't dump File". The Video model: class Video < ActiveRecord::Base has_attached_file :avatar, :storage => :s3, :styles => { :mp4 => { :geometry => "640x480", :format => 'mp4' }, :thumb => { :geometry => "300x300>", :format => 'jpg', :time => 5 } }, :processors => [:ffmpeg] validates_attachment_presence :avatar validates_attachment_content_type :avatar, :content_type => /video/, :message => "Video not supported" end When I try to create a video, I get this error.

    Read the article

  • Show a waiting user experience during file upload with Rails and jQuery

    - by poseid
    I am trying to display a waiting spinner while uploading a file. I am able to show the spinner, and to do the upload, when doing each individually. My problem is how to combine the two. The jQuery JavaScript looks like: <% javascript_tag do %> function showLoading() { $("#loading").show(); } function hideLoading() { $("#loading").hide(); } function submitCallback() { showLoading(); $.post("create"); } <% end %> My form looks like: <% semantic_form_for @face, :html => {:multipart => true} do |f| %> <%= f.error_messages %> <%= render 'fields', :f => f %> <p> <%= button_to_function 'create', "submitCallback()" %> </p> <% end %>

    Read the article

  • [Perl] Append to a text file inside a Zip

    - by aleroot
    I have a text file inside a zip file (file.txt inside file.zip) and I need to append another text file (a file.txt outside the zip) to it. How can I do this? Is there a solution? I've tried adding the Append => 1 parameter to IO::Compress::Zip, but the file inside the zip gets overwritten: use IO::Compress::Zip qw(zip $ZipError) ; $filenameToZip = 'file.txt'; zip $filenameToZip => "file.zip",Append => 1 or die "zip failed: $ZipError\n"; Do I need to decompress the zip, append/merge the two text files and compress the file again? Or is there a better solution?
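    The question is Perl-specific, but the underlying point is the same in any language: a compressed entry generally has to be read, merged, and rewritten rather than appended in place. Purely for illustration, here is a sketch of that approach in Java using the zip file-system provider (Java 7+); the file names mirror the ones in the question.

        import java.io.IOException;
        import java.nio.file.FileSystem;
        import java.nio.file.FileSystems;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class AppendIntoZip {
            public static void main(String[] args) throws IOException {
                Path zip = Paths.get("file.zip");
                Path extra = Paths.get("file.txt");   // the text file outside the archive
                // Open the existing archive as a writable file system (zipfs provider).
                try (FileSystem zipFs = FileSystems.newFileSystem(zip, (ClassLoader) null)) {
                    Path inside = zipFs.getPath("file.txt");   // the entry to extend
                    byte[] current = Files.exists(inside) ? Files.readAllBytes(inside) : new byte[0];
                    byte[] addition = Files.readAllBytes(extra);
                    byte[] merged = new byte[current.length + addition.length];
                    System.arraycopy(current, 0, merged, 0, current.length);
                    System.arraycopy(addition, 0, merged, current.length, addition.length);
                    // Zip entries cannot be appended to in place, so the merged content
                    // replaces the old entry when the file system is closed.
                    Files.write(inside, merged);
                }
            }
        }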

    Read the article

  • Client to server data upload

    - by RickBowden
    I'm trying to design a system similar to traditional server monitoring systems like MOM, Tivoli, and OpenView, where an agent will record data and then upload it to a central database once a day, but then also be able to send immediate alerts back to the server. I'm not sure what the best methodology might be for this. I've started looking at Microsoft Sync Services but I'm not sure if it will fit my needs. I'm using VS2008 and C#. Does anyone have any experience or ideas about how I should go about this task?

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. Directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones. We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages: ext3_dx_add_entry: Directory index full! The disk in question has plenty of inodes remaining: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda3 60719104 3465660 57253444 6% / So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at--I want to tune it down; this enormous directory caused all sorts of issues. Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here: 1) rm -rf (dir): I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact. 2) unlink(2) on the directory: definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh. 3) while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done This is actually the shortened version; the real one I'm running, which just adds some progress-reporting and a clean stop when we run out of files to delete, is: export i=0; time ( while [ true ]; do ls -Uf | head -n 3 | grep -qF '.png' || break; ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; export i=$(($i+10000)); echo "$i..."; done ) This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so. Now, for the questions: As mentioned above, is the per-directory entry limit tunable? Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by "ls -U", and it took perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it's now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed? Is there a better way to do this sort of thing? Not store millions of files in a directory; I know that's silly, and it wouldn't have happened on my watch.
Googling the problem and looking through SF and SO offers a lot of variations on "find" that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking. Thanks for reading the small novel; feel free to ask questions and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that. Final script output!: 2970000... 2980000... 2990000... 3000000... 3010000... real 253m59.331s user 0m6.061s sys 5m4.019s So, three million files deleted in a bit over four hours.
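    For comparison with the shell loop in option 3 above, here is a minimal Java sketch of the same idea (not from the original post): java.nio.file's DirectoryStream reads entries lazily, so the millions of names are never sorted or held in memory at once, and each entry is unlinked as it is read.

        import java.io.IOException;
        import java.nio.file.DirectoryStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class PurgeDirectory {
            public static void main(String[] args) throws IOException {
                Path dir = Paths.get(args[0]);   // the directory to empty
                long deleted = 0;
                try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
                    for (Path entry : entries) {
                        // Assumes no subdirectories, as in the question above.
                        Files.deleteIfExists(entry);
                        if (++deleted % 10_000 == 0) {
                            System.out.println(deleted + "...");   // progress, like the shell version
                        }
                    }
                }
                System.out.println("Deleted " + deleted + " entries.");
            }
        }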

    Read the article

  • How do uploads to the web work on local networks

    - by Saif Bechan
    Let's say I have two computers hooked up as a home network. They both use the same router, and the router is hooked up to the net. Now let's say I am working on computer A, and I can access files on computer B. Computer B has a drive that is mounted on computer A as a network drive. Now I want to upload a file to a website. On computer A I open a browser and go to the website. On the website I select 'upload file'; in the file browser I go to the network drive and select a file on computer B to upload. What happens in this case? Is the file uploaded directly from computer B to the website, or is the file first transferred to computer A and then to the website?

    Read the article

  • How to save a large nhibernate collection without causing OutOfMemoryException

    - by Michael Hedgpeth
    How do I save a large collection with NHibernate which has elements that surpass the amount of memory allowed for the process? I am trying to save a Video object with nhibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after nhibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would try to break up the save and flush the session after every 500 or so records, but in this case, I need to save the collection because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it was a part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video? For your reference, here is the code from the simple sample I created: public class Video { public long Id { get; set; } public string Name { get; set; } public Video() { Screenshots = new ArrayList(); } public IList Screenshots { get; set; } } public class Screenshot { public long Id { get; set; } public byte[] Data { get; set; } } And mappings: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="SavingScreenshotsTrial" namespace="SavingScreenshotsTrial" default-access="property"> <class name="Screenshot" lazy="false"> <id name="Id" type="Int64"> <generator class="hilo"/> </id> <property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" /> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="SavingScreenshotsTrial" namespace="SavingScreenshotsTrial" > <class name="Video" lazy="false" table="Video" discriminator-value="0" abstract="true"> <id name="Id" type="Int64" access="property"> <generator class="hilo"/> </id> <property name="Name" /> <list name="Screenshots" cascade="all-delete-orphan" lazy="false"> <key column="VideoId" /> <index column="SortOrder" /> <one-to-many class="Screenshot" /> </list> </class> </hibernate-mapping> When I try to save a Video with 10000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using: using (var session = CreateSession()) { Video video = new Video(); for (int i = 0; i < 10000; i++) { video.Screenshots.Add(new Screenshot() {Data = camera.TakeScreenshot(resolution)}); } session.SaveOrUpdate(video); }

    Read the article

  • Handling long-running large transactions with Perl DBI

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permissions to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes. How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure etc.)? Doing the transaction in one go doesn't seem like a good option because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B. The data from database A can be grouped into around 1000 groups using unique keys, with each key containing 1000s of rows. One way I thought I could do this is to write a script that does commits per group, meaning I've got to track which group has already been inserted into database B. The only way I can think of to track the progress of which groups have been processed or not is either in a log file or in a table in database B. A second way I thought could work is to dump all the necessary fields needed for loading the classes for manipulation and insertion into a flatfile, read the file to initialize the classes and insert into database B. This also means that I've got to do some logging, but it should narrow it down to the exact row in the flatfile if any error occurs. The script will look something like this: use strict; use warnings; use DBI; #connect to database A my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password, { RaiseError => 1, AutoCommit => 0 }); #statement to get data based on group unique key my $sth = $dbh->prepare($my_sql); my @groups; #I have a list of this already open my $fh, '>>', 'my_logfile' or die "can't open logfile $!"; eval { foreach my $g (@groups){ #subroutine to check if group has already been processed, either from log file or from database table next if is_processed($g); $sth->execute($g); my $data = $sth->fetchall_arrayref; #manipulate $data, then use it to load perl classes for insertion into database B #. #. #. } print $fh "$g\n"; }; if ($@){ $dbh->rollback; die "something wrong...rollback"; } So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping the ones that've been committed to database B and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out from one and inserting into another? The process doesn't need to be all in Perl, as long as I can reuse the perl classes for manipulating and inserting the data into the database.
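    Since the asker notes the process doesn't have to stay in Perl, here is a minimal sketch of the commit-per-group pattern with a progress table, written in plain JDBC only to illustrate the resume logic; the table, column, and method names (transfer_progress, group_key, source_table, transformAndInsert) are made up for the example.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.List;

        public class GroupedTransfer {
            // One transaction per group against database B, recorded in a progress table,
            // so a rerun after any failure skips the groups that already went through.
            public static void run(Connection dbA, Connection dbB, List<String> groups) throws SQLException {
                dbB.setAutoCommit(false);
                try (PreparedStatement done = dbB.prepareStatement(
                             "SELECT 1 FROM transfer_progress WHERE group_key = ?");
                     PreparedStatement mark = dbB.prepareStatement(
                             "INSERT INTO transfer_progress (group_key) VALUES (?)");
                     PreparedStatement fetch = dbA.prepareStatement(
                             "SELECT * FROM source_table WHERE group_key = ?")) {
                    for (String group : groups) {
                        done.setString(1, group);
                        try (ResultSet seen = done.executeQuery()) {
                            if (seen.next()) continue;            // already transferred, skip it
                        }
                        fetch.setString(1, group);
                        try (ResultSet rows = fetch.executeQuery()) {
                            transformAndInsert(rows, dbB);        // manipulation + inserts into B
                        }
                        mark.setString(1, group);
                        mark.executeUpdate();
                        dbB.commit();                             // the group becomes durable as one unit
                    }
                } catch (SQLException e) {
                    dbB.rollback();   // only the current group rolls back; committed groups stay
                    throw e;
                }
            }

            static void transformAndInsert(ResultSet rows, Connection dbB) throws SQLException {
                // reuse the existing transformation classes and insert into database B
            }
        }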

    Read the article

  • Compiler issues on VC++ 2008 Express: seemingly correct code throws errors

    - by Anthony Clever
    Hi there, I've been trying to get back into coding for a while, so I figured I'd start with some simple SDL, now, without the file i/o, this compiles fine, but when I throw in the stdio code, it starts throwing errors. This I'm not sure about, I don't see any problem with the code itself, however, like I said, I might as well be a newbie, and figured I'd come here to get someone with a little more experience with this type of thing to look at it. I guess my question boils down to: "Why doesn't this compile under Microsoft's Visual C++ 2008 Express?" I've attached the error log at the bottom of the code snippet. Thanks in advance for any help. #include "SDL/SDL.h" #include "stdio.h" int main(int argc, char *argv[]) { FILE *stderr; FILE *stdout; stderr = fopen("stderr", "wb"); stdout = fopen("stdout", "wb"); SDL_Init(SDL_INIT_EVERYTHING); fprintf(stdout, "SDL INITIALIZED SUCCESSFULLY\n"); SDL_Quit(); fprintf(stderr, "SDL QUIT.\n"); fclose(stderr); fclose(stdout); return 0; } /* 1>------ Build started: Project: opengl_crap, Configuration: Debug Win32 ------ 1>Compiling... 1>main.cpp 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2090: function returns array 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2528: '__iob_func' : pointer to reference is illegal 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2556: 'FILE ***__iob_func(void)' : overloaded function differs only by return type from 'FILE *__iob_func(void)' 1> c:\program files\microsoft visual studio 9.0\vc\include\stdio.h(132) : see declaration of '__iob_func' 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(7) : error C2090: function returns array 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(7) : error C2528: '__iob_func' : pointer to reference is illegal 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(9) : error C2440: '=' : cannot convert from 'FILE *' to 'FILE ***' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(10) : error C2440: '=' : cannot convert from 'FILE *' to 'FILE ***' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(13) : error C2664: 'fprintf' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(15) : error C2664: 'fprintf' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(17) : error C2664: 'fclose' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or 
function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(18) : error C2664: 'fclose' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>Build log was saved at "file://c:\Documents and Settings\Owner\My Documents\Visual Studio 2008\Projects\opengl_crap\opengl_crap\Debug\BuildLog.htm" 1>opengl_crap - 11 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== */

    Read the article

  • YouTube API upload - Incomplete multipart body error

    - by Blerim J
    Hello, I'm trying to upload videos in Youtube through HttpWebRequest. Everything seems to be fine when uploading following the example given in API documentation. I see that request is being formed correctly, with content and token sent but I receive "Incomplete multipart body" as response. Thanks Blerim public bool YouTubeUpload() { string newLine = "\r\n"; //token and url are retrieved from YouTube at runtime. string token = string.Empty; string url = string.Empty; // construct the command url url = url + "?nexturl=http://www.mywebsite.com/"; // get a unique string to use for the data boundary string boundary = Guid.NewGuid().ToString().Replace("-", string.Empty); foreach (string file in Request.Files) { HttpPostedFileBase hpf = Request.Files[file] as HttpPostedFileBase; if (hpf.ContentLength == 0) continue; // get info about the file and open it for reading Stream fs = hpf.InputStream; HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(url); webRequest.ContentType = "multipart/form-data; boundary=" + boundary; webRequest.Method = "POST"; webRequest.KeepAlive = true; webRequest.Credentials = System.Net.CredentialCache.DefaultCredentials; MemoryStream memoryStream = new MemoryStream(); StreamWriter writer = new StreamWriter(memoryStream); //token writer.Write("--" + boundary + newLine); writer.Write("Content-Disposition: form-data; name=\"{0}\"{1}{2}", "token", newLine, newLine); writer.Write(token); writer.Write(newLine); //Video writer.Write("--" + boundary + newLine); writer.Write("Content-Disposition: form-data; name=\"{0}\"; filename=\"{1}\"{2}", "File1", hpf.FileName, newLine); writer.Write("Content-Type: {0}" + newLine + newLine, hpf.ContentType); writer.Flush(); byte[] boundarybytes = System.Text.Encoding.ASCII.GetBytes(string.Format("--{0}--{1}", boundary, newLine)); webRequest.ContentLength = memoryStream.Length + fs.Length + boundarybytes.Length; Stream webStream = webRequest.GetRequestStream(); // write the form data to the web stream memoryStream.Position = 0; byte[] tempBuffer = new byte[memoryStream.Length]; memoryStream.Read(tempBuffer, 0, tempBuffer.Length); memoryStream.Close(); webStream.Write(tempBuffer, 0, tempBuffer.Length); // write the file to the stream int size; byte[] buf = new byte[1024 * 10]; do { size = fs.Read(buf, 0, buf.Length); if (size > 0) webStream.Write(buf, 0, size); } while (size > 0); // write the trailer to the stream webStream.Write(boundarybytes, 0, boundarybytes.Length); webStream.Close(); fs.Close(); //fails here. Error - Incomplete multipart body. WebResponse webResponse = webRequest.GetResponse(); } return true; }
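    As background for debugging responses like this, here is a small Java sketch (not the poster's code) of the multipart/form-data framing that the snippet above assembles by hand. The easy detail to miss is the CRLF that must follow the raw file bytes before the closing --boundary-- delimiter; the field names used here are illustrative only.

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class MultipartBody {
            // Builds a multipart/form-data body with one text field and one file part.
            static byte[] build(String boundary, String token, Path video, String mimeType) throws IOException {
                String crlf = "\r\n";
                ByteArrayOutputStream body = new ByteArrayOutputStream();
                String head = "--" + boundary + crlf
                        + "Content-Disposition: form-data; name=\"token\"" + crlf + crlf
                        + token + crlf
                        + "--" + boundary + crlf
                        + "Content-Disposition: form-data; name=\"file\"; filename=\""
                        + video.getFileName() + "\"" + crlf
                        + "Content-Type: " + mimeType + crlf + crlf;
                body.write(head.getBytes(StandardCharsets.US_ASCII));
                body.write(Files.readAllBytes(video));                      // raw file bytes
                // CRLF ends the file part, then the closing delimiter ends the whole body.
                body.write((crlf + "--" + boundary + "--" + crlf).getBytes(StandardCharsets.US_ASCII));
                return body.toByteArray();   // Content-Length is simply this array's length
            }
        }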

    Read the article

  • A view interface for large object/array dumps

    - by user685107
    I want to embed in a page a detailed structure report of my model objects, like print_r() or var_export() produce (right now I do this by running var_export() on get_object_vars()). In most cases, though, I actually want to see only some of the properties, and at the moment I have to use Ctrl+F and seek the variable I want instead of just staring at it right after the page completes loading. So I’m embedding buttons to show/hide large arrays etc., but thought: ‘What if the thing I’m building already exists?’ So does it? Update: What would your ideal interface look like? First of all, dumped models fit on the first screen. All the properties can be seen at first glance (there are not many of them, around 10 each, three models total, so it is possible). Small arrays can be shown unrolled too; let the array size that counts as ‘small’ be definable. Ideally, the user can see the values of the properties without any clicking, scrolling or typing. There must be some improvements to representing the values: say, if an array is empty, show array ‘My_big_array’ is empty, and if a boolean variable starting with is_, has_ or had_ has false as its value, show the variable (take is_available for example) as is_NOT_available in red, and if it has true as the value, show is_available in green, without any value shown. The same goes for defined constants. That would be ideal. I want to focus on this kind of switches. Krumo seems useful, but since it always collapses the variable regardless of how large it is, I cannot use it as is, but something similar might appear on github soon :) Second update starts here: Any programmer who sees is_available = false will know what it means, no need to do more. Bringing in color indication, I forgot about one thing: the ‘switches’, let’s call them that, may be important or not. So I have right now some of them that will show in green or red; this is for something global, like caching, which is shown as Caching is… ON with ‘ON’ written in green (and ‘OFF’ in red when disabled), while the words about what it is, i.e. ‘Caching is… ’, are written in black. And some are not so important; for example, when I haven’t defined REVEAL_TIES it shows REVEAL_TIES is… not set, with ‘not set’ written in gray, while the words describing what it is stay in black. And if it were set, the whole phrase would be in black, since there is nothing important: if this small utility for showing some undercover things is working, I will see some messages after it, and if it isn’t, the site will keep working independently of its state. Dividing switches into important and unimportant ones, with corresponding colors, should improve readability, especially for those users who are not programmers and just enabled debug mode because some guy from bugzilla said to do that; for them it would help to understand what is important and what is not.

    Read the article

  • Add Intellisense when using the urlrewriting.net config file

    - by Vizioz Limited
    I often use the URL re-writing engine that comes with Umbraco, which is from urlrewriting.net, and I have always found it very fiddly to edit the configuration file. I wish I had known it was possible to add Intellisense to Visual Studio, and I guessed that most people would also not realise this; after all, who reads the manual, right?! So, if you are someone who edits the urlrewriting.config without Intellisense, but would like to use it, this is how you do it :) 1) Download the URL rewriting source code files from urlrewriting.net 2) Unzip the source files and find the urlwritingnet.xsd file, put this file into your web project, or the directory where your urlredirect.config lives. 3) Open up the web project and then open your config file, and hey presto! You should find you now have Intellisense! So, the next question is, are there XSD files for the rest of the Umbraco config files, and more importantly for the Umbraco.xml file? If not, does anyone fancy creating them? I am sure Intellisense for all these files would be very helpful :)

    Read the article

  • Problem converting FBX file into XNB

    - by Dado
    I created a MonoGame Content Project to convert assets into XNB. For an FBX file without a texture there is no problem: the file is correctly converted, and when I load the XNB into my project everything is OK. The problem occurs when a texture map is associated with the FBX file: in this case both the FBX and PNG files are converted to XNB, but when I try to load these XNB files into my project the following problem occurs: "ContentLoadException: Could not load Models/maze1 asset as a non-content file!" Note: maze1 is the XNB file that was converted from FBX. How can I solve this problem? Thank you in advance.

    Read the article

  • From the Tips Box: Pin Any File to the Windows 7 Taskbar

    - by Jason Fitzpatrick
    Every week we dip into the tip box and share the tips you send in. This week we’re highlighting a great tip and the accompanying tutorial video that shows you how to pin any file to the Windows 7 taskbar. Robert Jasinski writes in with a clever way to pin any file you want to the taskbar. By default if you drag a text document to the taskbar it will pin it to the Notepad executable—the same thing happens with any other file that has an association with an executable. What if you want to pin that specific text file to the taskbar and not to the executable (or any other file for that matter)? Robert shares his method:

    Read the article

  • SLK opens SCORM package as a ZIP file

    - by Cherie Riesberg
    Symptom: After installing SharePoint Learning Kit successfully, (http://www.codeplex.com/SLK), everything works except that the SCORM package (a ZIP extension) is opening as a ZIP file instead of a course. You get the normal ZIP message "Do you want to open or save this file?" Problem: The package is zipped at the upper folder level and does not create a manifest that allows SharePoint to recognize it as a SCORM file instead of a ZIP file. Solution: Add the contents of the course to the ZIP, not the outer (uppermost) folder.  This creates a ZIP file that SharePoint can recognize as a SCORM package.

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by Singletony
    Hi guys. We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI, i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the DBs. We are currently using JDBC, and just performing queries using a ResultSet. As more and more data is being created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs. We need to support 'chunk' manipulation and not an entire DB at once (e.g. LIMIT in JDBC has very poor performance). We do not need to be constantly connected, since we are just pulling results and creating new tables of our own. We want to understand JDBC alternatives, with respect to advantages and disadvantages. Whether you think JDBC is the way to go or not, what are the best practices to go by depending on context (e.g. for large DBs queried in chunks)? If my question is not clear, I will gladly elaborate! THANK YOU SO MUCH!
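    One standard JDBC answer to the 'chunk' requirement is a forward-only, read-only statement with a fetch-size hint, so rows stream from the server in blocks instead of being materialized in memory. A minimal sketch follows (table and column names are made up; exact streaming behaviour is driver-dependent, e.g. MySQL needs a fetch size of Integer.MIN_VALUE and PostgreSQL needs autocommit off):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        public class ChunkedScan {
            static final int CHUNK = 1_000;

            public static void scan(Connection conn) throws SQLException {
                conn.setAutoCommit(false);   // lets PostgreSQL use a server-side cursor
                try (PreparedStatement ps = conn.prepareStatement(
                             "SELECT id, payload FROM big_table",
                             ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                    ps.setFetchSize(CHUNK);  // hint: pull rows from the server in blocks
                    try (ResultSet rs = ps.executeQuery()) {
                        List<String> chunk = new ArrayList<>(CHUNK);
                        while (rs.next()) {
                            chunk.add(rs.getLong("id") + ":" + rs.getString("payload"));
                            if (chunk.size() == CHUNK) {
                                processChunk(chunk);   // build intermediate tables from this slice
                                chunk.clear();
                            }
                        }
                        if (!chunk.isEmpty()) {
                            processChunk(chunk);
                        }
                    }
                }
            }

            static void processChunk(List<String> rows) {
                // analysis / writes to the intermediate DB go here
            }
        }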

    Read the article

  • Only 192.168.0.3 can request, but anyone can request /public/file.html

    - by mattalexx
    I have the following virtual host on my development server: <VirtualHost *:80> ServerName example.com DocumentRoot /srv/web/example.com/pub <Directory /srv/web/example.com/pub> Order Deny,Allow Deny from all Allow from 192.168.0.3 </Directory> </VirtualHost> The Allow from 192.168.0.3 part is to only allow requests from my workstation machine. I want to tweak this to allow anyone to request a certain URL: http://example.com/public/file.html How do I change this to allow /public/file.html requests to get through from anyone? Note: /public/file.html doesn't actually exist as a file on the server. I redirect all incoming requests through a single index file using mod_rewrite.

    Read the article

  • OS X Mavericks Won't Connect To Ubuntu Server (Netatalk, Avahi)

    - by Andy Ibanez
    I'm really sorry for posting this. I know it may have been asked a thousand times. I have googled like crazy and I'm on the verge of desperation here. Basically, I followed this guide: http://motionsoundfx.com/2012/05/ubuntu-vnc-afp-macosx/ To create a small personal file server. When I installed it, I was able to connect to it just fine, I connected with my Ubuntu username and password and I was able to see the home directory. But later, I had to restart the file server so I could prepare a couple of other hard drives to put in. When the server restarted, I tried to connect to it, but I got an error message on my Mac: "The version of the server you're trying to connect to is not supported. Please contact your system administrator to solve this problem." Again, I have googled like crazy for this, and everybody says it is a problem with OS X Lion and up (assuming it affects Mavericks too). I have tried all the fixes mentioned for Lion and Mountain Lion and I haven't had any luck. That's the reason I'm posting this here: I suspect the problem is with my Ubuntu server. This happened after I restarted the server. Before restarting the server, I just put in my credentials and saw my home directory. Something when I restarted the server must have been messed up. I have found some other solutions, including to use "SHX2" in the conf file, but it hasn't worked for me. I ask for your help to solve this issue. Also please understand I'm completely illiterate when it comes to Linux. This is a nice chance to me to learn the OS so please give me detailed steps to do things if you deem it necessary. Thank you! I'm using Ubuntu Server 13.10 (the latest one as of today).

    Read the article

  • Changing the sequencing strategy for File/Ftp Adapter

    - by [email protected]
    The File/Ftp Adapter allows the user to configure the outbound write to use a sequence number. For example, if I choose address-data_%SEQ%.txt as the FileNamingConvention, then all my files would be generated as address-data_1.txt, address-data_2.txt, ...and so on. But where does this sequence number come from? The answer lies in the "control directory" for the particular adapter project (or scenario). In general, for every project that uses the File or Ftp Adapter, a unique directory is created for bookkeeping purposes. And since this control directory is required to be unique, the adapter uses a digest to make sure that no two control directories are the same. For example, for my FlatStructure sample, the control information for my project would go under FMW_HOME/user_projects/domains/soainfra/fileftp/controlFiles/[DIGEST]/outbound where the value of DIGEST would differ from one project to another. If you look under this directory, you will see a file control_ob.properties, and this is where the sequence number is maintained. Please note that the sequence number is maintained in binary form, and hence you might need a hex editor to view its content. You will also see another zero-byte file, SEQ_nnn, but ignore that for now; we'll get to it some other time. For now, please remember that this extra file is maintained as a backup. One of the challenges faced by the adapter runtime is to guard all writes to the control files so no two threads inadvertently try to update them at the same time. And it does so with the help of a "Mutex". For now, please remember that the mutex comes in different flavors: in-memory, DB-based, Coherence-based, and user-defined. Again, we will talk about these mutexes some other time. Please note that there might be scenarios, particularly under heavy load, where the mutex might become a bottleneck. The adapter, however, allows you to change the configuration so that the adapter sequence value comes from a database sequence or a stored procedure, and in such situations the mutex is actually bypassed, thereby resulting in better throughput. In later releases, the behavior of the adapter would be defaulted to use a db-sequence. The simplest way to achieve this is by switching the JNDI for the outbound JCA file to use "eis/HAFileAdapter". But what does this do? Internally, the adapter runtime creates a sequence on the Oracle database. For example, if you do a "select * from user_sequences" in your soa-infra schema, you will see a new sequence being created with a name like SEQ_<GUID>__, where the GUID will differ from one project to another. However, if you want to use your own sequence, then it would require you to add a new property to your JCA file called SequenceName. Please note that you will need to create this sequence on your soainfra schema beforehand. But what if we use DB2 or MSSQL Server as the dehydration store? DB2 supports sequences natively but MSSQL Server does not. So the adapter runtime uses a natively generated sequence for DB2, but for MSSQL Server the adapter relies on a stored procedure that ships with the product. If you wish to achieve the same result for SOA Suite running DB2 as the dehydration store, simply change your connection factory JNDI name in the JCA file to eis/HAFileAdapterDB2, and for MSSQL please use eis/HAFileAdapterMSSQL. And if you wish to use a stored procedure other than the one that ships with the product, you will need to rely on binding properties to override the adapter behavior. In particular, you will need to instruct the adapter that you wish to use a stored procedure. Please note that if you're using the File/Ftp Adapter in Append mode, then the adapter runtime degrades the mutex to use pessimistic locks, as we don't want writers from different nodes to append to the same file at the same time.

    Read the article

  • Using a :default for file names on include templates in SMARTY 3 [closed]

    - by Yohan Leafheart
    Hello everyone, although I don't think my original question was as good as it could be, let me try to explain better here. I have a site using SMARTY 3 as the template system. I have a template structure similar to the one below: /templates/place1/inner_a.tpl /templates/place1/inner_b.tpl /templates/place2/inner_b.tpl /templates/place2/inner_c.tpl /templates/default/inner_a.tpl /templates/default/inner_b.tpl /templates/default/inner_c.tpl These are included in the parent template using {include file="{$temp_folder}/{$inner_template}"} So far so good. What I want is a fallback: in case the file "{$temp_folder}/{$inner_template}" does not exist, it should use the equivalent file at "default/{$inner_template}". I.e. if I do {include file="place1/inner_c.tpl"}, since that file does not exist, it in fact includes "default/inner_c.tpl". Is this possible?

    Read the article

  • How to attach a WAR file to an email from Jenkins

    - by birdy
    We have a case where a developer needs to access the last successfully built WAR file from Jenkins. However, they can't access the Jenkins server. I'd like to configure Jenkins such that on every successful build, Jenkins sends the WAR file to this user. I've installed the ext-email plugin and it seems to be working fine: emails are being received along with the build.log. However, the WAR file isn't being received. The WAR file lives at this path on the server: /var/lib/jenkins/workspace/Ourproject/dist/our.war So I configured that path under Post-build Actions. The problem is that emails are sent but the WAR file isn't being attached. Do I need to do something else?

    Read the article

  • Removal of libsound2 file causes graphics loss

    - by Sajid Ahmad
    I was trying to install Skype on Ubuntu 12.10 desktop, but it was giving an error related to the libsound2:i386 file. To overcome this problem I removed the libsound2 file, thinking I would install it later, but that removed all the graphics from my system. After removing the file, the system started giving an error that it is running in low-graphics mode. I tried to install the libsound2 file again but couldn't. After that I upgraded my Ubuntu release using the do-release-upgrade command, thinking it would install the missing file, but there are still no graphics on the system. I am using a Dell Inspiron 15. Please help me figure out how I can get the system's graphics back.

    Read the article

  • How to Search File Contents in Windows Server 2008 R2

    - by ybbest
    By default, Windows Search only searches by file name. To configure Windows Search to search file contents, you need to do the following: make sure the Windows Search Services feature is activated (check this article for details). Then configure Windows Search in File Explorer: press the Alt key –> go to Tools –> Folder Options –> Search tab –> select “Always search file names and contents (this might take several minutes)” –> press OK. Now your searches will work on file content like in the good old days of XP. Another way to search file contents, without changing the search configuration, is to type “contents:” followed by the word in the Windows Explorer search box; this searches text files. This is a search filter which seems to be undocumented.

    Read the article

  • Default file permissions for the PHP user www-data

    - by John Isaacks
    I have PHP installed on my Ubuntu machine. The web root is /var/www and I set the permissions for this folder like so: sudo chown -R ftpuser:www-data /var/www ftpuser is the user I set up so I can FTP to /var/www from another machine on the network. www-data is the user PHP runs as; I double-checked using whoami from PHP. Whenever I upload a new file to the machine over FTP, the group has no permissions on the file. So when I try to access it in my browser via machine-name/new-file.php I am told permission denied, and I have to go and chmod the new file. I am wondering if there is a way I can make the www-data user/group have access permissions to new files by default, so I don't have to chmod every new file?

    Read the article
