Search Results

Search found 6492 results on 260 pages for 'trigger io'.


  • Trigger ZFS dedup one-off scan/rededup

    - by Jake Wharton
    I have a ZFS filesystem which has been running for some time and I recently had the opportunity to upgrade it (finally!) to the latest ZFS version. Our data doesn't scream dedup but I firmly believe based on small tests that we could gain anywhere from 5-10% of our space back for free by utilizing it. I have enabled dedup on the filesystem and new files are slowly being dedupified but the majority (95%+) of our data already exists on the filesystem. Short of moving the data off-pool and then recopying it back, is there any way to trigger a dedup scan of existing data? It doesn't have to be asynchronous or live. (And FYI there isn't enough room on the pool to copy the entire filesystem to another and then just switch the mounts.)
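
    One hedged approach, since ZFS has no built-in rescan: dedup is applied only at write time, so existing blocks have to be rewritten, for example one dataset at a time with send/receive. The dataset names below are placeholders, and this needs free space for the largest single dataset rather than the whole pool:

        zfs snapshot tank/data@rededup
        zfs send tank/data@rededup | zfs receive tank/data-new
        # verify tank/data-new first, then swap:
        zfs destroy -r tank/data
        zfs rename tank/data-new tank/data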

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen by this output of ls -l. root@ad57rs0b# ls -l 15000PN5AIA3I6_B total 232 drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes On ext3, can these large directory sizes slow down performance? Thanks. Aaron
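
    For context: ext3 never shrinks a directory inode once it has grown, so the 233472-byte directory keeps its size even after all files are deleted, and every lookup has to walk it. A hedged check-and-reset sketch (the device path is a placeholder; the directory name is from the post):

        tune2fs -l /dev/sda1 | grep dir_index   # hashed b-tree lookups help a lot if this feature is on
        mv barcodes barcodes.old && mkdir barcodes   # a freshly created directory starts at 4096 again
        rm -rf barcodes.old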

    Read the article

  • How to tell if linux disk IO is causing excessive (> 1 second) application stalls

    - by noahz
    I have a Java application performing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on an ext3 SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something related to ext3/vxfs (Veritas File System) functionality (and/or how it interacts with the OS) is the culprit. What steps can I take to confirm or refute this theory? I am aware of iostat and /proc/diskstats as starting points. (Revised title to de-emphasize journaling and emphasize "stalls".) I have done some googling and found at least one article that seems to describe behavior like I am observing: Solving the ext3 latency problem. Additional information: Red Hat Enterprise Linux Server release 5.3 (Tikanga) Kernel: 2.6.18-194.32.1.el5 Primary application disk is fiber-channel SAN: lspci | grep -i fibre 14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03) Mount info: type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0 cat /sys/block/VxVM123456/queue/scheduler noop anticipatory [deadline] cfq
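
    A hedged sketch of confirm/refute steps with standard tools (the Java pid is a placeholder): watch device latency during a stall, and catch what the JVM threads are actually blocked on:

        iostat -x 2                    # rising await/%util on the VxVM device during a pause implicates the device
        jstack <java-pid>              # stalled writer threads show up inside FileOutputStream.write
        echo t > /proc/sysrq-trigger   # dumps all task states to dmesg (needs kernel.sysrq enabled); stuck tasks are in state D
        dmesg | tail -n 200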

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine: 3 hard drives using RAID 5, strip element size: 1M, read policy: Adaptive read ahead, write policy: Write Through. /boot 200 MB ext2, / 15 GB ext3, SWAP 10GB, LVM rest (~500GB). I installed postgresql, created a big database (over 1GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then, I installed XEN, created a domU with another Debian distro. On this OS, I also installed postgresql, with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU. uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm, and ran hdparm -Tt on all my partitions on both systems, and I get similar results. Now, I am stuck; I don't know what is different, and what could be the cause of such a big difference. Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql: 9.1.9 (both dom0 and domU); xen-linux-system: 3.2.0; xen-hypervisor: 4.1. Thanks. EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU: root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s And here is dom0: root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s What can be the cause of this caching system? And how can we "fix" it? Can we apply it to dom0? EDIT 2: I switched my virtual disk type. To do so I followed this article. I did a dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M Then in /etc/xen/test1.cfg, I changed the disk parameter to use file: instead of phy:. It should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres)
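
    The dom0 read figure (3.4 GB/s) is a page-cache number, not a disk number, which suggests the two systems are caching differently rather than performing differently. A hedged way to compare them with the cache out of the picture (standard GNU dd flags):

        dd if=/dev/zero of=/root/dd bs=1M count=2000 oflag=direct   # uncached write
        dd if=/root/dd of=/dev/null bs=1M iflag=direct              # uncached read
        sync && echo 3 > /proc/sys/vm/drop_caches                   # or drop caches between ordinary runs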

    Read the article

  • Sudden slow read & write speed on all IO

    - by user23392
    I have a custom-built rig that has 2 storage drives. For the OS: Western Digital 1.0TB HARD DR 64MB. For other stuff: Corsair Performance 3 128GB (SSD) [expected read speed: 400 MB/s]. The system was incredibly fast for a couple of months, then one day I was playing a game and it started to get buggy (some sounds and objects disappearing). I stopped the game and the system seemed to be unstable, so I had to shut it down. The next morning I couldn't start it up; it reported a corrupt device. I formatted both disks and installed a fresh copy of Windows. All I can say is that since that day the system has never been like before: it takes 10 minutes to boot up (the icons and desktop slowly appear), but once it's done the slowness isn't as noticeable. Here's my benchmark on the HDD (read speed / write speed) and the SSD (benchmark screenshots not included). Anyone know what could be the issue?
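
    Given the corruption event, one hedged first step is to read both drives' SMART counters before anything else (smartmontools; the device paths are placeholders):

        smartctl -a /dev/sda   # WD: look at Reallocated_Sector_Ct and Current_Pending_Sector
        smartctl -a /dev/sdb   # SSD: look at CRC error and wear indicators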

    Read the article

  • How to trigger chef-client on all nodes from my workstation

    - by divyanshm
    I have 5 nodes and all of them have one setup cookbook in common. Now I would like to add another task in this common cookbook that would configure SQL Server for me on all the nodes. Is there a way/command to manually trigger this change across all clients right away? I use Azure VMs. All the nodes are Windows Server 2012 machines. I could do a knife winrm machine-name chef-client -m -x username -P password on all the machines, but I'm sure there should be a better way of doing this. I'm new to using chef, so I might be missing a very basic command here.
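
    knife winrm also accepts a Chef search query in place of a manual machine list, so one hedged way to hit every node in a single command (credentials are placeholders):

        knife winrm 'name:*' 'chef-client' -x username -P password
        # or narrow the query, e.g. 'role:sqlserver' or 'platform:windows'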

    Read the article

  • Oracle: How can I know a table is getting populated?

    - by Tami
    Hi, I'm in charge of an Oracle DB for which we don't have any documentation (at all). At the moment I need to know HOW a table is getting populated. Ideally, I'd like to know from which procedure, trigger, or whatever else this table gets its data. Any idea would be much appreciated. Thanks.
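
    A hedged starting point with the standard dictionary views (replace MY_TABLE, and check the owner columns, as needed): all_dependencies lists stored code that references the table, and all_triggers lists triggers defined on it:

        SELECT owner, name, type
          FROM all_dependencies
         WHERE referenced_name = 'MY_TABLE'
           AND referenced_type = 'TABLE';

        SELECT trigger_name, triggering_event, table_name
          FROM all_triggers
         WHERE table_name = 'MY_TABLE';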

    Read the article

  • Should one replace the usage of addJSONData in jqGrid with setGridParam() and trigger('reloadGrid')?

    - by Oleg
    Hi everybody who uses jqGrid! I am new on stackoverflow.com and it seems to me that a lot of people who use stackoverflow.com are not only persons who have a problem which must be quickly solved. A lot of people read stackoverflow.com to look at well-known things from the other side. Sometimes perhaps the reason is self-training (to stay in good form) by solving other people's problems. For all these guys, who don't want only to solve their own problem, is my question. I recently wrote an answer to the question "jqGrid display default 'loading' message when updating a table / on custom update". While writing the answer I thought: why does he use the addJSONData() function to refresh the data in the grid instead of changing the URL with setGridParam() and refreshing the jqGrid data with trigger('reloadGrid')? At the beginning I wanted to recommend using 'reloadGrid', but after thinking about this I understood that I am not quite sure what the best way is. At least I can't explain in two sentences why I prefer the second way. So I decided that it could be an interesting subject for a discussion. So to be exact: we have a typical situation. We have a web page with at least one jqGrid and some other controls like combo-boxes (selects), checkboxes etc. which give the user the possibility to change the scope of information displayed in a jqGrid. Typically we define some event handler like jQuery("#selector").change(myRefresh).keyup(myKeyRefresh) and we need to reload the jqGrid content based on the user's choice. After reading and analyzing the information from the additional user input we can refresh the jqGrid content in at least two ways: 1) Make a call of $.ajax() manually and then, inside the success or complete handler of the $.ajax call, call jQuery.parseJSON() (or eval) and then call the addJSONData function of jqGrid. I found a lot of examples on stackoverflow.com which use addJSONData. 2) Update the url of jqGrid based on the user's input, reset the current page number to 1 and optionally change the caption of the grid. All of this can be done with the setGridParam(), and optionally setCaption(), jqGrid methods. At the end one calls the trigger('reloadGrid') method of the grid. To construct the url, by the way, I mostly use the jQuery.param function to be sure that all url parameters are packed correctly with respect to encodeURIComponent. I want us to discuss the advantages and disadvantages of both ways. I currently use the second way, so I start with the advantages of this one. One could say to me: I call an existing web service, convert the received data to the jqGrid format and call addJSONData. This is the reason why I use the addJSONData method! OK, I choose another way. jqGrid can make the call to the web service directly and fill the results into the grid. There are a lot of jqGrid options which allow you to customize this process. First of all, one can delete or rename any standard parameter sent to the server with the prmNames option of jqGrid, or add any additional parameters with the postData option (see http://www.trirand.com/jqgridwiki/doku.php?id=wiki:options). One can modify all constructed parameters immediately before jqGrid makes the corresponding $.ajax request by defining a serializeGridData() function (one more option of jqGrid). More than that, one can change every $.ajax parameter by setting the ajaxGridOptions option of jqGrid. I use ajaxGridOptions: {contentType: "application/json"}, for example, as a general setting of $.jgrid.defaults. The ajaxGridOptions option is very powerful.
    With the ajaxGridOptions option one can redefine any parameter of the $.ajax request sent by jqGrid, like the error, complete and beforeSend events. I see it as potentially interesting to define the dataFilter event to be able to make any modification of the row data returned from the server. One more argument for the trigger('reloadGrid') way is the blocking of jqGrid during ajax request processing. Mostly I use the parameter loadui: 'block' to block jqGrid during the JSON request to the server. With the jQuery blockUI plugin http://malsup.com/jquery/block/ one can block more parts of the web page than just the grid. To do this one can call jQuery('#main').block({ message: '<h1>Loading data from the server...</h1>' }); before calling the trigger('reloadGrid') method and jQuery('#main').unblock() inside the loadComplete and loadError functions. The loadui option could be set to 'disable' in this case. So I don't see why the addJSONData() function should be used. Can somebody who uses the addJSONData() function explain me the advantages of its usage? Should one replace the usage of addJSONData in jqGrid with setGridParam() and trigger('reloadGrid')? I am open to the discussion.
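
    For reference, a minimal sketch of the second way as described above (the grid id, url and parameter name are placeholders):

        jQuery("#selector").change(function () {
            var newUrl = "getdata.php?" + jQuery.param({ filter: jQuery(this).val() });
            jQuery("#grid").jqGrid("setGridParam", { url: newUrl, page: 1 })
                           .trigger("reloadGrid");
        });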

    Read the article

  • How to trigger an action from a NSTableCellView in view based NSTableView when using bindings

    - by user1075752
    I'm facing a problem with a view-based NSTableView running on 10.8 (target is 10.7, but I think this is not relevant). I'm using an NSTableView, and I get content values for my custom NSTableCellView through bindings. I use the objectValue of the NSTableCellView to get my data. I added a button to my cell, and I'd like it to trigger some action when clicked. So far I have only been able to trigger an action within the custom NSTableCellView's subclass. I can get the row that was clicked like this, using the chain: NSButton *myButton = (NSButton*)sender; NSTableView *myView = (NSTableView*)myButton.superview.superview.superview; NSInteger rowClicked = [myView rowForView:myButton.superview]; From there I don't know how to reach my App Delegate or controller where the action is defined. As I am using Cocoa bindings, I do not have a delegate on the NSTableView that I could use to trigger my action. Do you have any idea how I could talk back to the controller? Many thanks in advance!
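
    One hedged approach: wire the button's target/action to the controller (e.g. in the XIB) instead of the cell subclass, and recover the row with NSTableView's rowForView: so the superview chain isn't needed. The outlet names below are placeholders:

        - (IBAction)cellButtonClicked:(id)sender {
            NSInteger row = [self.tableView rowForView:sender];
            if (row != -1) {
                id item = [[self.arrayController arrangedObjects] objectAtIndex:row];
                // act on the model object bound to this row
            }
        }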

    Read the article

  • How to test IO code in JUnit?

    - by add
    I want to test two services: 1) a service which builds a file name, 2) a service which writes some data into the file provided by the 1st service. In the first I'm building some complex file structure (just for example {user}/{date}/{time}/{generatedId}.bin). In the second I'm writing data to the file passed by the first service (the 1st service calls the 2nd service). How can I test both services using mocks without making any real IO interactions? Just for example: 1st service: public class DefaultLogService implements LogService { public void log(SomeComplexData data) { serializer.write(new FileOutputStream(buildComplexFileStructure()), data); or serializer.write(buildComplexFileStructure(), data); or serializer.write(new GenericInputEntity(buildComplexFileStructure()), data); } private ComplexDataSerializer serializer; // mocked in tests } 2nd service: public class DefaultComplexDataSerializer implements ComplexDataSerializer { void write(OutputStream stream, SomeComplexData data) {...} or void write(File file, SomeComplexData data) {...} or void write(GenericInputEntity entity, SomeComplexData data) {...} } In the first case I need to pass a FileOutputStream which will create a file (i.e. I can't test the 1st service). In the second case I need to pass a File. What can I do in the 2nd service test if I need to test data which will be written to the specified file? (I can't test the 2nd service.) In the third case I think I need some generic IO object which will wrap File. Maybe there is some ready-to-use solution for this purpose?
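
    A hedged sketch of one split that keeps real IO out of both tests (JUnit 4 + Mockito; the constructor injection of the serializer is an assumption, it is not in the code above): test the serializer against an in-memory stream, and test the service against a mocked serializer:

        // 2nd service: no file needed, assert on the bytes it produced
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new DefaultComplexDataSerializer().write(out, data);
        assertTrue(out.toByteArray().length > 0);   // replace with a real expectation

        // 1st service: verify only the collaboration, not the disk
        ComplexDataSerializer serializer = mock(ComplexDataSerializer.class);
        new DefaultLogService(serializer).log(data);   // assumed constructor injection
        verify(serializer).write(any(OutputStream.class), same(data));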

    Read the article

  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the Linq-to-SQL: internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode) { ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord() { ProcessCode = processCode, SubId = workflowCreated.SubId, EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead. }; this.Database.Add<ProcessLoggingRecord>(processLoggingRecord); } This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null. And here is the trigger: ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger] ON [Master].[ProcessLogging] INSTEAD OF INSERT AS BEGIN SET NOCOUNT ON; SET IDENTITY_INSERT [Master].[ProcessLogging] ON; INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser) SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser FROM inserted SET IDENTITY_INSERT [Master].[ProcessLogging] OFF; END Without getting into all of the variations I've tried, this last attempt produces this error: InvalidOperationException Member AutoSync failure. For members to be AutoSynced after insert, the type must either have an auto-generated identity, or a key that is not modified by the database after insert. I could remove EventTime from my entity, but I don't want to do that. If it was gone though, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs? Note: I do not want to use C#'s DateTime.Now for two reasons: 1. One of these inserts is generated by SQL Server itself (from another stored procedure) 2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
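
    If nothing else needs the INSTEAD OF trigger, a hedged alternative is to drop it and let a DEFAULT supply the time, then mark EventTime as database-generated (IsDbGenerated = true, AutoSync = OnInsert) in the LINQ to SQL column mapping so the AutoSync error goes away:

        ALTER TABLE [Master].[ProcessLogging]
            ADD CONSTRAINT DF_ProcessLogging_EventTime DEFAULT GETDATE() FOR EventTime;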

    Read the article

  • What fails in this row level trigger?

    - by newba
    I have this trigger: create or replace trigger t_calctotal after insert or update on item_fornecimento REFERENCING NEW AS NEW OLD AS OLD for each row begin if inserting then dbms_output.put_line(' On Insert'); update fornecimento f set f.total_enc_fornec = f.total_enc_fornec + :NEW.prec_total_if where f.id_fornecimento = :NEW.id_fornecimento; else dbms_output.put_line(' On Update'); update fornecimento f set f.total_enc_fornec = f.total_enc_fornec - :OLD.prec_total_if + :NEW.prec_total_if where f.id_fornecimento = :NEW.id_fornecimento; end if; end; Basically I want to refresh the total value of an order (fornecimento) by summing all the items in item_fornecimento. I have to treat this differently depending on whether it's an insert or an update. The trigger compiles and even worked once, but only once. I've inserted and updated prec_total_if in item_fornecimento in SQL Developer, but the order's (fornecimento) total still doesn't change :(. If it's important: f.total_enc_fornec is null until it's replaced by a value inserted by this trigger; the trigger prints its output, but it seems to fail at updating.
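
    One likely culprit, given that total_enc_fornec starts out null: NULL + anything is NULL in SQL, so both UPDATE branches quietly write NULL back. A hedged fix is to wrap the column in NVL in both branches:

        update fornecimento f
           set f.total_enc_fornec = nvl(f.total_enc_fornec, 0) + :NEW.prec_total_if
         where f.id_fornecimento = :NEW.id_fornecimento;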

    Read the article

  • Java Process.waitFor() and IO streams

    - by lynks
    I have the following code: String[] cmd = { "bash", "-c", "~/path/to/script.sh" }; Process p = Runtime.getRuntime().exec(cmd); PipeThread a = new PipeThread(p.getInputStream(), System.out); PipeThread b = new PipeThread(p.getErrorStream(), System.err); p.waitFor(); a.die(); b.die(); The PipeThread class is quite simple so I will include it in full: public class PipeThread implements Runnable { private BufferedInputStream in; private BufferedOutputStream out; public Thread thread; private boolean die = false; public PipeThread(InputStream i, OutputStream o) { in = new BufferedInputStream(i); out = new BufferedOutputStream(o); thread = new Thread(this); thread.start(); } public void die() { die = true; } public void run() { try { byte[] b = new byte[1024]; while(!die) { int x = in.read(b, 0, 1024); if(x > 0) out.write(b, 0, x); else die(); out.flush(); } } catch(Exception e) { e.printStackTrace(); } try { in.close(); out.close(); } catch(Exception e) { } } } My problem is this: p.waitFor() blocks endlessly, even after the subprocess has terminated. If I do not create the pair of PipeThread instances, then p.waitFor() works perfectly. What is it about the piping of IO streams that is causing p.waitFor() to continue blocking? I'm confused, as I thought the IO streams would be passive, unable to keep a process alive or to make Java think the process is still alive.
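
    Two hedged observations: the die flag is written from one thread and read from another, so it should at least be volatile, and if the goal is only to pass the child's output straight through, Java 7's ProcessBuilder can do the piping without helper threads at all:

        // ProcessBuilder.inheritIO() is real Java 7 API
        ProcessBuilder pb = new ProcessBuilder("bash", "-c", "~/path/to/script.sh");
        pb.inheritIO();            // child writes directly to this JVM's stdout/stderr
        Process p = pb.start();
        int exit = p.waitFor();    // no PipeThreads to keep alive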

    Read the article

  • jQuery: how to determine if an event was triggered with .trigger() or by a native event

    - by Tony_M
    I have a selectbox: <select onchange="javascript:changePropertyDropdown('3','0','0','1',this.value,'80','50'); hideCart()" size="1" class="inputbox" id="property_id_prd_3_0_1" name="property_id_prd_3_0_1[]"> <option selected="selected" value="0">Select an option</option> <option value="1">38</option> <option value="2">40</option> <option value="3">42</option> <option value="4">43</option> </select> and a button which triggers the change event for the selectbox: $('div.attribute_wrapper select').bind('cascadeSelect',function(e, pAttr){ $(this).val(pAttr); }); Call it like this (prodAttr comes via ajax): $('div.productImgGallery img').click(function(){ $('div.attribute_wrapper select').trigger('change'); }); $('div.attribute_wrapper select').change(function(){ $(this).trigger('cascadeSelect',prodAttr); }); When I call it like this, the hideCart() function fires too. I need to call changePropertyDropdown() only with .trigger(), and changePropertyDropdown() + hideCart() on a native change event. How can I do this?
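
    One hedged way to tell the two apart inside the handler: events fired via .trigger() carry no originalEvent, while real user changes do:

        $('div.attribute_wrapper select').change(function (e) {
            changePropertyDropdown(/* ... */);   // runs in both cases
            if (e.originalEvent) {               // undefined when fired via .trigger('change')
                hideCart();
            }
        });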

    Read the article

  • Zabbix Trigger for SELinux (type=AVC) Errors

    - by Kevin Soviero
    I would like to create a trigger in Zabbix to alert me anytime a type=AVC error appears in a CentOS 6 server's /var/log/audit/audit.log file. I've already tried creating a basic log scrape, e.g.: log[/var/log/audit/audit.log,type=AVC,"UTF-8",100] However, it does not work. I believe this is because /var/log/audit/audit.log and its parent folder use the following permissions: drwxr-x---. 2 root root 4096 Apr 20 04:29 . drwxr-xr-x. 13 root root 4096 Apr 14 12:07 .. -rw-------. 1 root root 5948185 Apr 20 15:27 audit.log -r--------. 1 root root 6291566 Apr 20 04:29 audit.log.1 -r--------. 1 root root 6291704 Apr 19 16:56 audit.log.2 -r--------. 1 root root 6291499 Apr 19 05:22 audit.log.3 -r--------. 1 root root 6291552 Apr 18 17:48 audit.log.4 I would prefer not to change the permissions for security reasons. Has anyone done log monitoring of /var/log/audit/audit.log using Zabbix? And if so, how?
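
    A hedged middle ground that leaves the base permissions alone: grant only the zabbix user read access with POSIX ACLs. Keep in mind auditd recreates audit.log on rotation, so the file ACL may need reapplying (the log_group option in /etc/audit/auditd.conf is another lever for this):

        setfacl -m u:zabbix:x /var/log/audit            # traverse the directory
        setfacl -m u:zabbix:r /var/log/audit/audit.log  # read the current log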

    Read the article

  • HttpURLConnection: connection.getInputStream() throws java.io.FileNotFoundException

    - by user3643283
    I created a method upload2() to upload a file to the server, splitting the file into packets (5 MB each). The upload itself works (100%), but when I call getInputStream, I get a FileNotFoundException. I think the problem is that, in the loop, I make a new HttpURLConnection in order to set setRequestProperty. Here's my code: @SuppressLint("NewApi") public int upload2(URL url, String filePath, OnProgressUpdate progressCallBack, AtomicInteger cancelHandle) throws IOException { HttpURLConnection connection = null; InputStream fileStream = null; OutputStream out = null; InputStream in = null; HttpResponse response = new HttpResponse(); Log.e("Upload_Url_Util", url.getFile()); Log.e("Upload_FilePath_Util", filePath); long total = 0; try { // Write the request. // Read from filePath and upload to server (url) byte[] buf = new byte[1024]; fileStream = new FileInputStream(filePath); long lenghtOfFile = (new java.io.File(filePath)).length(); Log.e("LENGHT_Of_File", lenghtOfFile + ""); int totalPacket = 5 * 1024 * 1024; // 5 MB int totalChunk = (int) ((lenghtOfFile + (totalPacket - 1)) / totalPacket); String headerValue = ""; String contentLenght = ""; for (int i = 0; i < totalChunk; i++) { long from = i * totalPacket; long to = 0; if ((from + totalPacket) > lenghtOfFile) { to = lenghtOfFile; } else { to = (totalPacket * (i + 1)); } to = to - 1; headerValue = "bytes " + from + "-" + to + "/" + lenghtOfFile; contentLenght = "Content-Length:" + (to - from + 1); Log.e("Conten_LENGHT", contentLenght); connection = client.open(url); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Range", headerValue); connection.setRequestProperty("Content-Length", Long.toString(to - from + 1)); out = connection.getOutputStream(); Log.e("Lenght_Of_File", lenghtOfFile + ""); Log.e("Total_Packet", totalPacket + ""); Log.e("Total_Chunk", totalChunk + ""); Log.e("Header_Valure", headerValue); int read = 1; while (read > 0 && cancelHandle.intValue() == 0 && total < totalPacket * (i + 1)) { read = fileStream.read(buf); if (read > 0) { out.write(buf, 0, read); total += read; progressCallBack .onProgressUpdate((int) ((total * 100) / lenghtOfFile)); } } Log.e("TOTAL_", total + "------" + totalPacket * (i + 1)); Log.e("I_", i + ""); Log.e("LENGHT_Of_File", lenghtOfFile + ""); if (i < totalChunk - 1) { connection.disconnect(); } out.close(); } // Read the response. response.setHttpCode(connection.getResponseCode()); in = connection.getInputStream(); // I GET ERROR HERE. 
if (connection.getResponseCode() != HttpURLConnection.HTTP_OK) { throw new IOException("Unexpected HTTP response: " + connection.getResponseCode() + " " + connection.getResponseMessage()); } byte[] body = readFully(in); response.setBody(body); response.setHeaderFields(connection.getHeaderFields()); if (cancelHandle.intValue() != 0) { return 1; } JSONObject jo = new JSONObject(response.getBodyAsString()); Log.e("Upload_Body_res_", response.getBodyAsString()); if (jo.has("error")) { if (jo.has("code")) { int errCode = jo.getInt("code"); Log.e("Upload_Had_errcode", errCode + ""); return errCode; } else { return 504; } } Log.e("RESPONE_BODY_UPLOAD", response.getBodyAsString() + ""); return 0; } catch (Exception e) { e.printStackTrace(); Log.e("Http_UpLoad_Response_Exception", e.toString()); response.setHttpCode(connection.getResponseCode()); Log.e("ErrorCode_Upload_Util_Return", response.getHttpCode() + ""); if (connection.getResponseCode() == 200) { return 1; } else if (connection.getResponseCode() == 0) { return 1; } else { return response.getHttpCode(); } // Log.e("ErrorCode_Upload_Util_Return", response.getHttpCode()+""); } finally { if (fileStream != null) fileStream.close(); if (out != null) out.close(); if (in != null) in.close(); } } And Logcat 06-12 09:39:29.558: W/System.err(30740): java.io.FileNotFoundException: http://download-f77c.fshare.vn/upload/NRHAwh+bUCxjUtcD4cn9xqkADpdL32AT9pZm7zaboHLwJHLxOPxUX9CQxOeBRgelkjeNM5XcK11M1V-x 06-12 09:39:29.558: W/System.err(30740): at com.squareup.okhttp.internal.http.HttpURLConnectionImpl.getInputStream(HttpURLConnectionImpl.java:187) 06-12 09:39:29.563: W/System.err(30740): at com.fsharemobile.client.HttpUtil.upload2(HttpUtil.java:383) 06-12 09:39:29.563: W/System.err(30740): at com.fsharemobile.fragments.ExplorerFragment$7$1.run(ExplorerFragment.java:992) 06-12 09:39:29.568: W/System.err(30740): at java.lang.Thread.run(Thread.java:856) 06-12 09:39:29.568: E/Http_UpLoad_Response_Exception(30740): java.io.FileNotFoundException: http://download-f77c.fshare.vn/upload/NRHAwh+bUCxjUtcD4cn9xqkADpdL32AT9pZm7zaboHLwJHLxOPxUX9CQxOeBRgelkjeNM5XcK11M1V-x
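
    For what it's worth, HttpURLConnection.getInputStream() throws FileNotFoundException whenever the server answers with a 4xx/5xx status; the response body is then only reachable through getErrorStream(). A hedged sketch of reading it either way:

        int code = connection.getResponseCode();
        InputStream body = (code >= 400)
                ? connection.getErrorStream()   // the error page the server actually sent
                : connection.getInputStream();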

    Read the article

  • Regular expression replace in PL/pgSQL

    - by dreamlax
    If I have the following input (excluding quotes): "The ancestral territorial imperatives of the trumpeter swan" How can I collapse all multiple spaces to a single space so that the input is transformed to: "The ancestral territorial imperatives of the trumpeter swan" This is going to be used in a trigger function on insert/update (which already trims leading/trailing spaces). Currently, it raises an exception if the input contains multiple adjacent spaces, but I would rather it simply transforms it into something valid before inserting. What is the best approach? I can't seem to find a regular-expression replace function for PL/pgSQL. There is a text_replace function, but this will only collapse at most two spaces down to one (meaning three consecutive spaces will collapse to two). Calling this function over and over is not ideal.
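
    PostgreSQL does ship a regex replace usable directly in PL/pgSQL: regexp_replace (built in since 8.1), where the 'g' flag replaces every run of spaces, not just the first. A hedged sketch for the trigger function (NEW.title is a placeholder column name):

        NEW.title := regexp_replace(NEW.title, ' +', ' ', 'g');
        RETURN NEW;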

    Read the article

  • Automatically CONCATENATE text on data entry

    - by Bill T
    I am a newbie and need help. I have a table called "Employees". It has 2 fields, [number] and [encode]. I want to automatically take whatever number is entered into [number] and store it in [encode], preceded by the appropriate number of 0's to always make 12 digits. Example: user enters '123' into [number], '000000000123' is automatically stored in [encode]; user enters '123456789' into [number], '000123456789' is automatically stored in [encode]. I think I want to write a trigger to accomplish this. I think that would make it happen at the time of data entry. Is that right? The main idea would be something like this: variable1 = LENGTH [number] variable2 = REPEAT (0,12-variable1) variable3 = CONCATENATE (variable2, [number]) [encode] = variable3 I just don't know enough to make this happen. ANY help would be FANTASTIC. I have SQL Server 2005 and both fields are text.
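
    A hedged sketch of that trigger in T-SQL (SQL Server 2005; it assumes [number] uniquely identifies a row, and casts in case the columns use a legacy text type): RIGHT plus REPLICATE does the padding in one expression:

        CREATE TRIGGER trg_Employees_Encode ON Employees
        AFTER INSERT, UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE e
               SET encode = RIGHT(REPLICATE('0', 12) + CAST(i.number AS varchar(12)), 12)
              FROM Employees e
              JOIN inserted i ON i.number = e.number;  -- assumes number is unique
        END

    Since [encode] is fully derivable from [number], a computed column (encode AS RIGHT(REPLICATE('0', 12) + CAST(number AS varchar(12)), 12)) would avoid the trigger entirely.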

    Read the article

  • MVVM Light Toolkit throws an System.IO.FileLoadException

    - by joebeazelman
    I'm running VS 2010 along with Expression Blend 4 beta. I created a MVVM Light project from the supplied templates and I get a System.IO.FileLoadException when I try to view the MainWindow.Xaml in VS 2010 designer window. The template already references System.Windows.Interactivity. Here are the details of the exception: System.IO.FileLoadException Could not load file or assembly 'System.Windows.Interactivity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515) at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.Assembly.Load(AssemblyName assemblyRef) at MS.Internal.Package.VSIsolationProviderService.RemoteReferenceProxy.VsReflectionResolver.GetRuntimeAssembly(Assembly reflectionAssembly) at Microsoft.Windows.Design.Metadata.ReflectionMetadataContext.CachingReflectionResolver.GetRuntimeAssembly(Assembly reflectionAssembly) at Microsoft.Windows.Design.Metadata.ReflectionMetadataContext.Microsoft.Windows.Design.Metadata.IReflectionResolver.GetRuntimeAssembly(Assembly reflectionAssembly) at MS.Internal.Metadata.ClrAssembly.GetRuntimeMetadata(Object reflectionMetadata) at Microsoft.Windows.Design.Metadata.AttributeTableContainer.d_c.MoveNext() at Microsoft.Windows.Design.Metadata.AttributeTableContainer.GetAttributes(Assembly assembly, Type attributeType, Func`2 reflectionMapper) at MS.Internal.Metadata.ClrAssembly.GetAttributes(ITypeMetadata attributeType) at MS.Internal.Design.Metadata.Xaml.XamlAssembly.get_XmlNamespaceCompatibilityMappings() at Microsoft.Windows.Design.Metadata.Xaml.XamlExtensionImplementations.GetXmlNamespaceCompatibilityMappings(IAssemblyMetadata sourceAssembly) at Microsoft.Windows.Design.Metadata.Xaml.XamlExtensions.GetXmlNamespaceCompatibilityMappings(IAssemblyMetadata source) at MS.Internal.Design.Metadata.ReflectionProjectNode.BuildSubsumption() at MS.Internal.Design.Metadata.ReflectionProjectNode.SubsumingNamespace(Identifier identifier) at MS.Internal.Design.Markup.XmlElement.BuildScope(PrefixScope parentScope, IParseContext context) at MS.Internal.Design.Markup.XmlElement.ConvertToXaml(XamlElement parent, PrefixScope parentScope, IParseContext context, IMarkupSourceProvider provider) at MS.Internal.Design.DocumentModel.DocumentTrees.Markup.XamlSourceDocument.FullParse(Boolean convertToXamlWithErrors) at MS.Internal.Design.DocumentModel.DocumentTrees.Markup.XamlSourceDocument.get_RootItem() at Microsoft.Windows.Design.DocumentModel.Trees.ModifiableDocumentTree.get_ModifiableRootItem() at Microsoft.Windows.Design.DocumentModel.MarkupDocumentManagerBase.get_LoadState() at MS.Internal.Host.PersistenceSubsystem.Load() at MS.Internal.Host.Designer.Load() at MS.Internal.Designer.VSDesigner.Load() at MS.Internal.Designer.VSIsolatedDesigner.VSIsolatedView.Load() at 
MS.Internal.Designer.VSIsolatedDesigner.VSIsolatedDesignerFactory.Load(IsolatedView view) at MS.Internal.Host.Isolation.IsolatedDesigner.BootstrapProxy.LoadDesigner(IsolatedDesignerFactory factory, IsolatedView view) at MS.Internal.Host.Isolation.IsolatedDesigner.BootstrapProxy.LoadDesigner(IsolatedDesignerFactory factory, IsolatedView view) at MS.Internal.Host.Isolation.IsolatedDesigner.Load() at MS.Internal.Designer.DesignerPane.LoadDesignerView() System.NotSupportedException An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.
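
    The NotSupportedException at the bottom names its own fix: either unblock the downloaded System.Windows.Interactivity.dll (file Properties, Unblock button, since NTFS flags files that came from the internet) or enable the switch the message mentions in devenv.exe.config. A hedged sketch of the config fragment (loadFromRemoteSources is the real .NET 4 element from the exception text):

        <configuration>
          <runtime>
            <loadFromRemoteSources enabled="true" />
          </runtime>
        </configuration>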

    Read the article

  • java.io.IOException: Bad DNS address - in opening a HttpConnection

    - by Shreyas
    Hi, I'm opening an HttpConnection to a URL. It's working in the simulator, but when I try it on the device, it gives "java.io.IOException: Bad DNS address" while opening the HttpConnection. I searched the forums but haven't found a solution yet. The URL opens in the BlackBerry device's Internet Browser, but I can't get the HttpConnection (it comes back NULL) when I try through my code.
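
    One hedged explanation: on real BlackBerry hardware the transport usually has to be named as a connection suffix, which the simulator hides. The suffixes below are real transport descriptors; the URL is a placeholder:

        String url = "http://example.com/data";
        HttpConnection c = (HttpConnection) Connector.open(url + ";deviceside=true"); // Direct TCP over the carrier APN
        // or ";deviceside=false" to route via BES/MDS, or ";interface=wifi" for Wi-Fi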

    Read the article

  • Does Python's shelve module use memory-mapped IO?

    - by Matt Luongo
    Does anyone know if Python's shelve module uses memory-mapped IO? Maybe that question is a bit misleading. I realize that shelve uses an underlying dbm-style module to do its dirty work. What are the chances that the underlying module uses mmap? I'm prototyping a datastore, and while I realize premature optimization is generally frowned upon, this could really help me understand the trade-offs involved in my design.
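
    shelve itself never touches mmap; it just pickles values into whatever dbm backend was picked at open time, so any memory mapping happens (or not) inside that backend's C library. A hedged way to at least see which backend a shelf landed on (Python 3 names; Python 2 has whichdb.whichdb):

        import dbm, shelve

        with shelve.open('/tmp/example_store') as db:
            db['key'] = {'some': 'data'}
        print(dbm.whichdb('/tmp/example_store'))   # e.g. 'dbm.gnu', 'dbm.ndbm', 'dbm.dumb'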

    Read the article

  • Read/write operations on the FAT table without using System.IO in C#

    - by AZHAR
    Hi, I am new to system programming. I want to read information from the FAT table, like total sectors, total physical drives, the logical drives they contain, and the files on those drives (with all file information, like size and last-accessed time), and then display these files according to their hierarchy in a GUI. This is easy using the System.IO namespace, but that is restricted for me, so please help.
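
    A hedged sketch of the usual route when System.IO is off-limits: P/Invoke the Win32 API directly. CreateFile and ReadFile are real kernel32 entry points; \\.\C: opens the raw volume (administrator rights required), and the first sector holds the FAT BIOS Parameter Block:

        using System;
        using System.Runtime.InteropServices;

        class RawFat
        {
            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
            static extern IntPtr CreateFile(string name, uint access, uint share,
                IntPtr security, uint disposition, uint flags, IntPtr template);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool ReadFile(IntPtr handle, byte[] buffer, uint count,
                out uint read, IntPtr overlapped);

            const uint GENERIC_READ = 0x80000000;
            const uint FILE_SHARE_READ = 0x1, FILE_SHARE_WRITE = 0x2;
            const uint OPEN_EXISTING = 3;

            static void Main()
            {
                // \\.\C: is the raw volume; error handling omitted for brevity
                IntPtr h = CreateFile(@"\\.\C:", GENERIC_READ,
                    FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero,
                    OPEN_EXISTING, 0, IntPtr.Zero);
                byte[] boot = new byte[512];
                uint read;
                ReadFile(h, boot, 512, out read, IntPtr.Zero);
                ushort bytesPerSector = BitConverter.ToUInt16(boot, 11); // BPB offset 11
                byte sectorsPerCluster = boot[13];                       // BPB offset 13
                Console.WriteLine("{0} bytes/sector, {1} sectors/cluster",
                    bytesPerSector, sectorsPerCluster);
            }
        }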

    Read the article
