Search Results

Search found 72103 results on 2885 pages for 'file storage'.

  • Retrieving Encrypted Rich Text file and showing it in a RichTextBox C#

    - by Ranhiru
    OK, my need here is to save whatever is typed in the rich text box to a file, encrypted, and also to retrieve the text from the file again and show it back in the rich text box. Here is my save code:

        private void cmdSave_Click(object sender, EventArgs e)
        {
            FileStream fs = new FileStream(filePath, FileMode.Create, FileAccess.Write);
            AesCryptoServiceProvider aes = new AesCryptoServiceProvider();
            aes.GenerateIV();
            aes.GenerateKey();
            aes.Mode = CipherMode.CBC;

            TextWriter twKey = new StreamWriter("key");
            twKey.Write(ASCIIEncoding.ASCII.GetString(aes.Key));
            twKey.Close();

            TextWriter twIV = new StreamWriter("IV");
            twIV.Write(ASCIIEncoding.ASCII.GetString(aes.IV));
            twIV.Close();

            ICryptoTransform aesEncrypt = aes.CreateEncryptor();
            CryptoStream cryptoStream = new CryptoStream(fs, aesEncrypt, CryptoStreamMode.Write);
            richTextBox1.SaveFile(cryptoStream, RichTextBoxStreamType.RichText);
        }

    I know the security consequences of saving the key and IV in a file, but this is just for testing :) The saving part works fine, which means no exceptions... The file is created in filePath, and the key and IV files are created fine too. OK, now for the retrieving part, where I am stuck :S

        private void cmdOpen_Click(object sender, EventArgs e)
        {
            OpenFileDialog openFile = new OpenFileDialog();
            openFile.ShowDialog();
            FileStream openRTF = new FileStream(openFile.FileName, FileMode.Open, FileAccess.Read);

            AesCryptoServiceProvider aes = new AesCryptoServiceProvider();

            TextReader trKey = new StreamReader("key");
            byte[] AesKey = ASCIIEncoding.ASCII.GetBytes(trKey.ReadLine());

            TextReader trIV = new StreamReader("IV");
            byte[] AesIV = ASCIIEncoding.ASCII.GetBytes(trIV.ReadLine());

            aes.Key = AesKey;
            aes.IV = AesIV;

            ICryptoTransform aesDecrypt = aes.CreateDecryptor();
            CryptoStream cryptoStream = new CryptoStream(openRTF, aesDecrypt, CryptoStreamMode.Read);
            StreamReader fx = new StreamReader(cryptoStream);
            richTextBox1.Rtf = fx.ReadToEnd();
            //richTextBox1.LoadFile(fx.BaseStream, RichTextBoxStreamType.RichText);
        }

    But richTextBox1.Rtf = fx.ReadToEnd(); throws a CryptographicException, "Padding is invalid and cannot be removed.", while richTextBox1.LoadFile(fx.BaseStream, RichTextBoxStreamType.RichText); throws a NotSupportedException, "Stream does not support seeking." Any suggestions on what I can do to load the data from the encrypted file and show it in the rich text box?
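
    A minimal sketch of one likely fix: AES keys and IVs are random bytes, most of which are not valid ASCII, so round-tripping them through ASCIIEncoding (and through ReadLine(), which stops at any newline byte that happens to appear in the key) corrupts them, and a mismatched key or IV is exactly what produces "Padding is invalid and cannot be removed." Persisting the raw bytes avoids the lossy conversion; the lines below are drop-in replacements for the corresponding lines above:

        // save side: persist key/IV as raw bytes (System.IO)
        File.WriteAllBytes("key", aes.Key);
        File.WriteAllBytes("IV", aes.IV);

        // open side:
        aes.Key = File.ReadAllBytes("key");
        aes.IV = File.ReadAllBytes("IV");

        // RichTextBox.LoadFile needs a seekable stream, so decrypt fully into
        // a MemoryStream first (CopyTo is .NET 4; on 3.5, copy in a Read loop)
        using (MemoryStream ms = new MemoryStream())
        {
            cryptoStream.CopyTo(ms);
            ms.Position = 0;
            richTextBox1.LoadFile(ms, RichTextBoxStreamType.RichText);
        }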

  • Android app crashes when I change the default xml layout file to another

    - by mib1413456
    I am currently just starting to learn Android development and have created a basic "Hello world" app that uses activity_main.xml for the default layout. I tried to create a new layout XML file called new_layout.xml with a text view, a text field and a button, and made the following change in the MainActivity.java file:

        setContentView(R.layout.new_layout);

    I did nothing else except add new_layout.xml to the res/layout folder. I have tried restarting and cleaning the project, but nothing. Below are my activity_main.xml, new_layout.xml and MainActivity.java.

    activity_main.xml:

        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:id="@+id/container"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            tools:context="org.example.androidsdk.demo.MainActivity"
            tools:ignore="MergeRootFrame" />

    new_layout.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="horizontal" >

            <TextView
                android:id="@+id/textView1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="TextView" />

            <EditText
                android:id="@+id/editText1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_weight="1"
                android:ems="10" >
                <requestFocus />
            </EditText>

            <Button
                android:id="@+id/button1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="Button" />

        </LinearLayout>

    MainActivity.java:

        package org.example.androidsdk.demo;

        import android.app.Activity;
        import android.app.ActionBar;
        import android.app.Fragment;
        import android.os.Bundle;
        import android.view.LayoutInflater;
        import android.view.Menu;
        import android.view.MenuItem;
        import android.view.View;
        import android.view.ViewGroup;
        import android.os.Build;

        public class MainActivity extends Activity {

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.new_layout);

                if (savedInstanceState == null) {
                    getFragmentManager().beginTransaction()
                            .add(R.id.container, new PlaceholderFragment())
                            .commit();
                }
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }

            @Override
            public boolean onOptionsItemSelected(MenuItem item) {
                // Handle action bar item clicks here. The action bar will
                // automatically handle clicks on the Home/Up button, so long
                // as you specify a parent activity in AndroidManifest.xml.
                int id = item.getItemId();
                if (id == R.id.action_settings) {
                    return true;
                }
                return super.onOptionsItemSelected(item);
            }

            /**
             * A placeholder fragment containing a simple view.
             */
            public static class PlaceholderFragment extends Fragment {

                public PlaceholderFragment() {
                }

                @Override
                public View onCreateView(LayoutInflater inflater, ViewGroup container,
                        Bundle savedInstanceState) {
                    View rootView = inflater.inflate(R.layout.fragment_main, container, false);
                    return rootView;
                }
            }
        }
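
    A hedged guess at the cause, based on the code above: onCreate() still runs a fragment transaction against R.id.container, but new_layout.xml has no view with that id (only activity_main.xml did), so the add() call should throw an IllegalArgumentException ("No view found for id...") and crash the app. A minimal sketch of a fix is to guard the transaction:

        // sketch: only attach the placeholder fragment when the current
        // layout actually contains the container view
        setContentView(R.layout.new_layout);
        if (savedInstanceState == null && findViewById(R.id.container) != null) {
            getFragmentManager().beginTransaction()
                    .add(R.id.container, new PlaceholderFragment())
                    .commit();
        }

    Alternatively, giving new_layout.xml's root LinearLayout android:id="@+id/container" keeps the transaction valid.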

  • Save File to Sharepoint Server using JAX-WS

    - by Evan Porter
    I'm trying to save a file to a SharePoint server using JAX-WS. The web service call reports a success, but the file doesn't show up. I used this command (from WinXP) to generate the Java code that makes the JAX-WS call:

        wsimport -keep -extension -Xnocompile http://hostname/sites/teamname/_vti_bin/Copy.asmx?WSDL

    I get a handle on the web service, which I called port, using the following:

        CopySoap port = null;
        if (userName != null && password != null) {
            Copy service = new Copy();
            port = service.getCopySoap();
            ((BindingProvider) port).getRequestContext().put(BindingProvider.USERNAME_PROPERTY, userName);
            ((BindingProvider) port).getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, password);
        } else {
            throw new Exception("Holy Frijolé! Null userName and/or password!");
        }

    I called the web service using the following:

        port.copyIntoItems(sourceUrl, destUrlCollection, fields,
                "Contents of the file".getBytes(), copyIntoItemsResult, copyResultCollection);

    The sourceUrl and the only URL in destUrlCollection equal "hostname/sites/teamname/Tech Docs/Sub Folder". The FieldInformationCollection object named fields contains only one FieldInformation. The FieldInformation object has "HelloWorld.txt" as the value for displayName, internalName and value. The type property is set to FieldType.FILE. The id property is set to (java.util.UUID.randomUUID()).toString(). The call to copyIntoItems returns successfully; copyIntoItemsResult contains a value of 0, and the only CopyResult object set in copyResultCollection has an error code of "SUCCESS" with a null error message. But when I look in the "Tech Docs" library on SharePoint, there's no file in "Sub Folder". Why wouldn't it tell me what I did wrong? Did I just miss a step?

    Update (Feb 26th, 2011): I've changed my FieldInformation object's displayName and internalName properties to "Title" as suggested. Still no joy, but a step in the right direction. After playing around with the URLs for a bit, I got these results:

        - With the sourceUrl and the only destination URL equivalent, with no protocol, I get the SUCCESS response but no actual document appears in the document library.
        - With both URLs equivalent but with an "http://" protocol specified, I get an UNKNOWN error with "Object reference not set to an instance of an object." as the message.
        - With the source URL an empty string or null, I get an UNKNOWN error with "Value does not fall within the expected range." as the error message.
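
    A hedged observation that may explain the symptoms: CopyIntoItems expects each destination URL to be the full URL of the item to create (folder path plus file name), not the bare folder, and a folder-only destination can come back "SUCCESS" without any document appearing. A sketch, with class and accessor names as wsimport typically generates them (they may differ in your stubs):

        // hypothetical full destination URL, including the file name
        DestinationUrlCollection destUrls = new DestinationUrlCollection();
        destUrls.getString().add(
                "http://hostname/sites/teamname/Tech Docs/Sub Folder/HelloWorld.txt");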

  • A couple questions using fwrite/fread with data structures

    - by Nazgulled
    Hi, I'm using fwrite() and fread() for the first time to write some data structures to disk, and I have a couple of questions about best practices and proper ways of doing things. What I'm writing to disk (so I can later read it back) is all the user profiles inserted in a Graph structure. Each graph vertex is of the following type:

        typedef struct sUserProfile {
            char name[NAME_SZ];
            char address[ADDRESS_SZ];
            int socialNumber;
            char password[PASSWORD_SZ];
            HashTable *mailbox;
            short msgCount;
        } UserProfile;

    And this is how I'm currently writing all the profiles to disk:

        void ioWriteNetworkState(SocialNetwork *social) {
            Vertex *currPtr = social->usersNetwork->vertices;
            UserProfile *user;
            FILE *fp = fopen("save/profiles.dat", "w");

            if(!fp) {
                perror("fopen");
                exit(EXIT_FAILURE);
            }

            fwrite(&(social->usersCount), sizeof(int), 1, fp);

            while(currPtr) {
                user = (UserProfile*)currPtr->value;

                fwrite(&(user->socialNumber), sizeof(int), 1, fp);
                fwrite(user->name, sizeof(char)*strlen(user->name), 1, fp);
                fwrite(user->address, sizeof(char)*strlen(user->address), 1, fp);
                fwrite(user->password, sizeof(char)*strlen(user->password), 1, fp);
                fwrite(&(user->msgCount), sizeof(short), 1, fp);
                break;

                currPtr = currPtr->next;
            }

            fclose(fp);
        }

    Notes: The first fwrite() you see writes the total user count in the graph, so I know how much data I need to read back. The break is there for testing purposes; there are thousands of users and I'm still experimenting with the code.

    My questions: After reading this I decided to use fwrite() on each element instead of writing the whole structure. I also avoid writing the pointer to the mailbox, as I don't need to save that pointer. So, is this the way to go? Multiple fwrite()'s instead of a global one for the whole structure? Isn't that slower? And how do I read back this content? I know I have to use fread(), but I don't know the size of the strings, because I used strlen() to write them. I could write the output of strlen() before writing the string, but is there any better way without extra writes?
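
    Length-prefixed strings are the usual answer to the read-back question: write each string's length, then exactly that many bytes, so reading needs no guessing. A minimal sketch (the helper names are mine, not from the original code, and it assumes the length always fits the destination buffer):

        static void write_string(FILE *fp, const char *s) {
            size_t len = strlen(s);
            fwrite(&len, sizeof len, 1, fp);
            fwrite(s, 1, len, fp);
        }

        static void read_string(FILE *fp, char *dst) {
            size_t len = 0;
            fread(&len, sizeof len, 1, fp);
            fread(dst, 1, len, fp);
            dst[len] = '\0';
        }

    The extra length write per string is negligible next to the disk I/O itself, and stdio buffers the small fwrite() calls, so per-member writes are not meaningfully slower than one big one. An alternative is to write each fixed-size array whole, e.g. fwrite(user->name, sizeof user->name, 1, fp), trading file size for simpler reads.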

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss has a substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn't so much that alternative solutions are bad, it's just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let's explore further.

    Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems - that can create a level of uncertainty during an outage that critical applications can't afford. This is why backups play a secondary role for your most critical databases, complementing real-time solutions that can provide both data protection and availability.

    Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise: whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt: the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy.

    Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective. Bottom line, storage remote-mirroring lacks both the smarts and the isolation level necessary to provide true data protection.

    Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard's inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to apply the full complement of Oracle data validation methods before changes reach the synchronized copy. These include: physical checksums, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern.

    An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database, not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby, boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible using storage remote-mirroring.

    So don't be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface, but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see:

        - Active Data Guard Technical White Paper
        - Active Data Guard vs Storage Remote-Mirroring
        - Active Data Guard Home Page on the Oracle Technology Network
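
    Since the post leans on the "open read-only while synchronizing" capability, here is a hedged sketch of what enabling it looks like on the standby (standard SQL*Plus commands; exact steps vary by version and configuration):

        ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
        ALTER DATABASE OPEN READ ONLY;
        ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
          USING CURRENT LOGFILE DISCONNECT;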

  • Reporting Services as PDF through WebRequest in C# 3.5 "Not Supported File Type"

    - by Heath Allison
    I've inherited a legacy application that is supposed to grab an on-the-fly PDF from a Reporting Services server. Everything works fine up until the point where you try to open the PDF being returned, and Adobe Acrobat tells you:

        Adobe Reader could not open 'thisStoopidReport.pdf' because it is either not a supported file type or because the file has been damaged (for example, it was sent as an email attachment and wasn't correctly decoded).

    I've done some initial troubleshooting on this. If I replace the URL in the WebRequest.Create() call with a valid PDF file on my local machine (i.e. @"C:\temp\validpdf.pdf"), then I get a valid PDF. The report itself seems to work fine: if I manually type the URL to the Reporting Services report that should generate the PDF file, I am prompted for user authentication, but after supplying it I get a valid PDF file. I've replaced the actual URL, username, password and domain strings in the code below with bogus values for obvious reasons.

        WebRequest request = WebRequest.Create(@"http://x.x.x.x/reportServer?/reports/reportNam&rs:format=pdf&rs:command=render&rc:parameters=blahblahblah");
        int totalSize = 0;
        request.Credentials = new NetworkCredential("validUser", "validPass", "validDomain");
        request.Timeout = 360000; // 6 minutes in milliseconds.
        request.Method = WebRequestMethods.Http.Post;
        request.ContentLength = 0;

        WebResponse response = request.GetResponse();
        Response.Clear();

        BinaryReader reader = new BinaryReader(response.GetResponseStream());
        Byte[] buffer = new byte[2048];
        int count = reader.Read(buffer, 0, 2048);
        while (count > 0)
        {
            totalSize += count;
            Response.OutputStream.Write(buffer, 0, count);
            count = reader.Read(buffer, 0, 2048);
        }

        Response.ContentType = "application/pdf";
        Response.Cache.SetCacheability(HttpCacheability.Private);
        Response.CacheControl = "private";
        Response.Expires = 30;
        Response.AddHeader("Content-Disposition", "attachment; filename=thisStoopidReport.pdf");
        Response.AddHeader("Content-Length", totalSize.ToString());
        reader.Close();
        Response.Flush();
        Response.End();
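
    Two hedged things to try, since "not a supported file type" almost always means the saved bytes are not a PDF at all: set the headers before streaming any bytes, and check the payload's magic number. A real PDF starts with the bytes "%PDF"; if the first byte is '<', the report server returned an HTML login or error page and the fix is on the authentication side (for example, a CredentialCache with an explicit "NTLM" entry instead of a bare NetworkCredential).

        // sketch: headers first, then inspect the first chunk before copying
        Response.Clear();
        Response.ContentType = "application/pdf";
        Response.AddHeader("Content-Disposition", "attachment; filename=thisStoopidReport.pdf");

        int count = reader.Read(buffer, 0, 2048);
        bool looksLikePdf = count >= 4 && buffer[0] == (byte)'%' && buffer[1] == (byte)'P'
                            && buffer[2] == (byte)'D' && buffer[3] == (byte)'F';
        // if !looksLikePdf, log Encoding.ASCII.GetString(buffer, 0, count)
        // to see the server's actual (probably HTML) response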

  • AccessControlException: access denied - caller function failed to load properties file

    - by Michael Mao
    Hi all: I have a JAR archive environment that calls my class from a folder, like this:

        java -jar "emarket.jar" ../tournament 100

    My compiled class is deployed into the ../tournament folder, and this command runs well. After I changed my code to load a properties file, it gets the following exception:

        Exception in thread "main" java.security.AccessControlException: access denied (java.io.FilePermission agent.properties read)
            at java.security.AccessControlContext.checkPermission(Unknown Source)
            at java.security.AccessController.checkPermission(Unknown Source)
            at java.lang.SecurityManager.checkPermission(Unknown Source)
            at java.lang.SecurityManager.checkRead(Unknown Source)
            at java.io.FileInputStream.<init>(Unknown Source)
            at java.io.FileInputStream.<init>(Unknown Source)
            at Agent10479475.getPropertiesFromConfigFile(Agent10479475.java:110)
            at Agent10479475.<init>(Agent10479475.java:100)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
            at java.lang.reflect.Constructor.newInstance(Unknown Source)
            at java.lang.Class.newInstance0(Unknown Source)
            at java.lang.Class.newInstance(Unknown Source)
            at emarket.client.EmarketSandbox.instantiateClientObjects(EmarketSandbox.java:92)
            at emarket.client.EmarketSandbox.<init>(EmarketSandbox.java:27)
            at emarket.client.EmarketSandbox.main(EmarketSandbox.java:166)

    I am wondering why this security check fails. I call the getPropertiesFromConfigFile() function inside my class's default constructor, like this:

        public class Agent10479475 extends AbstractClientAgent
        {
            //default constructor
            public Agent10479475()
            {
                //set all properties to their default values in constructor
                FT_THRESHOLD = 400;
                FT_THRESHOLD_MARGIN = 50;

                printOut("Now loading properties from a config file...", "");
                getPropertiesFromConfigFile();
                printOut("Finished loading", "");
            }

            private void getPropertiesFromConfigFile()
            {
                Properties props = new Properties();
                try
                {
                    props.load(new FileInputStream("agent.properties"));
                    FT_THRESHOLD = Long.parseLong(props.getProperty("FT_THRESHOLD"));
                    FT_THRESHOLD_MARGIN = Long.parseLong(props.getProperty("FT_THRESHOLD_MARGIN "));
                }
                catch(java.io.FileNotFoundException fnfex)
                {
                    printOut("CANNOT FIND PROPERTIES FILE :", fnfex);
                }
                catch(java.io.IOException ioex)
                {
                    printOut("IOEXCEPTION OCCURED :", ioex);
                }
            }
        }

    My class is loading its own .properties file from the same folder. Why would the Java environment complain about such a denial of access? Must I configure the emarket.client.EmarketSandbox class, which is not written by me and is stored inside emarket.jar, to access my agent.properties file? Any hints or suggestions are much appreciated. Many thanks in advance.
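
    A hedged sketch of the standard fix: the sandbox (emarket.jar) evidently runs agents under a SecurityManager, and under a SecurityManager every file read needs an explicit java.io.FilePermission grant in the active policy. If the tournament setup lets you supply or edit a policy file (for example via -Djava.security.policy=my.policy), the grant entry looks like this (policy-file syntax, not Java):

        grant {
            permission java.io.FilePermission "agent.properties", "read";
        };

    If you cannot touch the policy, a common workaround is to load the file through the classloader instead of the filesystem, e.g. Agent10479475.class.getResourceAsStream("agent.properties"), since reading your own classpath resources is typically permitted.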

  • Joomla - Force File Download / CSV Export

    - by lautaro.dragan
    I'm in need of help... this is my first time asking a question on SO, so please be kind :) I'm trying to force-download a file from PHP, so when the user hits a certain button, he gets a file download. The file is a CSV (email, username) of all registered users. I decided to add this button to the admin users screen, as you can see in this screenshot. So I added the following code to the addToolbar function in administrator/components/com_users/views/users/view.html.php:

        JToolBarHelper::custom('users.export', 'export.png', 'export_f2.png', 'Exportar', false);

    This button is mapped to the following function in the com_users\controller\users.php controller:

        public function exportAllUsers()
        {
            ob_end_clean();
            $app = JFactory::getApplication();

            header("Content-type: text/csv");
            header("Content-Disposition: attachment; filename=ideary_users.csv");
            header("Pragma: no-cache");
            header("Expires: 0");

            echo "email,name\n";

            $model = $this->getModel("Users");
            $users = $model->getAllUsers();

            foreach ($users as $user) {
                echo $user->email . ", " . ucwords(trim($user->name)) . "\r\n";
            }

            $app->close();
        }

    Now, this is actually working perfectly fine. The issue is that after I download a file, if I hit any button in the admin that causes a POST, instead of performing the action it should, it just downloads the file over again! For example: I hit the "Export" button, and "users.csv" downloads. Then I hit the "search" button, and "users.csv" downloads... what the hell? I'm guessing that when I hit the export button, some JS gets called that sets the form's action attribute to a URL, and then other buttons are prevented from re-setting the form's action attribute. I can't think of any real solution for this, but I'd rather avoid hacks if possible. So, what would be the standard, elegant solution that Joomla offers in this case?
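
    A hedged guess at the mechanics, plus one workaround: Joomla admin toolbar buttons call Joomla.submitbutton(), which writes the task ("users.export") into the shared adminForm's hidden task field and posts it. Because the CSV response is a download, the page never reloads, so the stale task value rides along on the next plain form submit (the search button posts the form directly without setting a new task). One way out that avoids hacks is to render the export as a link-style button, so the adminForm is never involved:

        // sketch; JToolBar's "Link" button type just navigates, it posts nothing
        $bar = JToolBar::getInstance('toolbar');
        $bar->appendButton('Link', 'export', 'Exportar',
            'index.php?option=com_users&task=users.export');

    Alternatively, resetting document.adminForm.task.value = '' from a click handler after triggering the export should stop the repeat downloads.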

  • Oracle Virtual Networking Partner Sales Playbook Now Available

    - by Cinzia Mascanzoni
    The Oracle Virtual Networking Partner Sales Playbook is now available to partners registered in the OPN Server and Storage Systems Knowledge Zones. It equips you to sell, identify and qualify opportunities, pursue specific sales plays, and deliver competitive differentiation. Find out where you should plan to focus your resources, and how to broaden your offerings by leveraging the OPN Specialized enablement available to your organization. The playbook is accessible to member partners through the following Knowledge Zones: Sun x86 Servers, Sun Blade Servers, SPARC T-Series Servers, SPARC Enterprise High-End M-Series Servers, SPARC Enterprise Entry-Level and Midrange M-Series Servers, Oracle Desktop Virtualization, NAS Storage, SAN Storage, Sun Flash Storage, StorageTek Tape Storage.

  • New DataCenter Options for Windows Azure

    - by ScottKlein
    Effective immediately, new compute and storage resource options are available when selecting data center options in the Windows Azure Portal. "West US" and "East US" options are now available for Compute and Storage. SQL Azure options for these two data centers will be available in the next few months. The official announcement can be found here.

    In terms of geo-replication:

        - US East and West are paired together for Windows Azure Storage geo-replication
        - US North and South are paired together for Windows Azure Storage geo-replication

    These two new data centers are visible in the Windows Azure Management Portal effective immediately. Compute and Storage pricing remains the same across all data centers. Get started with Windows Azure through the free 90 day trial.

  • PHP uploads file - enctype="multipart/form-data" issue

    - by user147685
    Hi all, I have this upload code. There is no problem running it individually, but when I try to add it into my other code, it does not get the $_FILES parameter. I'm guessing it is because of enctype="multipart/form-data" in the form tag; based on this post, http://stackoverflow.com/questions/1695246/why-file-upload-didnt-work-without-enctype, the enctype is needed. So my problem is: how can I do file uploads without worrying about this? Can we just change the code structure so that it will be compatible with the other code?

        <?php
        if($_POST['check']) {
            $faillampiran = $_POST['faillampiran'];
            $file = $_FILES['faillampiran']["name"];
            $fileSize = $_FILES['faillampiran']['size'];
            $fileType = $_FILES['faillampiran']['type'];

            if ($_FILES["faillampiran"]["error"] > 0) {
                echo "Return Code: " . $_FILES["faillampiran"]["error"] . "<br />";
            } else {
                move_uploaded_file($_FILES["faillampiran"]["tmp_name"], "upload/" . $_FILES["faillampiran"]["name"]);
                echo '<table align="center">';
                echo "<tr><td>";
                echo "Your file has been successfully stored.";
                echo "</td></tr>";
                echo '</table>';
            }
        }
        ?>

        <form method="post" name="form1" id="form1" enctype="multipart/form-data">
            <tr><td></td><td><input type="hidden" name="MAX_FILE_SIZE" value=""></td></tr>
            <tr><td>Please choose a file</td><td>:</td></tr>
            <tr>
                <input type="file" size="50" name="faillampiran" alt="faillampiran" id="faillampiran" value="<?=$faillampiran;?>" />
                <tr align="center"><td colspan="3"><input type="submit" value="Hantar" name="check"/></td></tr>
            </tr>
        </form>

    Thank you.
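
    To be clear about the constraint first: multipart/form-data is what makes the browser transmit file contents at all, so $_FILES is always empty without it; there is no PHP-side workaround. The usual fix when merging into an existing page is to put the enctype on the one shared form and key each handler off its own submit button. A minimal sketch:

        <form method="post" enctype="multipart/form-data">
            <input type="file" name="faillampiran" />
            <input type="submit" name="check" value="Hantar" />
            <input type="submit" name="other_action" value="Other" />
        </form>

        <?php
        // each handler tests its own submit button, so unrelated handlers
        // can coexist on one multipart form
        if (isset($_POST['check']) && isset($_FILES['faillampiran'])) {
            move_uploaded_file($_FILES['faillampiran']['tmp_name'],
                               'upload/' . basename($_FILES['faillampiran']['name']));
        }
        ?>

    The enctype is harmless for non-file fields, so other code sharing the form keeps working.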

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2 TB. When I do this, the system runs slow.

    My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. KDE 4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the copying disks, which will be read only once, then written and never used again. So then any time I use these apps (or KDE 4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. Due to the cache being gone and the fact that these bloated applications need lots of cache, this makes the system horribly slow.

    Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop everything copying, it still runs choppy for a while until it recaches everything. And if I restart the copying, it takes a minute before it is choppy again. But also, I can limit it to around 40 MB/s, and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (which is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching of specific things.

    Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/file system allowed to be used as cache/buffers?
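
    A sketch of the bluntest fix I know of: open both devices with O_DIRECT so the copy bypasses the page cache entirely and has nothing to evict (device names below are placeholders):

        # whole-disk copy that never touches the page cache
        dd if=/dev/sdb of=/dev/sdc bs=1M iflag=direct oflag=direct

    For file-level copies there is the nocache wrapper, which periodically calls posix_fadvise(POSIX_FADV_DONTNEED) on the copied data. Mainline Linux has no per-process cache quota as such, but a memory cgroup's limit does count page cache, so confining the copy caps its cache footprint:

        # hedged: cgroup-v1 paths, run as root
        mkdir /sys/fs/cgroup/memory/copyjob
        echo 512M > /sys/fs/cgroup/memory/copyjob/memory.limit_in_bytes
        echo $$ > /sys/fs/cgroup/memory/copyjob/tasks   # confine this shell
        dd if=/dev/sdb of=/dev/sdc bs=1M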

  • How to play small sound file continuously in Silverlight?

    - by ash
    Hello, I have two questions regarding sound playback in Silverlight. My scenario: I have two storyboards. The first storyboard has an image and a sound file; when the Silverlight application loads, the sound starts to play automatically, but if someone clicks the image, that sound stops and the second storyboard starts with a new sound file.

    1) My first question is how to stop the first storyboard's sound file when the second storyboard starts with the second sound file, when the clickable element is an image instead of a button.

    2) My second question is how to play a sound file continuously. For example, in Silverlight we can make a storyboard repeat with RepeatBehavior="Forever", but I cannot find a way to play my 10-second sound file forever or continuously.

    Note: I have attached a small XAML file to show what I am talking about. If there were a button instead of an image, I could stop the first music file when the button is clicked and start my second storyboard with a new sound file, but I would like to use an image instead of a button. Is it possible? If so, how? Therefore, please answer the two questions above, or give a big hint or tutorial links for them.

        <Grid x:Name="LayoutRoot" Background="Red">
            <Button HorizontalAlignment="Left" Margin="212,0,0,111" VerticalAlignment="Bottom" Width="75" Content="Button" Click="onClick"/>
            <MediaElement x:Name="sound2_mp3" Height="0" HorizontalAlignment="Left" Margin="105,230,0,0" VerticalAlignment="Top" Width="0" Source="/sound2.mp3" Stretch="Fill"/>
            <MediaElement x:Name="sound1_mp1" Height="0" HorizontalAlignment="Left" Margin="190,164,0,0" VerticalAlignment="Top" Width="0" Source="/sound1.mp3" Stretch="Fill" AutoPlay="False"/>
        </Grid>

    And the code-behind:

        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Ink;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Animation;
        using System.Windows.Shapes;

        namespace testPrj
        {
            public partial class MainPage : UserControl
            {
                public MainPage()
                {
                    // Required to initialize variables
                    InitializeComponent();
                }

                private void onClick(object sender, System.Windows.RoutedEventArgs e)
                {
                    Storyboard1.Stop();
                    sound2_mp3.Stop();
                    sound1_mp1.Play();
                }
            }
        }
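
    A hedged sketch of both pieces in code-behind (element names reuse the XAML above; the Image and second storyboard are hypothetical, since the posted XAML only has a Button):

        // 2) loop a short clip: MediaElement has no RepeatBehavior, so
        //    restart it from its MediaEnded event
        sound2_mp3.MediaEnded += (s, e) =>
        {
            sound2_mp3.Position = TimeSpan.Zero;
            sound2_mp3.Play();
        };

        // 1) an Image can act like a button via its mouse events
        myImage.MouseLeftButtonDown += (s, e) =>
        {
            sound2_mp3.Stop();      // stop the first storyboard's sound
            Storyboard1.Stop();
            Storyboard2.Begin();    // hypothetical second storyboard
            sound1_mp1.Play();
        };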

  • Getting Windows Azure SDK 1.1 To Talk To A Local DB

    - by Richard Jones
    Just found this, if you're using Azure 1.1, which you probably will be if you've moved to Visual Studio 2010. To change the default database to something other than SQLEXPRESS for Development Storage, see http://msdn.microsoft.com/en-us/library/dd203058.aspx. At the bottom it states:

        Using Development Storage with SQL Server Express 2008

        By default the local Windows group BUILTIN\Administrator is not included in the SQL Server sysadmin server role on new SQL Server Express 2008 installations. Add yourself to the sysadmin role in order to use the Development Storage services on SQL Server Express 2008. See SQL Server 2008 Security Changes for more information.

        Changing the SQL Server instance used by Development Storage

        By default, Development Storage will use the SQL Express instance. This can be changed by calling "DSInit.exe /sqlinstance:<SQL Server instance>" from the Windows Azure SDK command prompt.
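
    A hedged concrete example (the "." form is commonly cited as meaning the default local instance):

        DSInit.exe /sqlinstance:.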

  • Binary file reading problem

    - by ScReYm0
    OK, I have a problem with my code for reading a binary file... First I will show you my writing code:

        void book_saving(char *file_name, struct BOOK *current)
        {
            FILE *out;
            BOOK buf;

            out = fopen(file_name, "wb");
            if(out != NULL)
            {
                printf_s("Writing to file...");
                do
                {
                    if(current != NULL)
                    {
                        strcpy(buf.catalog_number, current->catalog_number);
                        strcpy(buf.author, current->author);
                        buf.price = current->price;
                        strcpy(buf.publisher, current->publisher);
                        strcpy(buf.title, current->title);
                        buf.year_published = current->year_published;
                        fwrite(&buf, sizeof(BOOK), 1, out);
                    }
                    current = current->next;
                } while(current != NULL);
                printf_s("Done!\n");
                fclose(out);
            }
        }

    And here is my "version" for reading it back:

        int book_open(struct BOOK *current, char *file_name)
        {
            FILE *in;
            BOOK buf;
            BOOK *vnext;
            int count;
            int i;

            in = fopen("west", "rb");
            printf_s("Reading database from %s...", file_name);
            if(!in)
            {
                printf_s("\nERROR!");
                return 1;
            }

            i = fread(&buf, sizeof(BOOK), 1, in);
            while(!feof(in))
            {
                if(current != NULL)
                {
                    current = malloc(sizeof(BOOK));
                    current->next = NULL;
                }
                strcpy(current->catalog_number, buf.catalog_number);
                strcpy(current->title, buf.title);
                strcpy(current->publisher, buf.publisher);
                current->price = buf.price;
                current->year_published = buf.year_published;

                fread(&buf, 1, sizeof(BOOK), in);

                while(current->next != NULL)
                    current = current->next;

                fclose(in);
            }
            printf_s("Done!");
            return 0;
        }

    I just need to save my linked list in a binary file and be able to read it back... please help me. The program either doesn't read it or crashes, in a different way every time...
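
    The read loop has the list-building logic inverted: it mallocs over current after testing it, walks current->next (which it just set to NULL), and calls fclose() inside the loop. A hedged sketch of a working shape; note it takes BOOK ** so the caller's head pointer is actually updated, and it uses file_name rather than the literal "west" from the original:

        int book_open(BOOK **head, const char *file_name)
        {
            FILE *in = fopen(file_name, "rb");
            BOOK buf, *node, *tail = NULL;

            if (!in) {
                printf_s("\nERROR!");
                return 1;
            }

            while (fread(&buf, sizeof(BOOK), 1, in) == 1) {
                node = malloc(sizeof(BOOK));
                if (!node)
                    break;              /* out of memory */
                *node = buf;            /* struct copy of every field */
                node->next = NULL;      /* the pointer from disk is stale */
                if (tail)
                    tail->next = node;
                else
                    *head = node;
                tail = node;
            }

            fclose(in);                 /* close once, after the loop */
            return 0;
        }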

  • How do I make the directories in a zip file relative to the target directory instead of my working directory

    - by Nathan
    I'm calling the zip command from a script where I cannot change directory. I need to make a zip file of the stuff in data/kit123/ from the directory that data resides in, but I want the contents of the zip to be only the contents of kit123, with paths relative to kit123. This is the directory structure:

        myworkingdir
          data
            kit123
              kitpart1
                file.xcf
                anotherfile.xcf
              kitpart2
              ...
            kit124
            ...

    My script runs in myworkingdir and cannot change directories. If I call zip -r kit123.zip data/kit123, then the structure in the zip file will be:

        data
          kit123
            kitpart1
              file.xcf
              anotherfile.xcf
            kitpart2

    but I want it to be:

        kit123
          kitpart1
            file.xcf
            anotherfile.xcf
          kitpart2

    Is there a zip option I can use to accomplish this? It seems odd that it should depend on my working directory. I know it's not -j; that one destroys the structure within kit123.
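
    A sketch of the usual trick: a cd inside a subshell never affects the calling script, which sidesteps the "cannot change directory" constraint while still giving zip the paths you want ($OLDPWD expands to the directory the subshell just left, i.e. myworkingdir):

        (cd data && zip -r "$OLDPWD/kit123.zip" kit123)

    This stores entries as kit123/kitpart1/file.xcf and so on. If even a subshell cd is off the table, GNU tar has -C/--directory for exactly this, but as far as I know zip itself has no equivalent option.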

  • Server Error Message: No File Access

    - by iMayne
    Hello. I'm having an issue but don't know where to solve it. My template works great in XAMPP but not on the host server. I get this message:

        Warning: file_get_contents() [function.file-get-contents]: URL file-access is disabled in the server configuration in homepage/......./twitter.php

    The error is on line 64.

        <?php
        /* For use in the "Parse Twitter Feeds" code below */
        define("SECOND", 1);
        define("MINUTE", 60 * SECOND);
        define("HOUR", 60 * MINUTE);
        define("DAY", 24 * HOUR);
        define("MONTH", 30 * DAY);

        function relativeTime($time) {
            $delta = time() - $time;

            if ($delta < 2 * MINUTE) {
                return "1 min ago";
            }
            if ($delta < 45 * MINUTE) {
                return floor($delta / MINUTE) . " min ago";
            }
            if ($delta < 90 * MINUTE) {
                return "1 hour ago";
            }
            if ($delta < 24 * HOUR) {
                return floor($delta / HOUR) . " hours ago";
            }
            if ($delta < 48 * HOUR) {
                return "yesterday";
            }
            if ($delta < 30 * DAY) {
                return floor($delta / DAY) . " days ago";
            }
            if ($delta < 12 * MONTH) {
                $months = floor($delta / DAY / 30);
                return $months <= 1 ? "1 month ago" : $months . " months ago";
            } else {
                $years = floor($delta / DAY / 365);
                return $years <= 1 ? "1 year ago" : $years . " years ago";
            }
        }

        /* Parse Twitter Feeds */
        function parse_cache_feed($usernames, $limit, $type) {
            $username_for_feed = str_replace(" ", "+OR+from%3A", $usernames);
            $feed = "http://twitter.com/statuses/user_timeline.atom?screen_name=" . $username_for_feed . "&count=" . $limit;

            $usernames_for_file = str_replace(" ", "-", $usernames);
            $cache_file = dirname(__FILE__).'/cache/' . $usernames_for_file . '-twitter-cache-' . $type;

            if (file_exists($cache_file)) {
                $last = filemtime($cache_file);
            }
            $now = time();
            $interval = 600; // ten minutes

            // check the cache file
            if ( !$last || (( $now - $last ) > $interval) ) {
                // cache file doesn't exist, or is old, so refresh it
                $cache_rss = file_get_contents($feed);   // this is line 64

    Any help on how to give this access on my host server?
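
    The host has allow_url_fopen disabled, which blocks file_get_contents() on URLs (and shared hosts often won't let you re-enable it). A hedged drop-in replacement using the cURL extension, which is unaffected by that setting:

        // fetch a URL without relying on allow_url_fopen
        function fetch_url($url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return, don't echo
            curl_setopt($ch, CURLOPT_TIMEOUT, 10);
            $body = curl_exec($ch);
            curl_close($ch);
            return $body;
        }

        $cache_rss = fetch_url($feed);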

  • How to change icons of specific file types on Ubuntu 11.10?

    - by Curious Apprentice
    I want to change the file icons of some specific file types, like .html and .css. I have tried "File Type Editor (assogiate)", which is not working. I have also tried icon themes with "GNOME Tweak Tool", but that did not work properly either (I can change folder icons and dash menu icons, but not file icons). Please suggest a way to change file icons properly. I have read some articles mentioning MIME type changes, but could not get proper guidance from any of them. If there is such a way, please describe it in detail. Many thanks in advance :)
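
    A hedged sketch of the icon-theme route, since per-type icons are resolved by MIME-type name (text-html, text-css, ...) inside whatever icon theme is active. Exact directory layout varies by theme, and a valid index.theme must exist for the theme to be picked up:

        mkdir -p ~/.icons/MyTheme/48x48/mimetypes
        cp my-html-icon.png ~/.icons/MyTheme/48x48/mimetypes/text-html.png
        gtk-update-icon-cache ~/.icons/MyTheme

    After selecting MyTheme (for example via GNOME Tweak Tool), files whose MIME type is text/html should pick up the new icon.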

  • Is it possible to mod_rewrite BASED on the existence of a file/directory and uniqueID?

    - by JM4
    My site currently forces all non-www pages to use www. Ultimately, I am able to handle all unique subdomains and parse them correctly, but I am trying to achieve the following, ideally with mod_rewrite: when a consumer visits www.site.com/john4, the server processes that request as www.site.com/index.php?Agent=john4. Our requirements are:

        - The URL should continue to show www.site.com/john4 even though it was rewritten to www.site.com/index.php?Agent=john4.
        - If a file (of any extension) or a directory exists with that name, the entire process stops and it tries to pull that file instead. For example: www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists.

    Thank you ahead of time. I am completely at a loss right now.
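
    A sketch of an .htaccess that matches those requirements (the agent-name pattern is an assumption; adjust it to whatever names are legal):

        RewriteEngine On

        # existing files and directories win, untouched
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]

        # /file -> /file.php when that script exists
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.+)$ $1.php [L]

        # everything else becomes an agent lookup; an internal rewrite,
        # not a redirect, so the browser keeps showing /john4
        RewriteRule ^([A-Za-z0-9]+)/?$ index.php?Agent=$1 [L,QSA]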

  • Is there a way to recover a file that I have deleted but is still open somewhere?

    - by George Edison
    This question is related to How to recover deleted files? but it is slightly different in nature. Suppose I have a file named ~/something open in a text editor. Further suppose that I open a terminal and run the following command while the file is still open in the text editor: rm ~/something This will delete the file. Now suppose that I changed my mind and wanted to get the file back. The file is still open in the text editor, so it hasn't been removed from the disk or filesystem yet. Is there any way to recover it?
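
    Yes: as long as some process still holds the file open, the data remains reachable through /proc. A sketch (the PID and fd number below are hypothetical; lsof tells you the real ones):

        lsof | grep '/home/george/something'
        # suppose that shows the editor as PID 1234 holding fd 4:
        cp /proc/1234/fd/4 ~/something

    This must happen before the editor closes the file; once the last open handle goes away, the inode is truly released.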

  • Will the app file be synced from the dmgr side after we remove it from the installedApps path in WebSphere?

    - by wing2ofsky
    Does anyone know whether an application file will be synced from the dmgr side and regenerated after we remove it from the installedApps path? I recently got an issue from a customer: they uploaded an image file manually into the WASNode installedApps application path, and afterwards removed that file manually from the same path. But after restarting the application server process, the file was regenerated under the same installedApps path. So I suspect the file may have been resynced from the dmgr node, like the application files under the applications folder. However, I don't see that image file inside the application EAR file in the DMGR applications folder. Moreover, I ran a test myself: when I deleted a file from the installedApps application path, it was never regenerated, even after node synchronization completed. Does anybody know why? Thanks in advance.

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, none of the modules I've been able to find do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        >>> import cStringIO
        >>> example_file = cStringIO.StringIO("""\
        ... header
        ... CAGTcag
        ... TFgcACF
        ... """)
        >>> for read in parse(example_file):
        ...     print read
        ...
        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance, of the methods I could think of, is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
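
    Building on the post-answer conclusion above, a sketch of the whole-file approach (fine for chromosome files of a few hundred MB on a machine with several GB of RAM):

        def parse(fileobj, size=8):
            fileobj.readline()  # skip past the header
            data = fileobj.read().replace('\n', '').upper()
            for i in xrange(len(data) - size + 1):
                yield data[i:i + size]

    One read() plus one replace() is dramatically cheaper than re-slicing a Python string once per character, which is the hidden cost in the buffer-based loop: buffer = buffer[1:] copies the remaining buffer on every iteration.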

  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the indescriptive title; however, it's the best I could think of for the moment. Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods on it, even if its calling class is shut down. The singleton class is simple: it starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's a little like this:

        public sealed class BulkFileLoader
        {
            static BulkFileLoader instance = null;
            int currentCount = 0;

            BulkFileLoader() { }

            public static BulkFileLoader Instance
            {
                // Instantiate the instance class if necessary, and return it
            }

            public void Go()
            {
                // kick off the 'ProcessFile' thread
            }

            public int GetCurrentCount()
            {
                return currentCount;
            }

            private void ProcessFile()
            {
                while (/* more rows in the import file */)
                {
                    // insert the row into the database
                    currentCount++;
                }
            }
        }

    The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get real-time updates on the number of rows it has done so far using the GetCurrentCount() method. This works fine, except that the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.

    I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it up as a process. This fixes one problem: now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now I cannot get updates on the current count, since if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the one in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.

    In the end, I want to be able to kick off the BulkFileLoader and, at any time, find out how many rows it's processed, even if I close the application I used to start it. Can anyone see a solution to my problem?
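
    A hedged note on the core obstacle, plus a minimal sketch: a singleton lives inside one CLR, so no flavor of singleton will ever be visible from another process. Short of hosting a WCF or remoting endpoint in the loader process, the lightweight way out is to have the loader publish its progress somewhere any process can read, e.g. a tiny status file (the path and update interval are arbitrary choices):

        // inside ProcessFile(), every N rows:
        System.IO.File.WriteAllText(@"C:\temp\bulkloader.status",
                                    currentCount.ToString());

        // in any monitoring process:
        int count = System.IO.File.Exists(@"C:\temp\bulkloader.status")
            ? int.Parse(System.IO.File.ReadAllText(@"C:\temp\bulkloader.status"))
            : 0;

    The database itself works just as well as the mailbox: a progress row updated by the loader and polled by the UI.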

  • Use the matching value of a RegExp to name the output file.

    - by fx42
    I have a file, file.txt, which I want to split into many smaller ones. Each line of the file has an id field which looks like "id:1" for a line belonging to id 1. For each id N in the file, I'd like to create a file named idN.txt and put all the lines that belong to that id in it. My brute-force bash script solution reads as follows:

        count=1
        while [ $count -lt 19945 ]
        do
            cat file.txt | grep "id:$count " >> ./sets/id$count.txt
            count=`expr $count + 1`
        done

    Now this is very inefficient, as I have to read through the file about 20,000 times. Is there a way to do the same operation with only one pass through the file? What I'm probably asking for is a way to use the value that matches a regular expression to name the associated output file.
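
    A hedged single-pass alternative in POSIX awk: match() sets RSTART and RLENGTH, so the id number itself can name the output file (the close() keeps the script from exhausting file descriptors across ~20,000 ids; drop it in gawk, which manages open files, for more speed):

        awk 'match($0, /id:[0-9]+/) {
                 id = substr($0, RSTART + 3, RLENGTH - 3)
                 out = "./sets/id" id ".txt"
                 print >> out
                 close(out)
             }' file.txt

    One caveat of >>: it appends, so clear the sets directory before re-running.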
