Search Results

Search found 75611 results on 3025 pages for 'copy file'.


  • Slow WLAN file transfer between server and tablet

    - by user266985
    My file server runs Ubuntu 12.04 and shares files over Samba; it is connected via gigabit Ethernet. My desktop, running Windows 8.1, is also on gigabit Ethernet, and transfers between the two completely saturate that gigabit pipe. However, I just got a Surface Pro 2, and I'm trying to stream HD movies from the server to the device over WiFi. For some reason I can't get much past 1.5 MB/s transferring files over the network. I've tried streaming through XBMC and a standard file copy; no difference. To add to the confusion, if I connect to my guest network and then use my VPN server (installed on the router) to reach the file server, I get around 3.2 MB/s. I've been running diagnostics to find the root cause and I think I've found it, but I have no idea what is causing it or how to fix it.

    Setup:
        Router: Asus RT-N66U
        Surface Pro 2 network card: Marvell Avastar 350N (driver 19/09/2013 v14.69.24044.150)
        inSSIDer: link score 100, co-channels 0, overlapping 0, 5 GHz network channel 48+44

    iperf, file server as server, Surface Pro 2 as client; TCP performance: acceptable

        ------------------------------------------------------------
        Server listening on TCP port 5001
        TCP window size: 85.3 KByte (default)
        ------------------------------------------------------------
        [  4] local 192.168.0.90 port 5001 connected with 192.168.0.56 port 57367
        [ ID] Interval       Transfer     Bandwidth
        [  4]  0.0- 1.0 sec  10.1 MBytes  84.7 Mbits/sec
        [  4]  1.0- 2.0 sec  10.4 MBytes  87.6 Mbits/sec
        [  4]  2.0- 3.0 sec  10.6 MBytes  88.8 Mbits/sec
        [  4]  3.0- 4.0 sec  10.7 MBytes  89.5 Mbits/sec
        [  4]  4.0- 5.0 sec  10.1 MBytes  84.4 Mbits/sec
        [  4]  5.0- 6.0 sec  10.2 MBytes  85.8 Mbits/sec
        [  4]  6.0- 7.0 sec  7.04 MBytes  59.1 Mbits/sec
        [  4]  7.0- 8.0 sec  10.8 MBytes  90.2 Mbits/sec
        [  4]  8.0- 9.0 sec  10.6 MBytes  89.1 Mbits/sec
        [  4]  9.0-10.0 sec  8.62 MBytes  72.3 Mbits/sec
        [  4]  0.0-10.0 sec  99.2 MBytes  83.1 Mbits/sec

    iperf, Surface Pro 2 as server, file server as client; performance: poor

        ------------------------------------------------------------
        Client connecting to 192.168.0.56, TCP port 5001
        TCP window size: 22.9 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.0.90 port 40233 connected with 192.168.0.56 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0- 1.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  1.0- 2.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  2.0- 3.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  3.0- 4.0 sec  1.25 MBytes  10.5 Mbits/sec
        [  3]  4.0- 5.0 sec  1.62 MBytes  13.6 Mbits/sec
        [  3]  5.0- 6.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  6.0- 7.0 sec  1.38 MBytes  11.5 Mbits/sec
        [  3]  7.0- 8.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  8.0- 9.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  9.0-10.0 sec  1.62 MBytes  13.6 Mbits/sec
        [  3]  0.0-10.1 sec  15.0 MBytes  12.4 Mbits/sec

    For some reason it gets capped and I haven't got a clue why. Any suggestions?

    Edit: My link speed is reported as 270 Mbps by Windows. I'm less than two metres from the router with a clear line of sight.

    Read the article

  • File.Exists() returns false, but not in debug

    - by Tor Haugen
    I'm completely confused here, folks. My code throws an exception because File.Exists() returns false:

        public override sealed TCargo ReadFile(string fileName)
        {
            if (!File.Exists(fileName))
            {
                throw new ArgumentException("Provided file name does not exist", "fileName");
            }

    Visual Studio breaks at the throw statement, and I immediately check the value of File.Exists(fileName) in the Immediate window: it returns true. When I drag the breakpoint back up to the if statement and execute it again, it throws again. fileName is an absolute path to a file. I'm not creating the file, nor writing to it (it's there all along), and if I paste the path into the Open dialog in Notepad, it reads the file without problems. The code is executing in a background worker; that's the only complicating factor I can think of. I am positive the file has not been opened already, either in the worker thread or elsewhere. What's going on here?
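
    Not part of the original question: a minimal diagnostic sketch in C#, with illustrative names, that can help narrow this down. File.Exists() swallows every error (bad path, permissions, sharing violations) and simply reports false, so logging the resolved path and letting an actual open attempt throw usually surfaces the real cause:

        using System;
        using System.IO;

        static class FileCheck
        {
            // Hypothetical helper, not the poster's code.
            public static void Probe(string fileName)
            {
                // Show the path the worker thread actually resolves to,
                // in case a relative path or current directory is involved.
                Console.WriteLine("Full path:   " + Path.GetFullPath(fileName));
                Console.WriteLine("Current dir: " + Directory.GetCurrentDirectory());

                try
                {
                    using (var fs = new FileStream(fileName, FileMode.Open,
                                                   FileAccess.Read, FileShare.ReadWrite))
                    {
                        Console.WriteLine("Opened OK, length = " + fs.Length);
                    }
                }
                catch (Exception ex)
                {
                    // The concrete exception type (UnauthorizedAccessException,
                    // IOException, ...) is far more informative than a bare 'false'.
                    Console.WriteLine(ex.GetType().Name + ": " + ex.Message);
                }
            }
        }

    Calling Probe(fileName) from the same background worker, right before the File.Exists check, shows whether the worker sees the same path and what error the file system actually returns.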

    Read the article

  • (C++) Loading a file into a vector

    - by Alden
    This is probably a simple question, but I am new to C++ and I cannot figure this out. I am trying to load a binary file and put each byte into a vector. This works fine with a small file, but when I try to read a file larger than 410 bytes the program crashes and says: "This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information." I am using Code::Blocks on Windows. This is the code:

        #include <iostream>
        #include <fstream>
        #include <vector>

        using namespace std;

        int main()
        {
            std::vector<char> vec;
            std::ifstream file;
            file.exceptions(std::ifstream::badbit | std::ifstream::failbit | std::ifstream::eofbit);
            file.open("file.bin");
            file.seekg(0, std::ios::end);
            std::streampos length(file.tellg());
            if (length)
            {
                file.seekg(0, std::ios::beg);
                vec.resize(static_cast<std::size_t>(length));
                file.read(&vec.front(), static_cast<std::size_t>(length));
            }
            int firstChar = static_cast<unsigned char>(vec[0]);
            cout << firstChar << endl;
            return 0;
        }

    Thank you for your help!

    Read the article

  • System.IO.File.ReadAllText(path) does not read the html file.

    - by Harikrishna
    I want to read an HTML file, and for that I use System.IO.File.ReadAllText(path). It can read every other HTML file, but there is one file that is not read through this function. I have also tried:

        using (StreamReader reader = File.OpenText(fileName))
        {
            text = reader.ReadToEnd();
        }

    but the problem is the same. What could the reason be, and what can be the solution? Or is there any other way to read the file?
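
    As an editorial aside (not from the original post): when ReadAllText fails on exactly one file, two common culprits are a sharing lock held by another process and an unexpected text encoding. A hedged sketch that works around both; the class and method names are made up:

        using System.IO;
        using System.Text;

        static class TolerantReader
        {
            public static string ReadTextTolerant(string path)
            {
                // FileShare.ReadWrite lets us read even if another process
                // still has the file open for writing.
                using (var stream = new FileStream(path, FileMode.Open,
                                                   FileAccess.Read, FileShare.ReadWrite))
                // Detect a BOM if present, otherwise fall back to the system ANSI code page.
                using (var reader = new StreamReader(stream, Encoding.Default,
                                                     detectEncodingFromByteOrderMarks: true))
                {
                    return reader.ReadToEnd();
                }
            }
        }

    If this still fails, the exception it throws should say what makes that particular HTML file different.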

    Read the article

  • Copy a multi-dimensional array by value (not by reference) in PHP.

    - by Simon R
    Language: PHP. I have a form which asks users for their educational details, course details and technical details. When the form is submitted, the page goes to a different page to run processes on the information and save parts to a database. HOWEVER(!) I then need to return to the original page, where access to the original POST information is needed. I thought it would be simple to copy (by value) the multi-dimensional $_POST array to $_SESSION['post']:

        session_start();
        $_SESSION['post'] = $_POST;

    However, this only appears to place the top level of the array into $_SESSION['post'], not doing anything with the children/sub-arrays. An abbreviated form of the multi-dimensional $_POST array is as follows:

        Array
        (
            [formid] => 2
            [edu] => Array
                (
                    ['id'] => Array ( [1] => new_1 [2] => new_2 )
                    ['nameOfInstitution'] => Array ( [1] => 1 [2] => 2 )
                    ['qualification'] => Array ( [1] => blah [2] => blah )
                    ['grade'] => Array ( [1] => blah [2] => blah )
                )
            [vID] => 61
            [Submit] => Save and Continue
        )

    If I echo $_SESSION['post']['formid'] it writes "2", and if I echo $_SESSION['post']['edu'] it returns "Array". If I check that edu is an array (is_array($_SESSION['post']['edu'])) it returns true. If I echo $_SESSION['post']['edu']['id'] it returns "Array", but when checked (is_array($_SESSION['post']['edu']['id'])) it returns false and I cannot echo out any of its elements. How do I successfully copy (by value, not by reference) the whole array, including its children, to a new array?

    Read the article

  • Is there a good way to copy a Gtk widget?

    - by Jake
    Is there a way, using the GTK library in C, to clone a GTK button (for instance) and pack it somewhere else in the app? I know you can't pack the same widget twice. This code obviously wouldn't work, but it shows what happens when I attempt a shallow copy of the button:

        GtkButton *a = g_object_new(GTK_TYPE_BUTTON, "label", "o_0", NULL);
        GtkButton *b = g_memdup(a, sizeof *a);
        gtk_box_pack_start_defaults(GTK_BOX(vbox), GTK_WIDGET(b));

    There is surrounding code which creates a vbox, packs it in a window and runs gtk_main(). This results in these hard-to-understand error messages:

        (main:6044): Gtk-CRITICAL **: gtk_widget_hide: assertion `GTK_IS_WIDGET (widget)' failed
        (main:6044): Gtk-CRITICAL **: gtk_widget_realize: assertion `GTK_WIDGET_ANCHORED (widget) || GTK_IS_INVISIBLE (widget)' failed
        ** Gtk:ERROR:/build/buildd/gtk+2.0-2.18.3/gtk/gtkwidget.c:8431:gtk_widget_real_map: assertion failed: (GTK_WIDGET_REALIZED (widget))

    Along the same lines, if I were to write my own GObject (not necessarily a GTK widget), is there a good way to write a copy constructor? I'm thinking it should be an interface with optional hooks, based mostly on the properties, handling the class's hierarchy in some way. I'd want to do this, if GtkButton could use a theoretical copyable interface:

        GtkButton *b = copyable_copy(COPYABLE(a));

    Read the article

  • How can I concisely copy multiple SQL rows, with minor modifications?

    - by Steve Jessop
    I'm copying a subset of some data, so that the copy will be independently modifiable in future. One of my SQL statements looks something like this (I've changed table and column names):

        INSERT Product( ProductRangeID, Name, Weight, Price, Color, And, So, On )
        SELECT @newrangeid AS ProductRangeID, Name, Weight, Price, Color, And, So, On
        FROM Product
        WHERE ProductRangeID = @oldrangeid and Color = 'Blue'

    That is, we're launching a new product range which initially just consists of all the blue items in some specified current range, under new SKUs. In future we may change the "blue-range" versions of the products independently of the old ones. I'm pretty new at SQL: is there something clever I should do to avoid listing all those columns, or at least avoid listing them twice? I can live with the current code, but I'd rather not have to come back and modify it if new columns are added to Product. In its current form it would just silently fail to copy the new column if I forget to do that, which should show up in testing but isn't great. I am copying every column except for the ProductRangeID (which I modify), the ProductID (incrementing primary key) and two DateCreated and timestamp columns (which take their auto-generated values for the new row).

    Btw, I suspect I should probably have a separate join table between ProductID and ProductRangeID; I didn't define the tables. This is in a T-SQL stored procedure on SQL Server 2008, if that makes any difference.

    Read the article

  • C Programming. How to deep copy a struct?

    - by user69514
    I have the following two structs, where the "child" struct has a "rusage" struct as an element. I create two structs of type "child", let's call them childA and childB. How do I copy just the rusage struct from childA to childB?

        typedef struct {
            int numb;
            char *name;
            pid_t pid;
            long userT;
            long systemT;
            struct rusage usage;
        } child;

        typedef struct {
            struct timeval ru_utime; /* user time used */
            struct timeval ru_stime; /* system time used */
            long ru_maxrss;          /* maximum resident set size */
            long ru_ixrss;           /* integral shared memory size */
            long ru_idrss;           /* integral unshared data size */
            long ru_isrss;           /* integral unshared stack size */
            long ru_minflt;          /* page reclaims */
            long ru_majflt;          /* page faults */
            long ru_nswap;           /* swaps */
            long ru_inblock;         /* block input operations */
            long ru_oublock;         /* block output operations */
            long ru_msgsnd;          /* messages sent */
            long ru_msgrcv;          /* messages received */
            long ru_nsignals;        /* signals received */
            long ru_nvcsw;           /* voluntary context switches */
            long ru_nivcsw;          /* involuntary context switches */
        } rusage;

    I did the following, but I guess it copies the memory location, because if I change the value of usage in childA, it also changes in childB:

        memcpy(&childA, &childB, sizeof(rusage));

    I know that gives childB all the values from childA. I have already taken care of the other fields in childB; I just need to be able to copy the rusage struct called usage that resides in the "child" struct.

    Read the article

  • Shallow Copy vs DeepCopy in C#.NET

    Hope the example below helps to understand the difference. Please drop a comment if you have any doubts.

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        namespace ShallowCopyVsDeepCopy
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var e1 = new Emp { EmpNo = 10, EmpName = "Smith", Department = new Dep { DeptNo = 100, DeptName = "Finance" } };
                    var e2 = e1.ShallowClone();
                    e1.Department.DeptName = "Accounts";
                    Console.WriteLine(e2.Department.DeptName);   // prints "Accounts": the Department reference is shared

                    var e3 = new Emp { EmpNo = 10, EmpName = "Smith", Department = new Dep { DeptNo = 100, DeptName = "Finance" } };
                    var e4 = e3.DeepClone();
                    e3.Department.DeptName = "Accounts";
                    Console.WriteLine(e4.Department.DeptName);   // prints "Finance": the deep clone has its own Department
                }
            }

            [Serializable]
            class Dep
            {
                public int DeptNo { get; set; }
                public String DeptName { get; set; }
            }

            [Serializable]
            class Emp
            {
                public int EmpNo { get; set; }
                public String EmpName { get; set; }
                public Dep Department { get; set; }

                public Emp ShallowClone()
                {
                    // MemberwiseClone copies value fields and copies references as-is
                    return (Emp)this.MemberwiseClone();
                }

                public Emp DeepClone()
                {
                    // Serialize and deserialize to get a fully independent copy
                    MemoryStream ms = new MemoryStream();
                    BinaryFormatter bf = new BinaryFormatter();
                    bf.Serialize(ms, this);
                    ms.Seek(0, SeekOrigin.Begin);
                    object copy = bf.Deserialize(ms);
                    ms.Close();
                    return copy as Emp;
                }
            }
        }

    Read the article

  • "File" exists or not

    - by SnailTang
        ls -il
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        total 55456
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N

    and when I try to show these files, I get:

        p+st.ó·e
        p¦ft.¦·N

    Please, where do these files (or whatever they are) actually exist? Or what makes them show up here?

    Read the article

  • Remove URLs to unindex blog content from Google, then copy the blog's content to a new blog [closed]

    - by sam
    Possible Duplicate: migrating PR / rankings from one site to another

    I've been writing a blog for the past year or so, with about 300 published articles; the blog has been running under the subdomain blog.mysite.com. We are now looking to change the URL of our site, so the blog is going to have to be ported over to a subdomain on the new site. We would really like to keep the backlog / archive of all the articles we have written, but don't want to be penalized for having duplicate content. Could we just remove / unindex the URLs from Google in Webmaster Tools, then export the blog and import it back into our new blog? Would Google still see this as duplicate content, or, because I've removed the URLs, do they no longer have a copy of it? Thanks

    Read the article

  • Copying & Pasting Rows Between Grids in SQL Developer

    - by thatjeffsmith
    Apologies for slacking on the blogging front here lately. Still mentally hung over from Open World, and lots of things going on behind the scenes here in Oracle-land. Whilst (love that word) blogging is part of my job, it's not the ONLY part of my job. So, a super-quick and dirty 'trick' this morning.

    Copying a query result record as a new row in a table. Copy and paste is something everyone 'gets.' I don't know who we have to thank for that, whether it's Microsoft or Xerox, but it's been ingrained in our way of dealing with all things computers. Almost to the detriment of some of our users: they'll use copy and paste when perhaps our Export feature is superior, but I digress. Where it does work just fine is when you want to create a new row in your table that matches a row you have retrieved from an executed query.

    Just click in the gutter or row number to get the entire row selected. Once you have your data selected, do your thing, i.e. Ctrl+C or Command/Apple+C or whatever. Now open your view or table editor, go to the Data page, and ask for a new row (new record, no data). Paste in the data from the clipboard; it's smart enough to paste the separate values out to the separate columns. The clipboard saves the day, again.

    If your column orders are different, just change the order in the grids. If you have extra information, don't copy the entire row. I know, I know: "Jeff, this is too simple, why are you wasting our time here?" It seems intuitive, but how many of you actually tried this before reading it just now? I seem to get more positive feedback from the very basic user interface 101 tips than from the esoteric click-click-click-ctrl-shift-click tricks I prefer to post. Lots of interesting stuff on tap, so stay tuned!

    Read the article

  • file association in Windows 8

    - by Robith Nuriel Haq
    Associating a file with a program should be easy in Windows. However, I find it rather difficult when working on Windows 8. Associating a file with a desktop application that I install on my computer is easy, because the whole operation is exactly the same as in previous Windows releases. What I find rather difficult is associating a file with a Metro app that I download from the Store. So far, I have been using Multimedia 8 (a Metro app) to open my video files. However, this app cannot handle particular files, like *.dat, that can be associated easily with desktop video applications such as Media Player Classic and the like. When I try to associate my DAT files with Multimedia 8, there is indeed a "look for another app on this PC" option at the bottom of the "open with" pop-up. But alas, I cannot figure out how to locate my Multimedia 8 app, to which I want to associate my DAT files (as well as other video files that are not yet associated with this Metro app). If any of you knows how to locate those Metro apps, please tell me. Many thanks.

    Read the article

  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. The Linux file command does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried. There are only two hints about them that I currently have. One is that I suspect some compression is employed: I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always:

        ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00

    That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type?

    UPDATE: I don't know the exact reverse-engineering details, but most of the files in our case are zips after the first 29 (or so) bytes are ignored. So in practice the problem is solved (we know how to process the files), but in theory the question is still unanswered: I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]
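
    Editorial sketch, not from the question: given the update above, one quick way to triage a batch of these blobs is to check for the reported 16-byte signature and then look for a ZIP local-file header ("PK\x03\x04", the standard ZIP magic) after the prefix. The 29-byte prefix length is only the poster's observation, not a documented format:

        using System;
        using System.IO;
        using System.Linq;

        static class BlobProbe
        {
            // The signature observed in the question; not a known, documented magic number.
            static readonly byte[] Signature =
            {
                0xff, 0xee, 0xee, 0xdd, 0x00, 0x00, 0x00, 0x00,
                0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
            };

            public static void Inspect(string path, int prefixLength = 29)
            {
                var head = new byte[prefixLength + 4];
                using (var fs = File.OpenRead(path))
                {
                    if (fs.Read(head, 0, head.Length) < head.Length)
                    {
                        Console.WriteLine(path + ": file too short");
                        return;
                    }
                }

                bool hasSignature = head.Take(Signature.Length).SequenceEqual(Signature);
                bool zipAfterPrefix = head[prefixLength] == 0x50 && head[prefixLength + 1] == 0x4B
                                   && head[prefixLength + 2] == 0x03 && head[prefixLength + 3] == 0x04;

                Console.WriteLine($"{path}: signature={hasSignature}, zip after prefix={zipAfterPrefix}");
            }
        }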

    Read the article

  • Hard-copy approaches to time tracking

    - by STW
    I have a problem: I suck at tracking time-on-task for specific features/defects/etc. while coding them. I tend to jump between tasks a fair bit (partly due to the inherent juggling required by professional software development, partly due to my personal tendency to focus on the code itself and not the business process around the code). My personal preference is for a hard-copy system. Even with gabillions of pixels of real estate on-screen, I find it terribly distracting to keep a tracking window convenient; either I forget about it or it gets in my way. So, I'm looking for suggestions on time tracking. My only requirement is a simple system to track start/stop times per task. I've considered going as far as buying a time clock and giving each ticket a dedicated time card: when I start working on it, punch in; when done working, punch out.

    Read the article

  • A command-line clipboard copy and paste utility?

    - by Peter.O
    In Windows I used command-line clipboard copy-and-paste utilities: pclip.exe and gclip.exe. These were UnixUtils ports for Windows (but they only handled plain text). There were a couple of other native Windows utilities which could write/extract any format. I've looked for something similar in Synaptic Package Manager, but I can't find anything. Is there something there that I've missed? Or maybe this is available in bash scripting? The type of utility I'd like would be able to read/write via stdin/stdout or file-in/file-out, and handle Unicode/rich-text/picture/etc. clipboard formats... Late edit: NB: I'm not after a clipboard manager.

    Read the article

  • Is it legal to ask for photo ID and a credit card copy in the U.S.?

    - by selim
    I regularly order from online shops around the world, and I have not seen any case where the company asks for photo ID or a credit card copy. Yesterday I placed an order with linode.com, and my order is on hold because of their "fraud check system". Is it common to ask for this information in the U.S.? I have never been asked for such information here (Istanbul, Turkey). I have already asked what their motive and legal standing is, and the reason my order is held by their fraud system; I also asked whether it is because I live in Istanbul, Turkey. Their answer was as follows: "We would not be able to disclose specific information related to our fraud system." And I'm asked repeatedly whether I want to cancel my order or not. I'm not questioning the reputation of linode.com; if I were, I would not have placed an order. I think asking for photo ID is neither legal nor does it provide any security.

    Read the article

  • Using a back-end mechanism to copy files to DB and notify the application

    - by BDotA
    The scenario: a user copies large files to a local folder. I want to watch that folder, and when a new file is dropped, copy it to the database, so that once copying is done I can actually use it in my application (a C# WinForms app). It would also be great to find a way to somehow get notified in the application that copying the file to the DB is finished and the file is ready for use. I am using C#.NET on Windows. What solution/architecture do you suggest for this? For example, a Windows service running all the time watching that folder, which writes anything that gets copied there to the DB; but then how do I get notified? Is MSMQ something I could use? I don't know much about it yet. Thanks.
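
    A rough sketch of one possible shape for this, not a definitive design: a FileSystemWatcher-based component that waits until the dropped file can be opened exclusively (a common heuristic for "the copy into the folder has finished"), stores the bytes, and raises an event the WinForms app can subscribe to. SaveToDatabase and the FileStored event are made-up names; the actual insert depends on the schema and data provider in use:

        using System;
        using System.IO;
        using System.Threading;

        class FolderToDbWatcher
        {
            private readonly FileSystemWatcher _watcher;

            // The WinForms app subscribes to this to learn that a file has reached the DB.
            public event Action<string> FileStored;

            public FolderToDbWatcher(string folder)
            {
                _watcher = new FileSystemWatcher(folder);
                _watcher.Created += (s, e) => OnFileDropped(e.FullPath);
                _watcher.EnableRaisingEvents = true;
            }

            private void OnFileDropped(string path)
            {
                // While the file is still being copied into the folder,
                // an exclusive open fails; retry until it succeeds.
                while (true)
                {
                    try
                    {
                        using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None)) { }
                        break;
                    }
                    catch (IOException)
                    {
                        Thread.Sleep(500);
                    }
                }

                byte[] data = File.ReadAllBytes(path);
                SaveToDatabase(path, data);   // hypothetical helper, e.g. an INSERT with a varbinary parameter
                FileStored?.Invoke(path);     // notify subscribers (the WinForms app)
            }

            private void SaveToDatabase(string path, byte[] data)
            {
                // Placeholder only: the schema and provider are not specified in the question.
            }
        }

    Note that FileSystemWatcher raises its events on a thread-pool thread, so a WinForms handler for FileStored would need to marshal back to the UI thread (e.g. with Control.Invoke). If the watcher and the UI live in different processes (say, a Windows service plus the WinForms app), something like MSMQ or a polling query against the database could carry the "file is ready" notification instead.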

    Read the article

  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares.

    I got this working for Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions. Other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't have to deal with the login window. The problem is with the XP clients: they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that.

    So from what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR. But I've had no success so far on that front. I've tried all the following in the local group policy editor on the Windows 7 server:

        Set "Network access: Let Everyone permissions apply to anonymous users" to Enabled
        Set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled
        Added the names of the corresponding shares to "Network access: Shares that can be accessed anonymously"
        Added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment

    Any advice would be highly appreciated... I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that any sort of anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.

    Read the article

  • Moving Ubuntu to a new hdd

    - by jaurisan
    I have a 300 GB HDD which I am currently using in my older PC. Now I want to copy those 300 GB to a new 1 TB HDD (installed in a new computer). My "problem" is that the 1 TB HDD already has a 50 GB partition with Windows XP (the rest of the space is not partitioned). The 300 GB disk has a 240 GB partition for Ubuntu, and the rest is a FAT partition which I don't care whether it gets copied to the new disk or not. So how can I transfer the entire Ubuntu install to the new hard disk and still be able to boot XP? Is there a way or tool that can help me do it over the LAN, so I won't have to take the HDD out of the new PC and put it in the old one to do the copy?

    Read the article

  • Copy an Ubuntu install?

    - by Hailwood
    So, the short story is that I want to grab my Ubuntu install, back it up to an external disk, delete everything from my HDD, repartition the HDD, and put my Ubuntu install back. The long story: I finally got my recovery discs, so I don't need the recovery partitions, not to mention my HDD is kind of messy, so I am going to wipe it and put everything back. The thing is, I don't want to have to re-install everything; I would rather just copy/paste my Ubuntu install if possible. Oh, I should also mention that I will be dual booting Windows!

    Read the article

  • Updated copy of the OBIEE Tuning whitepaper

    - by inowodwo
    The Product Assurance team have released an updated copy of the OBIEE Tuning Whitepaper. You can find it on the PA blog at https://blogs.oracle.com/pa/entry/test or via the support note "OBIEE 11g Infrastructure Performance Tuning Guide" (Doc ID 1333049.1): https://support.us.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=1333049.1&recomm=Y

    This new, revised document contains the following useful tuning items:
        1. New improved HTTP Server caching algorithm.
        2. Oracle iPlanet Web Server tuning parameters.
        3. New tuning parameter settings / values for OPIS/OBIS components.

    Read the article

  • Block-level deduplicating filesystem

    - by James Haigh
    I'm looking for a deduplicating copy-on-write filesystem solution for general user data such as /home and backups of it. It should use online/inline/synchronous deduplication at the block level using secure hashing (for negligible chance of collisions) such as SHA256 or TTH. Duplicate blocks need not even touch the disk. The idea is that I should be able to just copy /home/<user> to an external HDD with the same such filesystem to do a backup. Simple. No messing around with incremental backups where corruption to any of the snapshots will nearly always break all later snapshots, and no need to use a specific tool to delete or 'checkout' a snapshot. Everything should simply be done from the file browser without worry. Can you imagine how easy this would be? I'd never have to think twice about backing up again!

    I don't mind a performance hit; reliability is the main concern. Although, with specific implementations of cp, mv and scp, and a file browser plugin, these operations would be very fast, especially when there is a lot of duplication, as they would only need to transfer the absent blocks. Accidentally using conventional copy tools that do not integrate with the FS would merely take longer, waste some bandwidth when copying remotely and waste some CPU, as the duplicate data would be re-read, re-transferred and re-hashed (although nothing would be re-written), but would absolutely not corrupt anything. (Some filesharing software may also be able to benefit by integrating with the FS.)

    So what's the best way of doing this? I've looked at some options:

        lessfs - Looks unmaintained. Any good?
        Opendedup/SDFS (http://www.opendedup.org/) - Java? Could I use this on Android?! What does SDFS stand for?
        Btrfs (https://en.wikipedia.org/wiki/Btrfs#Features) - Some patches floating around on mailing list archives, but no real support.
        ZFS (https://en.wikipedia.org/wiki/ZFS#Linux) - Hopefully they'll one day relicense under a true Free/Opensource GPL-compatible licence.

    Also, two years ago I had a go at an attempt in Python using FUSE at the file level, to be used on top of a typical solid FS such as ext4, but I found FUSE for Python under-documented and didn't manage to implement all of the system calls.

    Read the article

  • Referencing external javascript vs. hosting my own copy

    - by Mr. Jefferson
    Say I have a web app that uses jQuery. Is it better practice to host the necessary JavaScript files on my own servers along with my website files, or to reference them on jQuery's CDN (example: http://code.jquery.com/jquery-1.7.1.min.js)? I can see pros for both sides:

        If it's on my servers, that's one less external dependency; if jQuery went down or changed their hosting structure or something like that, my app would break. But I feel like that won't happen often; there must be lots of small-time sites doing this, and the jQuery team will want to avoid breaking them.
        If it's on my servers, that's one less external reference that someone could call a security issue.
        If it's referenced externally, then I don't have to worry about the bandwidth to serve the files (though I know it's not that much).
        If it's referenced externally and I'm deploying this web site to lots of servers that need to have their own copies of all the files, then it's one less file I have to remember to copy/update.

    Read the article
