Search Results

Search found 9235 results on 370 pages for 'disk cloning'.


  • sudo apt-get install python.pip python-dev Gives Error

    - by user2539745
    I am learning Django from http://gettingstartedwithdjango.com/ on Windows 7 32-bit. The tutorial asked me to install VirtualBox and Vagrant (the tutorial used precise64, which had issues on my PC, so I installed precise32 instead), which I did. It then asked me to run sudo apt-get install python-dev python.pip, but that gives me this error:

        vagrant@precise32:~$ sudo apt-get install python.pip python-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'python-pip' for regex 'python.pip'
        Note, selecting 'python-pipeline' for regex 'python.pip'
        The following extra packages will be installed:
          libexpat1 libexpat1-dev libpython2.7 python-pkg-resources python-setuptools
          python-support python2.7 python2.7-dev python2.7-minimal
        Suggested packages:
          python-distribute python-distribute-doc python2.7-doc binfmt-support
        The following NEW packages will be installed:
          libexpat1-dev libpython2.7 python-dev python-pip python-pipeline
          python-pkg-resources python-setuptools python-support python2.7-dev
        The following packages will be upgraded:
          libexpat1 python2.7 python2.7-minimal
        3 upgraded, 9 newly installed, 0 to remove and 63 not upgraded.
        Need to get 34.7 MB/35.7 MB of archives.
        After this operation, 42.0 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Err http://us.archive.ubuntu.com/ubuntu/ precise-updates/main python2.7 i386 2.7.3-0ubuntu3.1
          404 Not Found [IP: 91.189.91.15 80]
        Err http://us.archive.ubuntu.com/ubuntu/ precise-updates/main python2.7-minimal i386 2.7.3-0ubuntu3.1
          404 Not Found [IP: 91.189.91.15 80]
        Err http://us.archive.ubuntu.com/ubuntu/ precise-updates/main libpython2.7 i386 2.7.3-0ubuntu3.1
          404 Not Found [IP: 91.189.91.15 80]
        Err http://us.archive.ubuntu.com/ubuntu/ precise-updates/main python2.7-dev i386 2.7.3-0ubuntu3.1
          404 Not Found [IP: 91.189.91.15 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/p/python2.7/python2.7_2.7.3-0ubuntu3.1_i386.deb  404 Not Found [IP: 91.189.91.15 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/p/python2.7/python2.7-minimal_2.7.3-0ubuntu3.1_i386.deb  404 Not Found [IP: 91.189.91.15 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/p/python2.7/libpython2.7_2.7.3-0ubuntu3.1_i386.deb  404 Not Found [IP: 91.189.91.15 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/p/python2.7/python2.7-dev_2.7.3-0ubuntu3.1_i386.deb  404 Not Found [IP: 91.189.91.15 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Please help - what should I do?


  • .Net Custom Configuration Section and Saving Changes within PropertyGrid

    - by Paul
    If I load the My.Settings object (app.config) into a PropertyGrid, I am able to edit the property inside the propertygrid and the change is automatically saved. PropertyGrid1.SelectedObject = My.Settings I want to do the same with a Custom Configuration Section. Following this code example (from here http://www.codeproject.com/KB/vb/SerializePropertyGrid.aspx), he is doing explicit serialization to disk when a "Save" button is pushed. Public Class Form1 'Load AppSettings Dim _appSettings As New AppSettings() Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click _appSettings = AppSettings.Load() ' Actually change the form size Me.Size = _appSettings.WindowSize PropertyGrid1.SelectedObject = _appSettings End Sub Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click _appSettings.Save() End Sub End Class In my code, my custom section Inherits from ConfigurationSection (see below) Question: Is there something built into ConfigurationSection class that does the autosave? If not, what is the best way to handle this, should it be in the PropertyGrid.PropertyValueChagned? (how does the My.Settings handle this internally?) Here is the example Custom Class that I am trying to get to auto-save and how I load into property grid. Dim config As System.Configuration.Configuration = _ ConfigurationManager.OpenExeConfiguration( _ ConfigurationUserLevel.None) PropertyGrid2.SelectedObject = config.GetSection("CustomSection") Public NotInheritable Class CustomSection Inherits ConfigurationSection ' The collection (property bag) that contains ' the section properties. Private Shared _Properties As ConfigurationPropertyCollection ' The FileName property. Private Shared _FileName As New ConfigurationProperty("fileName", GetType(String), "def.txt", ConfigurationPropertyOptions.IsRequired) ' The MasUsers property. Private Shared _MaxUsers _ As New ConfigurationProperty("maxUsers", _ GetType(Int32), 1000, _ ConfigurationPropertyOptions.None) ' The MaxIdleTime property. Private Shared _MaxIdleTime _ As New ConfigurationProperty("maxIdleTime", _ GetType(TimeSpan), TimeSpan.FromMinutes(5), _ ConfigurationPropertyOptions.IsRequired) ' CustomSection constructor. Public Sub New() _Properties = New ConfigurationPropertyCollection() _Properties.Add(_FileName) _Properties.Add(_MaxUsers) _Properties.Add(_MaxIdleTime) End Sub 'New ' This is a key customization. ' It returns the initialized property bag. Protected Overrides ReadOnly Property Properties() _ As ConfigurationPropertyCollection Get Return _Properties End Get End Property <StringValidator( _ InvalidCharacters:=" ~!@#$%^&*()[]{}/;'""|\", _ MinLength:=1, MaxLength:=60)> _ <EditorAttribute(GetType(System.Windows.Forms.Design.FileNameEditor), GetType(System.Drawing.Design.UITypeEditor))> _ Public Property FileName() As String Get Return CStr(Me("fileName")) End Get Set(ByVal value As String) Me("fileName") = value End Set End Property <LongValidator(MinValue:=1, _ MaxValue:=1000000, ExcludeRange:=False)> _ Public Property MaxUsers() As Int32 Get Return Fix(Me("maxUsers")) End Get Set(ByVal value As Int32) Me("maxUsers") = value End Set End Property <TimeSpanValidator(MinValueString:="0:0:30", _ MaxValueString:="5:00:0", ExcludeRange:=False)> _ Public Property MaxIdleTime() As TimeSpan Get Return CType(Me("maxIdleTime"), TimeSpan) End Get Set(ByVal value As TimeSpan) Me("maxIdleTime") = value End Set End Property End Class 'CustomSection


  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1 GB of RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory, but I thought that for a 500 MB file I should still be able to do it in memory, since I have 1 GB available. However, when I try to read in the file, Python seems to allocate a lot more memory than the file needs on disk. So even with 1 GB of RAM I'm not able to read the 500 MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile=open("links.csv", "r")

        edges=[]
        count=0
        #count the total number of lines in the file
        for line in infile:
            count=count+1

        total=count
        print "Total number of lines: ",total

        infile.seek(0)
        count=0
        for line in infile:
            edge=tuple(map(int,line.strip().split(",")))
            edges.append(edge)
            count=count+1
            # for every million lines print memory consumption
            if count%1000000==0:
                print "Position: ", edge
                print "Read ",float(count)/float(total)*100,"%."
                mem=sys.getsizeof(edges)
                for edge in edges:
                    mem=mem+sys.getsizeof(edge)
                    for node in edge:
                        mem=mem+sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)
        Read  3.26693612356 %.
        Memory (Bytes):  64348736
        Position:  (38857, 103574)
        Read  6.53387224712 %.
        Memory (Bytes):  128816320
        Position:  (83609, 63498)
        Read  9.80080837067 %.
        Memory (Bytes):  192553000
        Position:  (139692, 1078610)
        Read  13.0677444942 %.
        Memory (Bytes):  257873392
        Position:  (205067, 153705)
        Read  16.3346806178 %.
        Memory (Bytes):  320107588
        Position:  (283371, 253064)
        Read  19.6016167413 %.
        Memory (Bytes):  385448716
        Position:  (354601, 377328)
        Read  22.8685528649 %.
        Memory (Bytes):  448629828
        Position:  (441109, 3024112)
        Read  26.1354889885 %.
        Memory (Bytes):  512208580

    Already after reading only 25% of the 500 MB file, Python consumes 500 MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory efficient. Is there a better way to do it, so that I can read my 500 MB file into my 1 GB of memory?
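    One way to make the same data fit (a sketch, not part of the original post) is to avoid Python tuples entirely and keep the pairs in a compact NumPy array, sorting rows by the second column with argsort. Assumptions: NumPy is installed, and both columns fit in 32-bit integers; the argsort index still costs extra memory at peak, so treat this as an illustration of the idea rather than a guaranteed fit in 1 GB.

        # Sketch: flat 32-bit storage (8 bytes per row, roughly 245 MB for the
        # 30.6M rows) instead of a list of Python tuple objects.
        import numpy as np

        # First pass: count rows, as the original code already does.
        with open("links.csv") as f:
            n = sum(1 for _ in f)

        edges = np.empty((n, 2), dtype=np.int32)   # assumption: values fit in 32 bits

        with open("links.csv") as f:
            for i, line in enumerate(f):
                a, b = line.split(",")
                edges[i, 0] = int(a)
                edges[i, 1] = int(b)

        edges = edges[edges[:, 1].argsort()]       # sort rows by the second column
        print(edges[:10])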


  • dojo.require() prevents Firefox from rendering the page

    - by Eduard Wirch
    I'm experiencing strange behavior with Firefox and Dojo. I have an HTML page with these lines in the <head> section:

        ...
        <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script>
        <script type="text/javascript">
            dojo.require("dojo.number");
        </script>
        ...

    Sometimes the page loads normally. But sometimes it won't: Firefox will fetch the whole HTML page but not render it. I see only a gray window. After some experiments I figured out that the rendering problem has something to do with the load time of the HTML. Firefox starts evaluating the HTML page while loading it. If the page takes too long to load, the above JavaScript will be executed BEFORE the HTML finishes loading. If this happens, I get the gray window. Asking Firefox to show me the source code of the page displays the correct, complete HTML code. BUT: if I save the page to disk (File - Save Page As...) the HTML code is truncated and the above part looks like this:

        ...
        <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script>
        <script type="text/javascript">
            dojo.require("dojo.number");
        </script></head><body></body></html>

    This explains why I get to see a gray area. But why does this code appear there? I assume the require() method of Dojo does something "evil", but I can't figure out what. There is no document.write("</head><body></body></html>"); in the Dojo code - I checked for it. The problem is fixed if I place the dojo.require("dojo.number"); statement in the window.load event:

        <script type="text/javascript">
            window.load=function() {
                dojo.require("dojo.number");
            }
        </script>

    But I'm curious why this happens. Is there a JavaScript function which forces Firefox to stop evaluating the page? Does Dojo do something "bad"? Can anyone explain this behavior to me?

    EDIT: Dojo 1.3.1, no JS errors or warnings.


  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.Net web farm as close to real-time as possible. The objectives of this question are to: Identify the best way to monitor several Windows Server production boxes during short (minutes long) period of ridiculous load Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI such as CPU, Memory and Disk Paging. I am defining my time constraints as soon as possible with 120 seconds delayed being the absolute upper limit. Monitor whether any given box is up (with "up" being defined as responding web requests in a reasonable amount of time) Here are more details, things I've tried, etc. I am not interested in logging. We have logging solutions in place. I have looked at solutions such as ELMAH which don't provide much in the way of hardware monitoring and are not visible across an entire web farm. ASP.Net Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis. We are on Amazon Web Services and we have looked into CloudWatch. It looks great but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis but does not help us real-time Stuff like JetBrains profiler is good for testing but again, not helpful during real-time monitoring. The closest out-of-box solution I've seen is Nagios which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run itself on and a good deal of manual configuration. I'd prefer to not spend my time mining config files and then be up a creek when it fails in production since Linux is not my main (or even secondary) environment. Are there any out-of-box solutions that I am missing? Obviously a windows-based solution that is easy to setup is ideal. I don't require many bells and whistles. In the absence of an out-of-box solution, it seems easy for me to write something simple to handle what I need. I've been thinking a simple client-server setup where the server requests a few WMI metrics from each client over http and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something. If the client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback.
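    For what it's worth, the roll-your-own idea sketched at the end can stay very small. Below is an illustrative collector loop in Python, assuming each farm member runs some tiny agent that answers an HTTP GET with its CPU/memory/paging numbers pulled from WMI; the host names, port, URL path and table layout are all invented for the example.

        # Hypothetical poll-and-store loop; nothing here is an existing product.
        import json, sqlite3, time, urllib.request

        HOSTS = ["web01", "web02", "web03"]            # example farm members
        DB = sqlite3.connect("farm_metrics.db")
        DB.execute("""CREATE TABLE IF NOT EXISTS metrics
                      (ts REAL, host TEXT, cpu REAL, mem REAL, paging REAL, up INTEGER)""")

        def poll(host, timeout=5):
            # Assumed agent response: {"cpu": ..., "mem": ..., "paging": ...}
            try:
                with urllib.request.urlopen("http://%s:8080/metrics" % host, timeout=timeout) as r:
                    m = json.load(r)
                return (time.time(), host, m["cpu"], m["mem"], m["paging"], 1)
            except Exception:
                # No answer within the timeout: record the box as down.
                return (time.time(), host, None, None, None, 0)

        while True:
            DB.executemany("INSERT INTO metrics VALUES (?,?,?,?,?,?)", [poll(h) for h in HOSTS])
            DB.commit()
            time.sleep(30)    # well inside the 120-second freshness limit

    A box that fails to answer within the timeout is simply recorded as down, which also covers the "is it up" requirement without a separate probe.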


  • Modeling distribution of performance measurements

    - by peterchen
    How would you mathematically model the distribution of repeated real life performance measurements - "Real life" meaning you are not just looping over the code in question, but it is just a short snippet within a large application running in a typical user scenario? My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers - often with a multiple of the average time. (The behavior is understandable considering the factors contributing to first execution penalty). My goal is to model aggregate values that reasonably reflect this, and can be calculated from aggregate values (like for the Gaussian, calculate mu and sigma from N, sum of values and sum of squares). In other terms, number of repetitions is unlimited, but memory and calculation requirements should be minimized. A normal Gaussian distribution can't model the long tail appropriately and will have the average biased strongly even by a very small percentage of outliers. I am looking for ideas, especially if this has been attempted/analysed before. I've checked various distributions models, and I think I could work out something, but my statistics is rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;) Other aspects / ideas: Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately. Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly) - haven't found anything, but with some assumptions about the distribution it should be possible in principle. Why (since it was asked) - For a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all data. However, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a gaussian deviation plus first run penalty is ok. That doesn't work anymore for the distributions found above. [edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
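    One aggregate-friendly possibility (a sketch of the general idea, not a solution the asker settled on) is to model the times as log-normal, which keeps the skewed tail while requiring only three running sums per group:

        % Sketch: log-normal model driven purely by aggregates.
        N, \qquad S_1 = \sum_i \ln t_i, \qquad S_2 = \sum_i (\ln t_i)^2,
        \qquad\Longrightarrow\qquad
        \hat\mu = \frac{S_1}{N}, \qquad
        \hat\sigma^2 = \frac{S_2}{N} - \hat\mu^2 .

        % A guarantee like "at most 0.1% of runs exceed L" is then checked via
        P(T > L) \;=\; 1 - \Phi\!\left(\frac{\ln L - \hat\mu}{\hat\sigma}\right) \;\le\; 10^{-3}.

    The "two humps" case can be handled with the same three sums kept per component, at the cost of an assignment rule (for example a threshold on t) deciding which component a measurement feeds.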


  • mciSendString cannot save to directory path

    - by robUK
    Hello, VS C# 2008 SP1 I have a created a small application that records and plays audio. However, my application needs to save the wave file to the application data directory on the users computer. The mciSendString takes a C style string as a parameter and has to be in 8.3 format. However, my problem is I can't get it to save. And what is strange is sometime it does and sometimes it doesn't. Howver, most of the time is failes. However, if I save directly to the C drive it works first time everything. I have used 3 different methods that I have coded below. The error number that I get when it fails is 286."The file was not saved. Make sure your system has sufficient disk space or has an intact network connection" Many thanks for any suggestins, [DllImport("winmm.dll",CharSet=CharSet.Auto)] private static extern uint mciSendString([MarshalAs(UnmanagedType.LPTStr)] string command, StringBuilder returnValue, int returnLength, IntPtr winHandle); [DllImport("winmm.dll", CharSet = CharSet.Auto)] private static extern int mciGetErrorString(uint errorCode, StringBuilder errorText, int errorTextSize); [DllImport("Kernel32.dll", CharSet=CharSet.Auto)] private static extern int GetShortPathName([MarshalAs(UnmanagedType.LPTStr)] string longPath, [MarshalAs(UnmanagedType.LPTStr)] StringBuilder shortPath, int length); // Stop recording private void StopRecording() { // Save recorded voice string shortPath = this.shortPathName(); string formatShortPath = string.Format("save recsound \"{0}\"", shortPath); uint result = 0; StringBuilder errorTest = new StringBuilder(256); // C:\DOCUME~1\Steve\APPLIC~1\Test.wav // Fails result = mciSendString(string.Format("{0}", formatShortPath), null, 0, IntPtr.Zero); mciGetErrorString(result, errorTest, errorTest.Length); // command line convention - fails result = mciSendString("save recsound \"C:\\DOCUME~1\\Steve\\APPLIC~1\\Test.wav\"", null, 0, IntPtr.Zero); mciGetErrorString(result, errorTest, errorTest.Length); // 8.3 short format - fails result = mciSendString(@"save recsound C:\DOCUME~1\Steve\APPLIC~1\Test.wav", null, 0, IntPtr.Zero); mciGetErrorString(result, errorTest, errorTest.Length); // Save to C drive works everytime. result = mciSendString(@"save recsound C:\Test.wav", null, 0, IntPtr.Zero); mciGetErrorString(result, errorTest, errorTest.Length); mciSendString("close recsound ", null, 0, IntPtr.Zero); } // Get the short path name so that the mciSendString can save the recorded wave file private string shortPathName() { string shortPath = string.Empty; long length = 0; StringBuilder buffer = new StringBuilder(256); // Get the length of the path length = GetShortPathName(this.saveRecordingPath, buffer, 256); shortPath = buffer.ToString(); return shortPath; }


  • Global Entity Framework Context in WPF Application

    - by OffApps Cory
    Good day, I am in the middle of development of a WPF application that is using Entity Framework (.NET 3.5). It accesses the entities in several places throughout. I am worried about consistency throughout the application in regard to the entities. Should I be instancing separate contexts in my different views, or should I (and is a a good way to do this) instance a single context that can be accessed globally? For instance, my entity model has three sections, Shipments (with child packages and further child contents), Companies/Contacts (with child addresses and telephones), and disk specs. The Shipments and EditShipment views access the DiskSpecs, and the OptionsView manages the DiskSpecs (Create, Edit, Delete). If I edit a DiskSpec, I have to have something in the ShipmentsView to retrieve the latest specs if I have separate contexts right? If it is safe to have one overall context from which the rest of the app retrieves it's objects, then I imagine that is the way to go. If so, where would that instance be put? I am using VB.NET, but I can translate from C# pretty good. Any help would be appreciated. I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data. Update: OK so I have changed my app as follows: All contexts are created in Using Blocks to dispose of them after they are no longer needed. When loaded, all entities are detatched from context before it is disposed. A new property in the MainViewModel (ContextUpdated) raises an event that all of the other ViewModels subscribe to which runs that ViewModels RefreshEntities method. After implementing this, I started getting errors saying that an entity can only be referenced by one ChangeTracker at a time. Since I could not figure out which context was still referencing the entity (shouldn't be any context right?) I cast the object as IEntityWithChangeTracker, and set SetChangeTracker to nothing (Null). This has let to the current problem: When I Null the changeTracker on the Entity, and then attach it to a context, it loses it's changed state and does not get updated to the database. However if I do not null the change tracker, I can't attach. I have my own change tracking code, so that is not a problem. My new question is, how are you supposed to do this. A good example Entity query and entity save code snipped would go a long way, cause I am beating my head in trying to get what I once thought was a simple transaction to work. Any help would elevate you to near god-hood.


  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising):

        <script>
            var largeArray = [/* lots of stuff in here */];
        </script>

    In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.) Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console:

        Error: script stack space quota is exhausted

    In Chrome, I simply get the sad-tab page.

    Cut to the chase, already: Is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong...

    Edit 1

    @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now.

    @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards.

    @Phrogz: each element looks something like this:

        {dateTime:new Date(1296176400000), terminalId:'terminal999',
         'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680,
         'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false',
         'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0}

    @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space ... and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all!

    *other than the obvious: sending less data to the browser
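    One mitigation that gets suggested for cases like this (an illustration only - whether it actually avoids the crash depends on the browser) is to ship the data as a separate JSON document and JSON.parse it after the page loads, rather than asking the JS engine to evaluate a 44 MB script literal; the new Date(...) calls then have to become plain timestamps. A hypothetical server-side sketch of producing such a file:

        # Sketch: emit the dataset as plain JSON instead of a JavaScript literal.
        # Field names follow the element shown in the question; "records" and the
        # output filename are invented for the example.
        import json

        records = [
            {"dateTime": 1296176400000,      # epoch millis instead of new Date(...)
             "terminalId": "terminal999",
             "General___BuildVersion": "10.05a_V110119_Beta",
             "SSM___ExtId": 26680,
             "MD_CDMA_NETLOADER_NO_BCAST___Valid": "false",
             "MD_CDMA_NETLOADER_NO_BCAST___PngAttempt": 0},
            # ... ~210,000 of these in production ...
        ]

        with open("largeArray.json", "w") as out:
            json.dump(records, out, separators=(",", ":"))   # compact encoding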


  • Problems doing asynch operations in C# using Mutex.

    - by firoso
    I've tried this MANY ways, here is the current iteration. I think I've just implemented this all wrong. What I'm trying to accomplish is to treat this Asynch result in such a way that until it returns AND I finish with my add-thumbnail call, I will not request another call to imageProvider.BeginGetImage. To Clarify, my question is two-fold. Why does what I'm doing never seem to halt at my Mutex.WaitOne() call, and what is the proper way to handle this scenario? /// <summary> /// re-creates a list of thumbnails from a list of TreeElementViewModels (directories) /// </summary> /// <param name="list">the list of TreeElementViewModels to process</param> public void BeginLayout(List<AiTreeElementViewModel> list) { // *removed code for canceling and cleanup from previous calls* // Starts the processing of all folders in parallel. Task.Factory.StartNew(() => { thumbnailRequests = Parallel.ForEach<AiTreeElementViewModel>(list, options, ProcessFolder); }); } /// <summary> /// Processes a folder for all of it's image paths and loads them from disk. /// </summary> /// <param name="element">the tree element to process</param> private void ProcessFolder(AiTreeElementViewModel element) { try { var images = ImageCrawler.GetImagePaths(element.Path); AsyncCallback callback = AddThumbnail; foreach (var image in images) { Console.WriteLine("Attempting Enter"); synchMutex.WaitOne(); Console.WriteLine("Entered"); var result = imageProvider.BeginGetImage(callback, image); } } catch (Exception exc) { Console.WriteLine(exc.ToString()); // TODO: Do Something here. } } /// <summary> /// Adds a thumbnail to the Browser /// </summary> /// <param name="result">an async result used for retrieving state data from the load task.</param> private void AddThumbnail(IAsyncResult result) { lock (Thumbnails) { try { Stream image = imageProvider.EndGetImage(result); string filename = imageProvider.GetImageName(result); string imagePath = imageProvider.GetImagePath(result); var imageviewmodel = new AiImageThumbnailViewModel(image, filename, imagePath); thumbnailHash[imagePath] = imageviewmodel; HostInvoke(() => Thumbnails.Add(imageviewmodel)); UpdateChildZoom(); //synchMutex.ReleaseMutex(); Console.WriteLine("Exited"); } catch (Exception exc) { Console.WriteLine(exc.ToString()); // TODO: Do Something here. } } }
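    As an aside, the gating being attempted here - "do not start the next BeginGetImage until the previous one has returned and its thumbnail has been added" - is usually expressed with a semaphore rather than a Mutex, because a Mutex must be released by the thread that acquired it and the completion callback runs on a different thread. A language-neutral sketch of that pattern (illustrative Python with placeholder load/add functions, not the question's .NET API):

        # Sketch of the sequential-gate pattern; load_image/add_thumbnail are
        # stand-ins for the question's imageProvider and UI calls.
        import asyncio

        gate = asyncio.Semaphore(1)          # 1 = strictly one outstanding request

        async def load_image(path):
            await asyncio.sleep(0.1)         # placeholder for BeginGetImage/EndGetImage
            return b"...image bytes..."

        def add_thumbnail(path, data):
            print("thumbnail added:", path)  # placeholder for the UI update

        async def process_folder(paths):
            for path in paths:
                async with gate:             # released only after add_thumbnail finishes
                    data = await load_image(path)
                    add_thumbnail(path, data)

        asyncio.run(process_folder(["a.png", "b.png", "c.png"]))

    In .NET terms the analogous primitive would be a counting semaphore released from the callback; raising the count allows a bounded amount of overlap instead of strict one-at-a-time processing.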


  • Cannot install Apache Web Server on Ubuntu, Amazon WS

    - by Eugene Retunsky
    I enter the command apt-get install apache2 --fix-missing (as the root user) and this is what I receive:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1
          libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert
        Suggested packages:
          apache2-doc apache2-suexec apache2-suexec-custom openssl-blacklist
        The following NEW packages will be installed:
          apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common
          libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert
        0 upgraded, 10 newly installed, 0 to remove and 36 not upgraded.
        Need to get 2,945 kB/3,141 kB of archives.
        After this operation, 10.4 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Err http://us-west-1.ec2.archive.ubuntu.com/ubuntu/ oneiric-updates/main apache2.2-bin i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 10.161.51.124 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-bin i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-utils i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-common i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-mpm-worker i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2 i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-bin_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-utils_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-common_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-mpm-worker_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Unable to correct missing packages.
        E: Aborting install.

    Any help is appreciated.


  • Good Secure Backups Developers at Home

    - by slashmais
    What is a good, secure, method to do backups, for programmers who do research & development at home and cannot afford to lose any work?

    Conditions:

    1. The backups must ALWAYS be within reasonably easy reach.
    2. Internet connection cannot be guaranteed to be always available.
    3. The solution must be either FREE or priced within reason, and subject to 2 above.

    Status Report

    This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):

    - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
    - Storebackup is a backup utility that stores files on other disks.
    - mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
    - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network based backup program.
    - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport independant automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
    - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
    - rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
    - Duplicity backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync, [for] incremental archives

    Other Possibilities:

    - Using a Distributed Version Control System (DVCS) such as Git(/Easy Git), Bazaar, Mercurial answers the need to have the backup available locally.
    - Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account.

    Strategies: See crazyscot's answer.
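    A minimal sketch of the last strategy above (compress the working tree, keep a local copy within easy reach, optionally push the archive elsewhere). Paths and the optional USB destination are examples only, and anything destined for public storage should be encrypted first:

        # Sketch: periodic tarball of the working tree plus an optional second copy.
        import datetime, pathlib, shutil, tarfile

        WORK = pathlib.Path.home() / "projects"     # example: what to protect
        DEST = pathlib.Path.home() / "backups"      # always-within-reach copy
        DEST.mkdir(exist_ok=True)

        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        archive = DEST / ("work-%s.tar.gz" % stamp)

        with tarfile.open(archive, "w:gz") as tar:
            tar.add(WORK, arcname=WORK.name)

        # Optional second copy onto a USB disk or mounted share when present.
        usb = pathlib.Path("/Volumes/BackupDisk")   # example mount point
        if usb.exists():
            shutil.copy2(archive, usb / archive.name)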


  • Mac OS - Built SVN from source, now Apache2 not loading sites

    - by Geuis
    This relates to another question I asked earlier today. I built SVN 1.6.2 from source. In the process, it has completely screwed up my dev environment. After I built SVN, Apache wasn't loading. It was giving me this error: Syntax error on line 117 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec /apache2/mod_dav_svn.so into server: dlopen(/usr/libexec/apache2/mod_dav_svn.so, 10): no suitable image found. Did find:\n\t/usr/libexec/apache2/mod_dav_svn.so: mach-o, but wrong architecture It appears that SVN over-wrote the old mod_dav_svn.so and I am not able to get it to build as FAT, and I can't recover whatever was originally there. I resolved this(temporarily?) by commenting out the line that was loading the mod_dav_svn.so and got Apache to start at this point. However, even though Apache is running I am now getting this error when trying to access my dev sites: Directory index forbidden by Options directive: /usr/share/tomcat6/webapps/ROOT/ I have Apache2 sitting in front of Tomcat6. I access my local dev site using the internal name "http://localthesite". I have had virtual directories set up that have worked until this SVN debacle. Tomcat is installed at /usr/local/apache-tomcat, and webapps is /usr/local/apache-tomcat/webapps. Our production servers deploy tomcat to /usr/share/tomcat6, so I have symlinks setup on my system to replicate this as well. These point back to the actual installation path. This has all been working fine as well. None of our configurations for Apache2, Tomcat, or .htaccess have changed. Over the weekend, I performed a "Repair Disk Permissions" on the system. This was before I discovered the mod_dav_svn.so problem. I have been reading up on this all morning and the most common answer is that there is an Options -Indexes set. We have this in a config file, but it was there before and when I removed it during testing, I still got the same errors from Apache. At this point, I'm assuming I either totally borked the native Apache2 installation on this Mac, or that there is a permissions error somewhere that I'm missing. The permissions error could be from the SVN installation, or from my repair process. Does anyone have any idea what could be the problem? I'm totally blocked right now and have no idea where to check next.


  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ): SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn FROM (SELECT /*+ FIRST_ROWS(1) */ mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn, ROWNUM AS ora_rn FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt, mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind, mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt, mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code, mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level, mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id, mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin, mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip, mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name, mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name, mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name, mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id, mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time, mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS 
mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt, mbr.work_status_idn AS mbr_work_status_idn FROM mbr JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn WHERE mbr_identfn.mbr_idn = mbr.mbr_idn AND mbr_identfn.identfd_type = :identfd_type_1 AND mbr_identfn.identfd_number = :identfd_number_1 AND mbr_identfn.entity_active = :entity_active_1) WHERE ROWNUM <= :ROWNUM_1) WHERE ora_rn > :ora_rn_1 call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 9936 0.46 0.49 0 0 0 0 Execute 9936 0.60 0.59 0 0 0 0 Fetch 9936 329.87 404.00 0 136966922 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 29808 330.94 405.09 0 136966922 0 0 Misses in library cache during parse: 0 Optimizer mode: FIRST_ROWS Parsing user id: 36 (JIVA_DEV) Rows Row Source Operation ------- --------------------------------------------------- 0 VIEW (cr=102 pr=0 pw=0 time=2180 us) 0 COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us) 0 NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us) 0 INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053) 0 TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us) 0 INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044) Rows Execution Plan ------- --------------------------------------------------- 0 SELECT STATEMENT MODE: HINT: FIRST_ROWS 0 VIEW 0 COUNT (STOPKEY) 0 NESTED LOOPS 0 INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE)) 0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE) 0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE)) ******************************************************************************** Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first index of this column is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query? EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.


  • error working with wsdl files in visual studio 2008

    - by deostroll
    Hi. I got a wsdl file in email. At first I didn't know how to use it. I've simply saved the file to my disk. Opened visual studio...added a service reference...provided path to file, and service was discovered. I opened the object browser to see the types and methods that got imported. I figure anything that ends with the name 'Client' is a good place to start using the web service. I've tried using a simple method to get data but it has run into and expception. Need help in resolving it. System.InvalidOperationException was unhandled Message="The XML element 'ListsRequest' from namespace 'http://www.asd.org/MGMMIRAGE.MDM.WS/Customer' references a method and a type. Change the method's message name using WebMethodAttribute or change the type's root element using the XmlRootAttribute." Source="System.Xml" StackTrace: at System.Xml.Serialization.XmlReflectionImporter.ReconcileAccessor(Accessor accessor, NameTable accessors) at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean openModel, XmlMappingAccess access) at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean openModel) at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.XmlSerializerImporter.ImportMembersMapping(XmlName elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean isEncoded, String mappingKey) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, String mappingKey) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.LoadBodyMapping(MessageDescription message, String mappingKey, MessagePartDescriptionCollection& rpcEncodedTypedMessageBodyParts) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.CreateMessageInfo(MessageDescription message, String key) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.EnsureMessageInfos() at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.EnsureMessageInfos() at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.get_Request() at System.ServiceModel.Description.XmlSerializerOperationBehavior.CreateFormatter() at System.ServiceModel.Description.XmlSerializerOperationBehavior.System.ServiceModel.Description.IOperationBehavior.ApplyClientBehavior(OperationDescription description, ClientOperation proxy) at System.ServiceModel.Description.DispatcherBuilder.BindOperations(ContractDescription contract, ClientRuntime proxy, DispatchRuntime dispatch) at System.ServiceModel.Description.DispatcherBuilder.ApplyClientBehavior(ServiceEndpoint serviceEndpoint, ClientRuntime clientRuntime) at System.ServiceModel.Description.DispatcherBuilder.BuildProxyBehavior(ServiceEndpoint serviceEndpoint, BindingParameterCollection& parameters) at System.ServiceModel.Channels.ServiceChannelFactory.BuildChannelFactory(ServiceEndpoint serviceEndpoint) at 
System.ServiceModel.ChannelFactory.CreateFactory() at System.ServiceModel.ChannelFactory.OnOpening() at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.ChannelFactory.EnsureOpened() at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via) at System.ServiceModel.ChannelFactory`1.CreateChannel() at System.ServiceModel.ClientBase`1.CreateChannel() at System.ServiceModel.ClientBase`1.CreateChannelInternal() at System.ServiceModel.ClientBase`1.get_Channel() at MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortTypeClient.MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortType.CountryCodeGet(CountryCodeGetRequest request) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Service References\MDMWebSrvc\Reference.cs:line 2983 at MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortTypeClient.CountryCodeGet(String countryCode) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Service References\MDMWebSrvc\Reference.cs:line 2989 at MDMWSDemo.Program.Main(String[] args) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Program.cs:line 15 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException:


  • Unable to verify body hash for DKIM

    - by Joshua
    I'm writing a C# DKIM validator and have come across a problem that I cannot solve. Right now I am working on calculating the body hash, as described in Section 3.7 Computing the Message Hashes. I am working with emails that I have dumped using a modified version of EdgeTransportAsyncLogging sample in the Exchange 2010 Transport Agent SDK. Instead of converting the emails when saving, it just opens a file based on the MessageID and dumps the raw data to disk. I am able to successfully compute the body hash of the sample email provided in Section A.2 using the following code: SHA256Managed hasher = new SHA256Managed(); ASCIIEncoding asciiEncoding = new ASCIIEncoding(); string rawFullMessage = File.ReadAllText(@"C:\Repositories\Sample-A.2.txt"); string headerDelimiter = "\r\n\r\n"; int headerEnd = rawFullMessage.IndexOf(headerDelimiter); string header = rawFullMessage.Substring(0, headerEnd); string body = rawFullMessage.Substring(headerEnd + headerDelimiter.Length); byte[] bodyBytes = asciiEncoding.GetBytes(body); byte[] bodyHash = hasher.ComputeHash(bodyBytes); string bodyBase64 = Convert.ToBase64String(bodyHash); string expectedBase64 = "2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8="; Console.WriteLine("Expected hash: {1}{0}Computed hash: {2}{0}Are equal: {3}", Environment.NewLine, expectedBase64, bodyBase64, expectedBase64 == bodyBase64); The output from the above code is: Expected hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8= Computed hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8= Are equal: True Now, most emails come across with the c=relaxed/relaxed setting, which requires you to do some work on the body and header before hashing and verifying. And while I was working on it (failing to get it to work) I finally came across a message with c=simple/simple which means that you process the whole body as is minus any empty CRLF at the end of the body. (Really, the rules for Body Canonicalization are quite ... simple.) Here is the real DKIM email with a signature using the simple algorithm (with only unneeded headers cleaned up). Now, using the above code and updating the expectedBase64 hash I get the following results: Expected hash: VnGg12/s7xH3BraeN5LiiN+I2Ul/db5/jZYYgt4wEIw= Computed hash: ISNNtgnFZxmW6iuey/3Qql5u6nflKPTke4sMXWMxNUw= Are equal: False The expected hash is the value from the bh= field of the DKIM-Signature header. Now, the file used in the second test is a direct raw output from the Exchange 2010 Transport Agent. If so inclined, you can view the modified EdgeTransportLogging.txt. At this point, no matter how I modify the second email, changing the start position or number of CRLF at the end of the file I cannot get the files to match. What worries me is that I have been unable to validate any body hash so far (simple or relaxed) and that it may not be feasible to process DKIM through Exchange 2010.
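    For cross-checking a stubborn bh= value, a small reference implementation of simple body canonicalization can help isolate where the C# version diverges. Per RFC 6376 section 3.4.3, the body is taken as-is except that all empty lines at the end are ignored and the body must end with exactly one CRLF. A sketch, assuming the on-disk dump is the raw CRLF-terminated message (the filename is an example):

        # Reference sketch of the c=simple body hash (bh=) computation.
        import base64, hashlib

        def simple_body_hash(raw_message_bytes):
            # Body starts after the first blank line (CRLFCRLF separator).
            _, _, body = raw_message_bytes.partition(b"\r\n\r\n")
            # Drop all trailing CRLFs, then put exactly one back; an empty body
            # canonicalizes to a single CRLF.
            body = body.rstrip(b"\r\n") + b"\r\n"
            return base64.b64encode(hashlib.sha256(body).digest()).decode("ascii")

        with open("message.eml", "rb") as f:        # example dump filename
            print(simple_body_hash(f.read()))

    If a reference value like this matches the signature while the C# code does not, the difference is usually in how the header/body split was made or in line endings being rewritten before hashing.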


  • R Install/Update on Mac OS X (Snow Leopard): where does R put files during install/config?

    - by doug
    In sum, there's a stray preference-like file or two (probably just one) that i can't find. Here's the whole story: I recently attempted to update my R install from 2.10 to 2.11. As i have done before, i installed from source. I know that all of the dependencies are correctly installed and made available to R, because my prior install worked fine. When i upgraded to 2.11, i am unable to install packages (no exception thrown, it just doesn't appear to complete the install and is unresponsive so i have to quit + restart R. Given i install from source, there are any number of points in the process that i could have messed up. What i need to do now is "start over" which requires that i clear out my my prior install. I am having trouble doing that. There is still at least one preference file or something like that i can't find and one of these is causing the problem, so i need to find it and terminate it with extreme prejudice before i do a fresh install. Although i set a number of flags during the install, i have never opted out of the default install locations during the config step. There has to be one or more preference files still in my file structure (and that's also accessible to the new install of R) because after i follow all of the steps below, then do a fresh install, when i start R for the first time, some of my preferences have persisted (e.g., quartz settings, GUI background color, editor selection, etc.). Again, the problem is that i just cannot locate those files. Finally, the problem can't be that during my last install from source, i inadvertently caused a preference file to be sent to an off-spec location--because again, a fresh R install (whether from source or from the OS X binaries) is finding those files Here's what i've done prior to attempting a clean install of R: Removed files from these locations: ~/.RData ~/.RHistory /Applications/R64.app /Applications/R.app /Library/Frameworks/R.framework (i also removed all symlinks from these) Cleared all RAM and disk caches, in particular the directory where i know R caches: ~/Library/Caches/R* (in fact i've cleared this entire directory) Checked for all 'hidden' files in the OS X directories where login/startup files are often placed: /etc/ ~/ In addition, i've checked R-help, and i've also read through the relevant portions of 'R Installation and Administration'--no luck. I've also searched searched my file structure using the various bash utilities, which nearly always solves problems of this sort quite easily, but in this case obviously searching by name or even pattern is more problematic.
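    One quick way to hunt for per-user leftovers that a Framework/Applications wipe does not touch is simply to list the usual suspects. The candidate paths below are common ones rather than a guaranteed-complete set (in particular, the GUI preferences domain is given as commonly reported, so treat it as an assumption), and nothing is deleted:

        # Sketch: report which of the usual per-user R files/dirs exist on macOS.
        import glob, os

        HOME = os.path.expanduser("~")
        candidates = [
            ".Rprofile", ".Renviron", ".Rhistory", ".RData", ".Rapp.history",
            "Library/R",                                   # per-user package library
            "Library/Preferences/org.R-project.R.plist",   # assumed R.app GUI defaults
        ]
        paths = [os.path.join(HOME, c) for c in candidates]
        paths += glob.glob(os.path.join(HOME, "Library", "Caches", "R*"))

        for p in paths:
            print(("FOUND  " if os.path.exists(p) else "       ") + p)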


  • Speed Problem with Wireless Connectivity on Cisco 877w

    - by Carl Crawley
    Having a bit of a weird one with my local LAN setup. I recently installed a Cisco 877W router on my DSL2+ connection and all is working really well.. Upgraded the IOS to 12.4 and my wired clients are streaming connectivity superfast at 1.3mb/s. However, there seems to be an issue with my wireless clients - I can't seem to stream any data across the local wireless connection (LAN) and using the Internet, whilst responsive enough isn't really comparable with the wired connection speed. For example, all devices are connected to an 8 Port Gb switch on FE0 from the Router with a NAS disk and on my wired clients, I can transfer/stream etc absolutely fine - however, transferring a local 700Mb file on my local LAN estimates 7-8 hours to transfer :( The Wireless config is as follows : interface Dot11Radio0 description WIRELESS INTERFACE no ip address ! encryption mode ciphers tkip ! ssid [MySSID] ! speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 channel 2462 station-role root rts threshold 2312 world-mode dot11d country GB indoor bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 spanning-disabled bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding All devices are connected to the Gb Switch which is connected to FE0 with the following: Hardware is Fast Ethernet, address is 0021.a03e.6519 (bia 0021.a03e.6519) Description: Uplink to Switch MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s ARP type: ARPA, ARP Timeout 04:00:00 Last input never, output never, output hang never Last clearing of "show interface" counters never Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 14000 bits/sec, 19 packets/sec 5 minute output rate 167000 bits/sec, 23 packets/sec 177365 packets input, 52089562 bytes, 0 no buffer Received 919 broadcasts, 0 runts, 0 giants, 0 throttles 260 input errors, 260 CRC, 0 frame, 0 overrun, 0 ignored 0 input packets with dribble condition detected 156673 packets output, 106218222 bytes, 0 underruns 0 output errors, 0 collisions, 2 interface resets 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out Not sure why I'm having problems on the wireless and I've reached the end of my Cisco knowledge... Thanks for any pointers! Carl


  • Start a short video when an incoming call is detected, first case using the emulator.

    - by Emanuel
    I want to be able to start a short video on an incoming phone call. The video will loop until the call is answered. I've loaded the video onto the emulator sdcard then created the appropriate level avd with a path to the sdcard.iso file on disk. Since I'm running on a Mac OS x snow leopard I am able to confirm the contents of the sdcard. All testing has be done on the Android emulator. In a separate project TestVideo I created an activity that just launches the video from the sdcard. That works as expected. Then I created another project TestIncoming that creates an activity that creates a PhoneStateListener that overrides the onCallStateChanged(int state, String incomingNumber) method. In the onCallStateChanged() method I check if state == TelephonyManager.CALL_STATE_RINGING. If true I create an Intent that starts the video. I'm actually using the code from the TestVideo project above. Here is the code snippet. PhoneStateListener callStateListener = new PhoneStateListener() { @Override public void onCallStateChanged(int state, String incomingNumber) { if(state == TelelphonyManager.CALL_STATE_RINGING) { Intent launchVideo = new Intent(MyActivity.this, LaunchVideo.class); startActivity(launchVideo); } } }; The PhoneStateListener is added to the TelephonyManager.listen() method like so, telephonyManager.listen(callStateListener, PhoneStateListener.LISTEN_CALL_STATE); Here is the part I'm unclear on, the manifest. What I've tried is the following: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.incomingdemo" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".IncomingVideoDemo" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.ANSWER" /> <category android:name="android.intent.category.DEFAULT" /> </intent-filter> </activity> <activity android:name=".LaunchVideo" android:label="LaunchVideo"> </activity> </application> <uses-sdk android:minSdkVersion="2" /> <uses-permission android:name="android.permission.READ_PHONE_STATE"/> </manifest> I've run the debugger after setting breakpoints in the IncomingVideoDemo activity where the PhoneStateListener is created and none of the breakpoints are hit. Any insights into solving this problem is greatly appreciated. Thanks.


  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid.

    There are many different ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as determining whether this is a new visitor or a returning visitor.

    Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient? A minimal sketch of the schema as described is included below for reference.

    I don't really understand B-trees beyond some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? I considered making site_id the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID are going to vary more than the site ID will: we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chance of the same visitor going to multiple sites that share the same database server is quite small, but in cases where this does happen, I fear it could be quite slow to determine whether this is a new visitor to this site_id or not. The query would be something like:

        select id from sessions where uid = 'value' and site_id = 123 limit 1

    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID.

    Any insight on making this as efficient as possible would be appreciated :)

    Update - this is a MyISAM table with MySQL 5.0. My concerns are with both performance and storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.
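
    For reference, here is a minimal sketch of the table and indexes as described above. The column types and index names are assumptions for illustration only, since the question lists just the column names and states that it is a MyISAM table on MySQL 5.0:

        -- Hypothetical reconstruction of the described sessions table.
        -- Types and index names are illustrative assumptions, not the real schema.
        CREATE TABLE sessions (
            id            INT UNSIGNED NOT NULL AUTO_INCREMENT,
            site_id       INT UNSIGNED NOT NULL,
            unixtime      INT UNSIGNED NOT NULL,
            unixtime_last INT UNSIGNED NOT NULL,
            ip_address    VARCHAR(15)  NOT NULL,
            uid           VARCHAR(64)  NOT NULL,
            PRIMARY KEY (id),                          -- index 1: id
            KEY idx_site_time (site_id, unixtime),     -- index 2: site_id/unixtime
            KEY idx_site_ip   (site_id, ip_address),   -- index 3: site_id/ip_address
            KEY idx_site_uid  (site_id, uid)           -- index 4: site_id/uid
        ) ENGINE=MyISAM;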

    Read the article

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and, for support purposes, we'd like to be able to log the requests and responses into a database in as close as possible to the raw, on-the-wire format (i.e. including HTTP method, path, all headers, and the body). What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response), but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc., and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations?

    Edit: I note that HttpRequest has a SaveAs method which can save the request to disk, but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan.

    Edit 2: Please note that I said the whole request, including method, path, headers, etc. The current responses only look at the body streams, which does not include this information.

    Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see:

        I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire.

    If you know the complete raw data (including headers, url, http method, etc.) simply cannot be retrieved, then that would be useful to know. Similarly, if you know how to get it all in the raw format (yes, I still mean including headers, url, http method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it.

    Please note: Before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea; I'm looking for how it can be done.

    Read the article

  • Using PHP's IMAP library triggers Kaspersky's Antivirus

    - by TMG
    Hello, I just started working with PHP's IMAP library today, and when imap_fetchbody or imap_body is called it triggers my Kaspersky antivirus. The viruses are Trojan.Win32.Agent.dmyq and Trojan.Win32.FraudPack.aoda. I am running this off a local development machine with XAMPP and Kaspersky AV.

    Now, I am sure there are viruses there, since there is spam in the box (who doesn't need some viagra or vicodin these days?). And I know that since the raw body includes attachments and different mime-types, bad stuff can be in the body.

    So my question is: are there any risks in using these libraries? I am assuming that the IMAP functions retrieve the body, cache it to disk/memory, and the AV scanning then sees the data. Is that correct? Are there any known security concerns with this library (I couldn't find any)? Does it clean up cached message parts reliably, or might viral files be left sitting somewhere? Is there a better way to get plain text out of the body than this? Right now I am using the following code (credit to Kevin Steffer):

        function get_mime_type(&$structure) {
            $primary_mime_type = array("TEXT", "MULTIPART", "MESSAGE", "APPLICATION", "AUDIO", "IMAGE", "VIDEO", "OTHER");
            if($structure->subtype) {
                return $primary_mime_type[(int) $structure->type] . '/' . $structure->subtype;
            }
            return "TEXT/PLAIN";
        }

        function get_part($stream, $msg_number, $mime_type, $structure = false, $part_number = false) {
            if(!$structure) {
                $structure = imap_fetchstructure($stream, $msg_number);
            }
            if($structure) {
                if($mime_type == get_mime_type($structure)) {
                    if(!$part_number) {
                        $part_number = "1";
                    }
                    $text = imap_fetchbody($stream, $msg_number, $part_number);
                    if($structure->encoding == 3) {
                        return imap_base64($text);
                    } else if($structure->encoding == 4) {
                        return imap_qprint($text);
                    } else {
                        return $text;
                    }
                }
                if($structure->type == 1) /* multipart */ {
                    while(list($index, $sub_structure) = each($structure->parts)) {
                        if($part_number) {
                            $prefix = $part_number . '.';
                        }
                        $data = get_part($stream, $msg_number, $mime_type, $sub_structure, $prefix . ($index + 1));
                        if($data) {
                            return $data;
                        }
                    } // END OF WHILE
                } // END OF MULTIPART
            } // END OF STRUCTURE
            return false;
        } // END OF FUNCTION

        $connection = imap_open($server, $login, $password);
        $count = imap_num_msg($connection);
        for($i = 1; $i <= $count; $i++) {
            $header = imap_headerinfo($connection, $i);
            $from = $header->fromaddress;
            $to = $header->toaddress;
            $subject = $header->subject;
            $date = $header->date;
            $body = get_part($connection, $i, "TEXT/PLAIN");
        }

    Read the article

  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing the database schema for a new system based on Oracle 11gR1. We have identified a main schema, MYSYS, which will have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values that get changed in close to 50 of those tables, and this has to be done for every row. That means it is possible that, for a single row in MYSYS.T1, there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We might keep the old values of every column entry, with the new values available from T1. A hypothetical sketch of one such table pair is shown after the figure below.

    Our DBA advised against this approach, saying that a separate schema means an extra I/O for every operation. Basically, the AUDIT schema would be used only for some analysis and for entering values (thus SELECT and INSERT). Is it true that "a separate schema means an extra I/O"? I could not find any justification. It seems logical to me, as the AUDIT data should not be tampered with, hence a separate schema.

    We also designed a separate schema, MYSYS_ARC, for archiving some tables from MYSYS. From MYSYS_ARC the tables might be backed up to tape or deleted after sufficient time.

    A few stats: some tables (around 20-30) in the MYSYS schema could grow to around 50M rows. We have asked for a total disk space of 4 TB. The MYSYS_AUDIT schema might hold 10 times as much data as MYSYS, but we won't keep it more than 3 months.

    Questions: Given all this, can you suggest any improvements? Does a separate schema affect disk I/O (one extra I/O for every schema)? Any general suggestions?

    Figure:

        +-------------------+            +-------------------+
        | MYSYS             |            | MYSYS_AUDIT       |
        |                   |            |                   |
        |   1. T1           |            |   1. T1_AUD       |
        |   2. T2           |            |   2. T2_AUD       |
        |   3. T3           |----------->|   3. T3_AUD       |
        |   4. T4           |  (SELECT,  |   4. T4_AUD       |
        |   .               |   INSERT)  |   .               |
        |   .               |            |   .               |
        |   .               |            |   .               |
        | 100. T100         |            |  50. T50_AUD      |
        +-------------------+            +-------------------+
                  |
                  | (INSERT)
                  |
                  v
        +-------------------+
        | MYSYS_ARC         |
        |                   |
        |   1. T1_ARC       |
        |   2. T2_ARC       |
        |   3. T3_ARC       |
        |   4. T4_ARC       |
        |   .               |
        |   .               |
        |   .               |
        | 100. T100_ARC     |
        +-------------------+

    Apart from this, we have two more schemas with read-only rights, but they are mainly for ad hoc purposes and we don't mind the performance on them.
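
    To make the audit layout concrete, here is a hypothetical sketch of one main/audit table pair in the shape the question describes (old and new values per audited column, many audit rows per main row). The table name T1 comes from the question; all columns and types are invented for illustration, since the post shows no DDL:

        -- Illustrative only: columns, types, and the old/new pairing are assumptions.
        CREATE TABLE mysys.t1 (
            id         NUMBER        PRIMARY KEY,
            status     VARCHAR2(20),
            amount     NUMBER(12,2),
            updated_at DATE
        );

        CREATE TABLE mysys_audit.t1_aud (
            audit_id   NUMBER        PRIMARY KEY,
            t1_id      NUMBER        NOT NULL,   -- the MYSYS.T1 row that changed
            changed_at DATE          NOT NULL,
            old_status VARCHAR2(20),             -- old/new value pair per audited column
            new_status VARCHAR2(20),
            old_amount NUMBER(12,2),
            new_amount NUMBER(12,2)
        );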

    Read the article

  • Why does using ASP.NET OutputCache keep returning a 200 OK, not a 304 Not Modified?

    - by Pure.Krome
    Hi folks, I have a simple aspx page. Here's the top of it:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Foo.aspx.cs" Inherits="Foo" %>
        <%@ OutputCache Duration="3600" VaryByParam="none" Location="Any" %>

    Now, every time I hit the page in Firefox (either hitting F5 or pressing enter in the URL bar) I keep getting a 200 OK response. Here's a sample reply from Firebug:

    Request headers:

        GET /sitemap.xml HTTP/1.1
        Host: localhost.foo.com.au
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-au,en-gb;q=0.7,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cookie: <snipped>
        If-Modified-Since: Tue, 01 Jun 2010 07:35:17 GMT
        If-None-Match: ""
        Cache-Control: max-age=0

    Response headers:

        HTTP/1.1 200 OK
        Cache-Control: public
        Content-Type: text/xml; charset=utf-8
        Expires: Tue, 01 Jun 2010 08:35:17 GMT
        Last-Modified: Tue, 01 Jun 2010 07:35:17 GMT
        Etag: ""
        Server: Microsoft-IIS/7.5
        X-Powered-By: UrlRewriter.NET 2.0.0
        X-AspNet-Version: 4.0.30319
        Date: Tue, 01 Jun 2010 07:35:20 GMT
        Content-Length: 775

    Firebug Cache tab:

        Last Modified   Tue Jun 01 2010 17:35:20 GMT+1000 (AUS Eastern Standard Time)
        Last Fetched    Tue Jun 01 2010 17:35:20 GMT+1000 (AUS Eastern Standard Time)
        Expires         Tue Jun 01 2010 18:35:17 GMT+1000 (AUS Eastern Standard Time)
        Data Size       775
        Fetch Count     105
        Device          disk

    Now, if I try it in Fiddler using the Request Builder (and no extra data) I also keep getting the same 200 OK reply.

    Request headers:

        GET http://localhost.foo.com.au/sitemap.xml HTTP/1.1
        User-Agent: Fiddler
        Host: foo.com.au

    Response headers:

        HTTP/1.1 200 OK
        Cache-Control: public
        Content-Type: text/xml; charset=utf-8
        Expires: Tue, 01 Jun 2010 07:58:00 GMT
        Last-Modified: Tue, 01 Jun 2010 06:58:00 GMT
        ETag: ""
        Server: Microsoft-IIS/7.5
        X-Powered-By: UrlRewriter.NET 2.0.0
        X-AspNet-Version: 4.0.30319
        Date: Tue, 01 Jun 2010 06:59:16 GMT
        Content-Length: 775

    It looks like it's asking to cache it, but it's not :( The server is localhost IIS 7.5 on Win7 (as listed in the response data). Can anyone see what I'm doing wrong?

    Read the article

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db:

        =# \d nodes
                    Table "public.nodes"
         Column |          Type          | Modifiers
        --------+------------------------+-----------
         id     | integer                | not null
         title  | character varying(256) |
         score  | double precision       |
        Indexes:
            "nodes_pkey" PRIMARY KEY, btree (id)

    I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles with the highest score that match his input. So I used this query (here searching for all titles starting with "s"):

        =# explain analyze select title,score from nodes where title ilike 's%' order by score desc;
                                                               QUERY PLAN
        -----------------------------------------------------------------------------------------------------------------------
         Sort  (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1)
           Sort Key: score
           Sort Method:  external merge  Disk: 5712kB
           ->  Seq Scan on nodes  (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1)
                 Filter: ((title)::text ~~* 's%'::text)
         Total runtime: 5260.791 ms
        (6 rows)

    This was much too slow to use for autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve it with a special index:

        =# create index title_idx on nodes using btree(lower(title) text_pattern_ops);
        =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10;
                                                                          QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------------
         Limit  (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1)
           ->  Sort  (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1)
                 Sort Key: score
                 Sort Method:  top-N heapsort  Memory: 17kB
                 ->  Bitmap Heap Scan on nodes  (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1)
                       Filter: (lower((title)::text) ~~ 's%'::text)
                       ->  Bitmap Index Scan on title_idx  (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1)
                             Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text))
         Total runtime: 1325.085 ms
        (9 rows)

    So this gave me a speedup of a factor of 4. But can it be improved further? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting decent performance with PostgreSQL in that case, too? Or should I try a different solution (Lucene? Sphinx?) for implementing my autocomplete feature?

    Read the article

< Previous Page | 351 352 353 354 355 356 357 358 359 360 361 362  | Next Page >