Search Results

Search found 1807 results on 73 pages for 'levels'.

Page 49/73 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
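    For reference, adding data files to a filegroup is a plain ALTER DATABASE operation. A minimal sketch, where the database, logical file names, paths and sizes are all hypothetical:

        -- sketch only: names, paths and sizes are assumptions, not from the question
        ALTER DATABASE MyDb
        ADD FILE
            (NAME = MyDb_Data2, FILENAME = 'E:\Data\MyDb_Data2.ndf', SIZE = 10GB),
            (NAME = MyDb_Data3, FILENAME = 'F:\Data\MyDb_Data3.ndf', SIZE = 10GB)
        TO FILEGROUP [PRIMARY];

    SQL Server stripes new allocations across the files of a filegroup in proportion to their free space, which is where any extra parallelism would come from.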

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have addressed my current memory footprint issues by optimizing the apps, adding more RAM, or augmenting with swap. But the problem is that I doubt there's much memory optimization I'd milk from the Django apps (the stack being open source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer options to use swap! So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory, so that I don't have to request a VPS restart (at that point, I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or total system memory usage) exceeds a certain critical threshold - for example, when free RAM falls to, say, 10% - which I've noticed occurs after the VPS has been up for long and traffic suddenly spikes on some of the heavy apps (most are just staging apps anyway). At that point I wish to kill/restart the offending process(es) - most likely Apache. Doing exactly that manually has restored sane memory usage levels in these situations - a hint that possibly one or more of the Django apps has a memory leak. In brief: monitor overall system RAM usage; when free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or, more simply, since my current log analysis (using linux-dash) suggests Apache is often the offender, just kill/restart it; rinse and repeat...
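    A minimal watchdog along these lines can be a cron-driven shell script. A sketch only: the 10% threshold comes from the question, but counting buffers/cache as reclaimable and the apache2 service name are assumptions to adjust. Dedicated tools such as monit can express the same rule declaratively.

        #!/bin/bash
        # low-memory watchdog (sketch): restart Apache when free RAM drops below a threshold
        THRESHOLD=10
        # treat buffers/cache as reclaimable: (free + buffers + cached) / total
        FREE_PCT=$(free | awk '/^Mem:/ {printf "%d", ($4 + $6 + $7) / $2 * 100}')
        if [ "$FREE_PCT" -lt "$THRESHOLD" ]; then
            logger "memwatch: only ${FREE_PCT}% RAM free, restarting apache2"
            /etc/init.d/apache2 restart
        fi

    Run it from root's crontab (e.g. * * * * * /usr/local/sbin/memwatch.sh) so it fires once a minute.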

    Read the article

  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (especially those with giant profiles) all log in nearly simultaneously. I ran across Gluster, and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS. My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions, and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing. I got the impression that using Gluster would allow for data to be redundant across multiple servers and would speed up access to the files under heavy load, while allowing us to dynamically boost storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe I'm wrong in my understanding of it, though. Does anyone else use it, or care to share how feasible this is?

    Read the article

  • Set up WLAN in 3-level house

    - by Balint Erdi
    I'm having a hard time setting up the network in our house. It has three levels (basement, ground floor, first level). The WLAN is set up by an ASUS RT-N12 router which provides perfect coverage for the ground floor and the first level. However, I set up my "home office" in the basement, where the signal barely arrived. So I purchased a TP-Link TL-WA901ND (300 Mbps) access point, which I set up in the other corner of the ground floor to extend the ASUS router's range, using the AP's Repeater mode. The distance between my computer and the TP-Link AP is 6-7 meters. There is a staircase going down from the ground floor to the basement, so there are no solid walls between the computer and the AP. This setup mostly works (I am writing this from the basement), but it is not reliable (the signal strength sometimes drops to ~40% of the max), so I wonder if I am doing it correctly or if there is a better way. Screenshots of the router's and the AP's dashboard screens follow. Any comments on what I am doing wrong or hints for improvement are appreciated. Thank you. UPDATE: I tried one more thing, setting up the TP-LINK AP in Access Point mode. That way, I can make it use a different SSID. I enabled WDS/Bridge so that it extends the range of the ASUS router (see screenshot). That does not work either: if I connect to the network set up by the TP-LINK device (PELSTER-2), I can't reach the external network (the Internet). It seems the problem always comes back to this - the TP-LINK does not have access to the external network, whatever its mode of operation.

    Read the article

  • Need to Remove Exchange 2003 Server That Crashed During Transition to 2010

    - by ThaKidd
    As the title states, we were running an Exchange 2003 server that we knew was going down soon, so we purchased a second server and installed Exchange 2010 into the AD. We managed to move all of the mailboxes off of 2003 and also managed to get the Offline Address Book set up on 2010. At this point the 2003 server bit the dust and will no longer boot. Therefore we were unable to properly uninstall Exchange and remove the last 2003 server, so it still exists in AD. As far as the clients are concerned, everything is working properly. However, when I run the Microsoft Exchange Profile Analyzer, I still see the old server and its Administrative Group. I am going to guess that since the old server is still showing up in AD, I will not be able to raise the Exchange or AD functional levels (the 2003 server was also the only AD DC). I have since forced the 2003 DC out of AD, so that is no longer an issue. Old setup: Windows 2003 Server Enterprise and Exchange 2003 Standard. New setup: Windows 2010 Server Enterprise and Exchange 2010 Standard. Two questions: How do you go about manually forcing the 2003 server and its administrative group out of AD? When that is finished, where do you raise the Exchange mode (can't find this for the life of me)?

    Read the article

  • Rsyslogd not listening on port

    - by amorfis
    I installed rsyslogd on an Ubuntu server and started it, and everything looks fine, but the port the server should listen on is not opened.

        ubuntu@node7:~$ sudo service rsyslog restart
        rsyslog stop/waiting
        rsyslog start/running, process 14114

    Netstat shows it is not listening:

        ubuntu@node7:~$ netstat -tlan
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address        Foreign Address      State
        tcp        0      0 0.0.0.0:22           0.0.0.0:*            LISTEN
        tcp        0    320 172.22.0.17:22       10.8.8.38:61335      ESTABLISHED
        tcp6       0      0 :::22                :::*                 LISTEN
        tcp6       0      0 :::2776              :::*                 LISTEN
        tcp6       0      0 :::2777              :::*                 LISTEN
        tcp6       0      0 172.22.0.17:2777     172.22.0.11:56554    ESTABLISHED
        tcp6       0      0 172.22.0.17:2776     172.22.0.11:39780    ESTABLISHED

    This is what /etc/rsyslog.conf looks like (most comments omitted):

        ubuntu@node7:~$ cat /etc/rsyslog.conf
        #################
        #### MODULES ####
        #################
        $ModLoad imuxsock # provides support for local system logging
        $ModLoad imklog   # provides kernel logging support (previously done by rklogd)
        $ModLoad imtcp
        $InputTCPServerRun 514

        ###########################
        #### GLOBAL DIRECTIVES ####
        ###########################
        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
        $RepeatedMsgReduction on
        $WorkDirectory /var/spool/rsyslog
        $FileOwner syslog
        $FileGroup adm
        $FileCreateMode 0640
        $DirCreateMode 0755
        $Umask 0022
        $PrivDropToUser syslog
        $PrivDropToGroup adm
        $IncludeConfig /etc/rsyslog.d/*.conf

    In /etc/rsyslog.d/35-server-per-host.conf I have the following lines, and I suspect they could be the cause. What do they mean?

        # Stop processing of all non-local messages. You can process remote messages
        # on levels less than 35.
        :fromhost-ip,!isequal,"127.0.0.1" ~

    And if they are the cause, how could I change them so the server listens, receives and logs messages? UPDATE: I commented out the suspected line, but it's still not listening on port 514.
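    For what it's worth, that property-based filter reads: for any message whose fromhost-ip property is not 127.0.0.1 - i.e. anything received over the network - apply the discard action (~). That silently drops remote messages, but it would not stop rsyslog from listening. A sketch of how remote messages could be written to disk before the discard rule fires; the template and file path are assumptions, not from the question:

        # sketch: log remote hosts to per-host files, then discard them
        $template RemoteLog,"/var/log/remote/%HOSTNAME%.log"
        :fromhost-ip, !isequal, "127.0.0.1" ?RemoteLog
        :fromhost-ip, !isequal, "127.0.0.1" ~

    Rule order matters in rsyslog: anything placed after the ~ line never sees remote messages.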

    Read the article

  • Access 2007: How can I make this EXPRESSION less complex?

    - by Mike
    Access is telling me that my new expression is too complex. It used to work when we had 10 service levels, but now we have 19! Great! My expression checks the COST of our services in the [PriceCharged] field and then assigns the appropriate HOURS [ServiceLevel], which I use to work out how much REVENUE each colleague has made when working for a client. The [EstimatedTime] field stores the actual hours each colleague has worked:

        [EstimatedTime]/[ServiceLevel]*[PriceCharged]

    Below is the breakdown of my COST to HOURS expression. I've put the terms on different lines to make them easier to read - please do not be put off by the length of this post, it's all the same info in the end. Many thanks, Mike

        ServiceLevel: IIf([pricecharged]=100(COST),6(HOURS),
        IIf([pricecharged]=200 Or [pricecharged]=210,12.5,
        IIf([pricecharged]=300,19,
        IIf([pricecharged]=400 Or [pricecharged]=410,25,
        IIf([pricecharged]=500,31,
        IIf([pricecharged]=600,37.5,
        IIf([pricecharged]=700,43,
        IIf([pricecharged]=800 Or [pricecharged]=810,50,
        IIf([pricecharged]=900,56,
        IIf([pricecharged]=1000,62.5,
        IIf([pricecharged]=1100,69,
        IIf([pricecharged]=1200 Or [pricecharged]=1210,75,
        IIf([pricecharged]=1300 Or [pricecharged]=1310,100,
        IIf([pricecharged]=1400,125,
        IIf([pricecharged]=1500,150,
        IIf([pricecharged]=1600,175,
        IIf([pricecharged]=1700,200,
        IIf([pricecharged]=1800,225,
        IIf([pricecharged]=1900,250,0)))))))))))))))))))
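    The usual way out of a nested-IIf limit is to move the mapping into data. A sketch, assuming hypothetical names: MyTable holds the charges, and a new lookup table tblServiceLevels maps PriceCharged to Hours, one row per service level:

        -- sketch: join a lookup table instead of nesting IIf()
        SELECT t.*,
               t.[EstimatedTime] / s.[Hours] * t.[PriceCharged] AS Revenue
        FROM MyTable AS t
        INNER JOIN tblServiceLevels AS s
            ON t.[PriceCharged] = s.[PriceCharged];

    Adding a 20th service level then becomes a new row in the table rather than another IIf.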

    Read the article

  • Ubuntu Device-mapper seems to be invincible!

    - by Andrew Bolster
    I'm working on a hopefully unrelated question and I've got to a strange situation. First: I know very little about the very low-level hardware/kernel storage driver magic, so I'm hoping a) someone can help and b) someone can explain it to me better. I've been trying a dozen different configurations of my 2x500GB SATA drives over the past few hours, involving switching between AHCI/IDE/RAID in my BIOS; after each attempt I've reset the BIOS option, booted into a live CD, and deleted the partitions and rewritten the partition tables left on the drives. Now, however, I've been left with a /dev/mapper/nvidia_XXXXXXX1 that seems to be impossible to kill! It's the only 'partition' that I see in the Ubuntu installer (though I can see the others in parted), yet it is only the size of one of the drives, and I know I did not set any RAID levels other than RAID0. Anyone have any ideas how I can kill this and get back to just two independent IDE drives? Or can anyone convince me of a reason to go the AHCI route? Many thanks in advance.
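    That mapping usually comes from leftover BIOS fakeRAID metadata that dmraid keeps re-detecting at boot. A sketch of one way to clear it - the device names are taken from the question, and the metadata erase is destructive, so double-check before running:

        sudo dmraid -r                        # list block devices carrying RAID signatures
        sudo dmsetup remove nvidia_XXXXXXX1   # tear down the mapping for this session
        sudo dmraid -rE /dev/sda              # erase the fakeRAID metadata (repeat for /dev/sdb)

    Once the metadata is gone, the installer should see two plain disks again.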

    Read the article

  • Help with memory usage issues on VPS

    - by Niall Collins
    Hi there, I am running a VPS server with 6 .NET web sites/applications on it. I am having performance issues with the server - mainly, it is running out of memory. I contacted the company that leases the server to me and they told me it was because I also had SQL Server 2008 Express running on the server, so I went ahead and removed it. However, I still seem to be having issues. For example, at present, looking at resource consumption, the virtual memory is: ID: vprvmem; Current Use: 894,328,832 bytes; Limit: 1,073,741,824 bytes. This means usage of ~80%. Is there any way I can check exactly which applications, web sites or software are taking up most of the server's memory, so I can look at rectifying it? I feel that 80% is much too high to allow any contingency for a spike in traffic. I have had extra memory resources added to the box recently, but I would prefer finding the source of the problem rather than throwing extra memory at it. Maybe these levels are correct and all is running OK, but I would like to investigate to make sure. My knowledge of hardware is limited, as I mostly deal in the spectrum of software, so any tools out there that can help me, or any pertinent advice, would be appreciated.
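    Since this is a Windows box, one quick built-in check is tasklist with a memory filter; the 102,400 KB (~100 MB) cutoff below is just an illustrative assumption:

        rem sketch: list processes using more than ~100 MB of memory
        tasklist /fi "memusage gt 102400"

    Process Explorer or Performance Monitor counters give a more detailed per-process breakdown over time.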

    Read the article

  • Speakers silent, headphones work in Ubuntu 9.04

    - by CarlF
    I'm running Ubuntu 9.04. Worked fine for months, then I rebooted yesterday after weeks of continuous operation. Now audio won't play through the speakers. The USB headset works fine, but the Conexant audio (CX20549) does not. Weirdly, it thinks it's playing. pavumeter shows appropriate levels, volume looks OK in alsamixer, but no sound. I did find this page: http://www.eugeneteplitsky.com/fixing-silent-pulseaudio-in-ubuntu-9-04/ Unfortunately the advice there doesn't help me. For one thing, the syntax for the alsa-base.conf file is apparently not actually documented anywhere. For another, my chipset isn't listed in the kernel.org docs he links to! EDIT: would upgrading to 9.10 help? Is there a major change in the audio subsystem between 9.04 and 9.10? Any suggestions? EDIT 2: This is stranger than I thought. Sound works normally in Xine, but is silent in Audacity, VLC and mplayer. What the?

    Read the article

  • Some free cloud solution to enhance your business

    - by Saif Bechan
    I am co-owner of a small internet business. I am in charge of IT, and I try to get things done at as low a cost as possible. When investing in servers, resources and overall business costs, your project can soon turn into a financial disaster. Cloud solutions can help you solve some financial problems, scalability problems, and overall performance problems with your server or web application. Recently I moved the whole internal/external communication (email, calendar, documents) of my business to the cloud. I did this by using the free version of Google Apps. This works great and is a big advantage on multiple levels: I no longer have to fight spam on my system, fewer resources are used on my system, and switching servers will go a lot easier. Questions: Can you name some cloud solutions that you have used, or some you would recommend? They can provide financial benefits, organizational benefits, or performance benefits - it doesn't matter, as long as they help you spread the load of your business.

    Read the article

  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)
        data1 = read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla) + s(sst) + s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance(rp, "auc")@y.values[[1]]
        auc
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)

    But when I ran the script, the result was:

        > rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) :
          Format of predictions is invalid.
        > auc <- performance(rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found
        > auc
        function (x, min = 0, max = 1)
        {
            if (any(class(x) == "roc")) {
                if (min != 0 || max != 1) {
                    x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
                    x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:length(x$fpr)) {
                    ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1])
                }
            }
            else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) {
                if (min != 0 || max != 1) {
                    x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
                    x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:(length(x$cutoffs))) {
                    ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1])
                }
            }
            return(as.numeric(ans))
        }
        <bytecode: 0x03012f10>
        <environment: namespace:AUC>
        > roc <- performance(rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found
        > plot(roc)
        Error in levels(labels) : argument "labels" is missing, with no default

    Can anybody help me to solve this problem? Thank you in advance.
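    A possible fix, offered as a sketch rather than a verified answer: predict() on a gam object returns a named array, which ROCR's prediction() can reject, and loading ROCR and AUC together invites name clashes (assigning to auc also shadows AUC's auc() function, which is why printing auc dumped that function source). Coercing to a plain vector and qualifying the calls may help:

        # sketch: coerce predictions to a plain numeric vector and qualify the ROCR calls
        gampred <- as.numeric(predict(GAM, type = "response"))
        rp <- ROCR::prediction(gampred, data1$tuna)
        auc_value <- ROCR::performance(rp, "auc")@y.values[[1]]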

    Read the article

  • Loss of network connectivity when playing video on Optoma HD180 projector

    - by Jeff Fohl
    Hi folks - new to Super User, so I hope this question fits in with the guidelines. I am having a very strange problem and am at a loss as to how to continue troubleshooting it. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels. This is my setup: I have a Dell H2C 730x running Windows 7 64-bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6 Mbps. The problem occurs when I stream video (or even just browse the web) on the Optoma projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time, and I can move the mouse around on the projector without triggering the problem. I tried pinging my router while playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projector, pinging the router gives me response times in the hundreds of milliseconds, or times out completely. So it is clearly something local to my machine - and not some sort of throttling occurring down the line. I would think it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas?

    Read the article

  • Excel 2007 pivot table does not aggregate properly

    - by Patrick
    I am using an Excel pivot table to summarize some data and just found a problem. The problem deals with how aggregate values are calculated. Let's say I have a table of data with three columns: Name, Date, Value, and I create a pivot table with Name and then Date as Row Labels and Value as the aggregate value, i.e. Average. The pivot table will look something like this:

        +John          .3450
          5/14/2010    1.234
          5/15/2010    3.450
          5/16/2010    -3.25

    What I think should happen here is that the values for each date are averaged, and then those daily averages are averaged to produce the value in the same row as the name, John. But that is not what it does. It takes the average for each date, which it shows across from the date, but then, instead of taking the average of those numbers, it goes back to the raw data and computes the average of all of John's values. It should show the average of the daily averages to correspond with the tree hierarchy, but instead it just shows me the average of all of John's values. It essentially aggregates at only one level, while visually creating sub-levels that it is not using. Does anyone know how to change this, or understand by what logic this makes sense? Why would I create any sub-groupings if I cannot compute aggregates on them?

    Read the article

  • RDP Connection to Windows 7 stays really slow

    - by Pavlo
    I have an issue with connecting to Windows 7 via RDP. I can open an RDP session, but regardless of any settings, the response times are really long. This is particularly the case when opening a web page in a browser; I've tried IE, Firefox and Google Chrome. I also use an RDP connection to a Windows 2008 Server from the same client machine, and the speed is perfectly normal with all features turned on. We have Gigabit Ethernet here, so I don't think it can be the client's fault. As for the Windows 7 machine, I've tried shutting all the graphic features off and turning the color depth down to 256 colors. Result - the same. If I work locally on the machine, I can't see any lag. What else have I tried: using the old RDP 5 client from Microsoft; setting the network autotuning level as seen here. Do you have any ideas? Thanks in advance! UPDATE: The problem seems to be with rendering window contents. All the window borders and panes are rendered pretty quickly, but the content shows up very slowly. Also, mouse movements are recognised by the Win 7 box only after a delay. Are there some hidden settings in RDP where one could turn some advanced features off or turn some caching on? I use bitmap caching, but this apparently doesn't help.

    Read the article

  • Where is the bare cygwin package list located and how do I manipulate it?

    - by matnagel
    Where is the bare Cygwin package list located, and how do I manipulate it programmatically, from a shell, or with some other method than the GUI? I know the GUI (setup.exe), and I'd love to go one or more levels deeper. I can retrieve a list of selected/installed packages (http://serverfault.com/questions/83456/cygwin-package-management), but how do I write it back, or onto a different machine? What I have in mind: when I install a new Windows, I would like to start with my package list in text form and apply or inject it somehow into the new system. Where is it? In the registry? In a binary file? In a local database? Or has anybody done this - is there a tool, or a tutorial? The essence of what I want is to manipulate the selected package list with something other than the GUI. It is OK for me to use the GUI for the setup process, so I could imagine manipulating the package list and then running setup.exe and just clicking through it. Note: I do not want to manipulate the list of already installed packages, but of packages that "should be installed". But if this is not possible, maybe there is some workaround, e.g. add an outdated version as installed and the installer will then install the new version.
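    One sketch of a round trip, assuming Cygwin's setup.exe still accepts its documented unattended flags (-q for quiet mode, -P for a package list): export the installed package names as text on the old machine, then feed them back on the new one.

        # on the old machine, from a Cygwin shell: dump installed package names
        cygcheck -c -d | sed -e '1,2d' -e 's/ .*$//' > packages.txt

        # on the new machine, from cmd: install a list unattended
        setup.exe -q -P bash,vim,wget

    The -P argument is a comma-separated list, so the saved names would need joining; the package names shown here are placeholders.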

    Read the article

  • LVM and cloning HDs

    - by jcea
    Using Linux, I have several backup levels. One of them is a periodic sector-by-sector copy (using dd) of my laptop hard disk to an external USB disk. Yes, I have other backups too, like remote rsync. This approach (the disk dd) is OK when cloning a HDD with no LVM volumes, since I can plug in the external disk anytime and mount the partitions simply by mounting /dev/sdb* instead of /dev/sda*. Trivial and handy. Today I moved ALL of my hard disk (including /boot) to LVM. Everything works fine. I will stress it for a couple of days, and then I will do a sector-by-sector copy to my external hard disk. Now I have a problem, I guess. If in the future I plug in the external USB HDD to recover any file, the OS will detect a duplicate LVM configuration, with the same name and the same UUID. Even after a vgrename (and which LVM would be renamed - the internal HDD's or the external HDD's?), the cloned UUID will not change. Is there any command to change the name and UUID? Ideally I would clone the HDD and then change the LVM group name and its UUID, but I don't know how to do it. Another related issue: in the past I have booted my laptop from the external disk, using the BIOS boot menu and changing GRUB entries manually to boot from /dev/sdb instead of /dev/sda. But now my current GRUB configuration boots directly from an LVM logical volume, something like set root='(LVM-root)' in my grub.cfg. So... what is going to happen with duplicated volumes? Any suggestion? I guess I could repartition my external hard disk and change my backup strategy from dd to rsync, but this disk has Windows installed too, and I really would like to have a physical "real" copy.
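    LVM2 ships a helper for exactly this clone situation. A sketch, assuming the clone appears as /dev/sdb, its PV lives on /dev/sdb1, and "backupvg" is an arbitrary new name - adjust all three:

        # rename the cloned VG and regenerate its UUIDs so both disks can coexist
        sudo vgimportclone --basevgname backupvg /dev/sdb1

    After that, the external copy shows up as backupvg with fresh UUIDs, while the internal volume group keeps its original name, so nothing on the laptop has to change.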

    Read the article

  • Implications of disabling the AMD Phenom's TLB patch?

    - by DMA57361
    I'm currently running an AMD Phenom X4 9600 processor (yeah, it's aging a bit, but other recent problems mean it's not getting upgraded in the immediate future), which happens to be one of the chips that suffer from the TLB erratum. I recall that the first time I played with disabling the TLB patch (probably over a year ago, while playing a game that had such a severe performance problem that it was almost unplayable unless the patch was disabled), I had at least one BSOD, but I can't remember them being particularly frequent. However, because disabling it increased instability, I stopped disabling the patch once I was done with the game. Now, after some recent hardware changes, I was experiencing much worse performance than expected from the new hardware under some circumstances, and the TLB patch jumped to mind - after testing, I found that disabling it would improve performance to expected levels. I'm now wondering whether it's worthwhile always having the patch disabled to avoid any potential slowdowns cropping up in the future, or if that is too dangerous. Everything I read states that the bug, when not patched, can cause a system lock-up in "rare circumstances". So, with the TLB patch disabled: How frequently should system lock-ups be expected? Do we know what circumstances trigger the lock-ups? (Don't worry too much about being highly technical, but essentially I wonder if the chip is more vulnerable under heavy load, heavy memory usage, etc.) Are there any secondary problems I should be aware of? (Don't include things that are characteristic of all lock-ups, please.)

    Read the article

  • Windows XP: How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows installation's Documents and Settings folder which only contains my original user and, within it, 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get this error: When I try to delete Favorites, I get this error: I ran this in a cmd shell:

        attrib *.* -r -a -s -h /s

    ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files were in use, but both always say: No Files Locked. Update #1: I was able to rename the directory, which now gives me this warning before (trying to) delete: If I press Yes (or Yes to All) then I get this error: Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these:

        Deleting an index entry from index $0 of file 25.

    ...followed by:

        Deleting index entry cookies in index $I30 of file 37576.

    ...but I still get the first error dialog above when trying to delete. Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here:

        C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\

    Inside each of those directories were files with names such as:

        2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx

    I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file and directory names were extremely long: original directory = 194 characters, filenames = 100+ characters. Together the length exceeds the 255-character limit, which is bad and would explain the error message I posted in Update #1. Partial solution: rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution, because this (empty) directory is still undeletable:

        C:\1\2\Favorites\Wien\What To Do..

    I'm guessing that because of the ".." at the end, Windows (Explorer and cmd) can't deal with it. Here is what Explorer properties shows: Any ideas?
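    One sketch worth trying on the leftover "What To Do.." directory: the \\?\ path prefix tells Windows to skip Win32 path normalization, so names with trailing dots stay intact long enough to be deleted. The path below is taken from the question; /q suppresses the confirmation prompt.

        rd /s /q "\\?\C:\1\2\Favorites\Wien\What To Do.."

    The same prefix also lifts the usual 255-character path limit, which is why it can reach over-long paths like the .frx files without renaming anything first.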

    Read the article

  • SSL wildcard certificates and trailing 'www'

    - by user173326
    I've got a wildcard SSL certificate for *.mydomain.com. I'm using nginx, redirecting all traffic from http to https, and also rewriting URLs to strip the www prefix (if there is one). So it handles:

        1) http://subdomain.mydomain.com      ---> https://subdomain.mydomain.com
        2) http://www.subdomain.mydomain.com  ---> https://subdomain.mydomain.com
        3) https://www.subdomain.mydomain.com ---> https://subdomain.mydomain.com
        4) https://subdomain.mydomain.com     ---> https://subdomain.mydomain.com

    However, since my cert is for *.mydomain.com, case 3 gets an SSL error in Chrome ('This is probably not the site that you are looking for!'), though if you click through, it gets redirected and all is well. I understand why: the initial connection is for https with a www (two levels of subdomains), which doesn't match what is on the wildcard certificate. I thought a solution would be to get an additional cert for *.*.mydomain.com to cover www.*.mydomain.com, but it seems that won't work. I spoke to agents from Namecheap and Comodo, and both said *.*.mydomain.com was not possible. I also came across this: https://support.quovadisglobal.com/KB/a60/will-ssl-work-with-multilevel-wildcards.aspx Is there a solution to this? To be able to cover www.*.mydomain.com?
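    For what it's worth, the redirect itself can't cure case 3: the certificate is presented during the TLS handshake, before nginx ever sees the request, so the browser warns regardless of what the server block returns. Covering the www names needs a certificate that actually lists them (e.g. SAN entries per subdomain). The redirect half, though, can be written once with a named capture - a sketch with assumed certificate paths:

        # sketch: strip the www. prefix after TLS terminates (cert paths are assumptions)
        server {
            listen 443 ssl;
            server_name ~^www\.(?<sub>.+)\.mydomain\.com$;
            ssl_certificate     /etc/nginx/ssl/wildcard.crt;
            ssl_certificate_key /etc/nginx/ssl/wildcard.key;
            return 301 https://$sub.mydomain.com$request_uri;
        }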

    Read the article

  • MSSQL Timing out, a couple of questions...

    - by user29000
    Hi, about once a week my MSSQL server times out, or rather the machine runs out of RAM. This morning it reached 3.9 GB of the available 4, with MSSQL taking up 2.5 GB. I'm concerned that I've not configured SQL to release memory as it should, so I ran sp_who2 while the timeouts were occurring to see what processes were running. If I could post the CSV data file I would; however, there were 85 processes in total, mostly related to the Full Text service. FT Gatherer - about 35 of these running under the 'sa' account against the master database, with a status of either sleeping or background; many were dependent on other processes. Is that normal? MySite database - there were only 5 processes for the one active site/database, and all were either sleeping or suspended - but their lastBatch dates were set to 1/12/2020. Is that normal? The database is only about 20 MB in size and the traffic levels are very low, so I'm thinking of limiting the amount of RAM SQL has access to (from unlimited to maybe 2 GB). Any thoughts/advice would be appreciated. Many thanks, Ben

    Read the article

  • RAID6 mdraid -> LVM -> EXT4 root with GRUB2?

    - by Rotonen
    2012-03-31 Debian Wheezy daily build in VirtualBox 4.1.2, 6 disk devices. My steps to reproduce so far:

        1. Set up one partition per disk, using the entire disk, as a physical volume for RAID
        2. Set up a single RAID6 mdraid array out of all of those
        3. Use the resulting md0 as the only physical volume for the volume group
        4. Set up logical volumes, filesystems and mount points as you wish
        5. Install the system

    Both / and /boot will be in this stack. I've chosen EXT4 as my filesystem for this setup. I can get as far as the GRUB2 rescue console, which can see the mdraid array, the volume group and the LVM logical volumes (all named appropriately at all levels) on it, but I cannot ls the filesystem contents of any of those and I cannot boot from them. As far as I can see from the documentation, the version of GRUB2 shipped there should handle all of this gracefully: http://packages.debian.org/wheezy/grub-pc (1.99-17 at the time of writing). It is loading the ext2, raid, raid6rec, dosmbr (this one is in the list of modules once per disk) and lvm modules according to the generated grub.cfg file. It also defines the list of modules to be loaded twice in the generated grub.cfg file, and according to quick Googling around this seems to be the norm and OK for GRUB2. How do I get further, i.e. get GRUB2 to actually be able to read the contents of the filesystems and boot the system? What am I wrong about in my assumptions of functionality here? EDIT (2012-04-01): My generated grub.cfg: http://pastie.org/3708436 It seems it first makes my /usr logical volume the root, and that might be the source of the failure? A grub-mkconfig bug? Or is it supposed to get access to stuff from /usr before / and /boot? /boot is on / for me - there is no separate boot logical volume.
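    If the installed image is otherwise sound, one common recovery path is reinstalling GRUB2 from the live CD via a chroot. A sketch only - the volume group and LV names (vg0/root) and the target disk are assumptions:

        sudo mdadm --assemble --scan          # bring up the RAID6 array
        sudo vgchange -ay                     # activate the LVM volumes
        sudo mount /dev/vg0/root /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt update-grub

    With /boot inside LVM-on-RAID6, grub-install has to embed the raid6rec and lvm modules into the core image, which this regenerates.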

    Read the article

  • Five stars of open data - example and review

    - by Joe
    (there may be a better-suited SE site for this question, so feel free to shift it) I have some data I'd like to make open to the public - it's a synthesis of some related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx. It's no more than a table with about five columns. I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I get confused towards the end, and I could do with an explanation from people who know the answers. To achieve the star levels I need to: "make your stuff available on the Web (whatever format) under an open license" - trivial; all I have to do is put up notes on the page giving the provenance of the data. "make it available as structured data (e.g., Excel instead of image scan of a table)" - done. "use non-proprietary formats (e.g., CSV instead of Excel)" - done. "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy: does this mean there should be a URI for every line in the table? "link your data to other data to provide context" - this isn't massively clear to me: does this mean giving the provenance of the data? One column of the data I've put out is a link to where the data came from - is that the sort of thing we're looking at? Any and all information and answers welcome... EDIT - or if anyone wants to recommend a place, SE or otherwise, to ask the question, that would be cool...

    Read the article

  • How to convert Markdown files to Dokuwiki, on a PC

    - by Clare Macrae
    I'm looking for a tool or script to convert Markdown files to Dokuwiki format that will run on a PC. This is so that I can use MarkdownPad on a PC to create initial drafts of documents, and then convert them to Dokuwiki format to upload to a Dokuwiki installation that I have no control over. (This means that the Markdown plugin is no use to me.) I could spend time writing a Python script to do the conversion myself, but I'd like to avoid that if such a thing already exists. The Markdown tags I'd like to have supported/converted are: heading levels 1-5; bold, italic, underline, fixed-width font; numbered and unnumbered lists; hyperlinks; horizontal rules. Does such a tool exist, or is there a good starting point available? Things I've found and considered: I initially thought that txt2tags would be helpful, but although it can write both Markdown and Dokuwiki, it is very tied to its own specific input format. I've also seen Markdown2Dokuwiki, and although I'd certainly be willing to use a sed script, even on a PC, it only supports a tiny, tiny part of Markdown's syntax. python-markdown2 also sounded promising, but it only writes out HTML. pandoc - doesn't support Dokuwiki output. MultiMarkdown - does not appear to support Dokuwiki output.
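    Failing an existing tool, a line-oriented Python script covers most of the tags listed above. A minimal sketch - it assumes only those constructs appear (no nested emphasis or code blocks), and since standard Markdown has no underline syntax, that one is left alone:

        import re

        def md_to_dokuwiki(text):
            """Convert a small subset of Markdown to Dokuwiki markup (sketch)."""
            out = []
            for line in text.splitlines():
                # headings: '#' x1..5 -> '======' down to '=='
                m = re.match(r'^(#{1,5})\s+(.*)$', line)
                if m:
                    eq = '=' * (7 - len(m.group(1)))
                    line = '%s %s %s' % (eq, m.group(2), eq)
                # hyperlinks: [text](url) -> [[url|text]]
                line = re.sub(r'\[([^\]]+)\]\(([^)]+)\)', r'[[\2|\1]]', line)
                # italic: *text* -> //text// (bold ** is already the same in both)
                line = re.sub(r'(?<!\*)\*([^*]+)\*(?!\*)', r'//\1//', line)
                # fixed-width: `text` -> ''text''
                line = re.sub(r'`([^`]+)`', r"''\1''", line)
                # lists: '- item' -> '  * item', '1. item' -> '  - item'
                line = re.sub(r'^[-*+]\s+', '  * ', line)
                line = re.sub(r'^\d+\.\s+', '  - ', line)
                # horizontal rule: '---' and friends -> '----'
                if re.match(r'^\s*([-*_])\s*(\1\s*){2,}$', line):
                    line = '----'
                out.append(line)
            return '\n'.join(out)

    Round-tripping a couple of real documents through the Dokuwiki preview would be the quickest way to shake out the corner cases.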

    Read the article
