Search Results

Search found 7634 results on 306 pages for 'preg replace'.

Page 94/306 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • Is Subversion (SVN) supported on Ubuntu 10.04 LTS 32-bit?

    - by Chad
    I've set up Subversion on Ubuntu 10.04, but can't get authentication to work. I believe all my config files are set up correctly; however, I keep getting prompted for credentials on an SVN checkout, as if there is an issue with apache2 talking to svnserve. If I allow anonymous access, checkout works fine. Does anybody know if there is a known issue with Subversion and 10.04, or see an error in my configuration? Below is my configuration:

        # fresh install of Ubuntu 10.04 LTS 32bit
        sudo apt-get install apache2 apache2-utils -y
        sudo apt-get install subversion libapache2-svn subversion-tools -y
        sudo mkdir /svn
        sudo svnadmin create /svn/DataTeam
        sudo svnadmin create /svn/ReportingTeam

        # Set up the svn config file
        sudo vi /etc/apache2/mods-available/dav_svn.conf
        # replace the file contents with the following:
        <Location /svn>
          DAV svn
          SVNParentPath /svn/
          AuthType Basic
          AuthName "Subversion Server"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
          AuthzSVNAccessFile /etc/apache2/svn_acl
        </Location>

        sudo touch /etc/apache2/svn_acl
        # replace the file contents with the following:
        [groups]
        dba_group = tom, jerry
        report_group = tom

        [DataTeam:/]
        @dba_group = rw

        [ReportingTeam:/]
        @report_group = rw

        # Start/stop subversion automatically
        sudo /etc/init.d/apache2 restart
        cd /etc/init.d/
        sudo sh -c 'echo "svnserve -d -r /svn" > svnserve'
        sudo sh -c 'echo "/etc/init.d/apache2 restart" >> svnserve'
        sudo chmod +x svnserve
        sudo update-rc.d svnserve defaults

        # Add svn users
        sudo htpasswd -cpb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -pb /etc/apache2/dav_svn.passwd jerry jerry

        # Test by performing a checkout
        sudo svnserve -d -r /svn
        sudo /etc/init.d/apache2 restart
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam
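
    A quick way to narrow this down (a sketch, using the user "tom" from the config above): an http:// checkout is served by apache2/mod_dav_svn and never touches svnserve, so test that path directly with explicit credentials and watch Apache's log while you do:

        # test the Apache/mod_dav_svn path with explicit credentials
        svn checkout --username tom --password tom http://127.0.0.1/svn/DataTeam /tmp/DataTeam
        # in a second terminal, see why Apache rejects the credentials
        tail -f /var/log/apache2/error.log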

    Read the article

  • Command line solution for removing parts from a binary file?

    - by zsero
    I have a binary file and I would like to remove parts from it. By removing I mean deleting those parts, thus making the file smaller. The parts would be between two ASCII strings. So, for example, the file would look like this:

        ........ start ABCD end ..... start EFGH end ..... start IJKL end ...........

    In this file, I would like to search for the strings "start" and "end" and remove the parts between them. The way I think I can do it is to:

        1. look up all the locations of "start" and "end"
        2. calculate ranges from that
        3. delete those parts

    Now I am using a GUI-based hex editor with its "Search All", "Select Range" and "Delete" commands, but I am sure it would be possible to solve this using some powerful command-line hex/text editor. Do you know any solution for this problem which doesn't require a GUI for the look-up, copy & paste, select-range and delete commands, but is just a few lines of command line? I am interested in both Linux shell scripts and command-line hex editors under Windows; even Python scripts are welcome. Do you think it is possible to solve this problem with just a simple regex replace? Are there any regex replace utilities which handle binary files well?
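
    One command-line approach along these lines (a sketch, not a definitive answer): perl operates on raw bytes, so a single non-greedy regex substitution can do the locate-and-delete in one pass. This assumes the markers are the literal strings "start" and "end", that the markers themselves should be removed along with the content between them, and that nested or unbalanced markers don't occur:

        # -0777 slurps the whole file as one string; /s lets . match any byte, including newlines
        perl -0777 -pe 's/start.*?end//gs' input.bin > output.bin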

    Read the article

  • Prevalent, recurring hard drive failure, Intel MacBook from 2006/2007

    - by SimonSalman
    Hi, Long story: My MacBook's hard drive failed one year ago, just a month after its warranty ended, or: a year and a month after I bought it. After about ten phone calls to Apple's service, they agreed to extend the warranty for another year, so that I got it replaced free of charge. In the meantime, I got to know many MacBook users who experience/report hard drive failures. Every reported crash was preceded by a slowdown of system performance, an increased occurrence of the spinning beach ball wait cursor, and frequent crashes of applications that used to run very robustly until then. It happened (as far as I know) with MacBooks from 2006/2007. All these MacBooks additionally suffer from a recurring wearing down of their "top case". Furthermore:

        - many heavy users had to replace their HDDs three times since 2006/2007
        - failures ended in a head crash, making it impossible to recover data (diagnosis of recovery specialists) in most but not all cases
        - the HDD was a Seagate (which doesn't necessarily have to be the cause, if the majority of that MacBook batch contained Seagate drives)

    And right now (one year after my first disk crash), these symptoms are accumulating on my system, again...

    Short version: prevalent hard drive failure in the MacBook batch from 2006/2007 (i.e. 2.16 GHz Intel Core 2 Duo). I am looking for any (preferably open source) tool for checking the hard drive condition, especially to detect the known "MacBook problem", so that I can replace the disk in time. If any Mac user has found a solution to prevent the repeated failure of their hard drive, I would be very glad to get to know it. I really enjoy my old MacBook, but I hate to get interrupted every year by an HDD crash. BTW, the issue has been in discussion for a long time, but there seems to be no solution so far. Thanks, Simon
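
    One open-source option for the monitoring part (a sketch, assuming smartmontools is installed, e.g. from MacPorts, and that the internal drive is /dev/disk0):

        # overall SMART health verdict
        smartctl -H /dev/disk0
        # full attribute table; rising Reallocated_Sector_Ct or Current_Pending_Sector
        # counts tend to precede the slowdown-then-crash pattern described above
        smartctl -a /dev/disk0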

    Read the article

  • Cisco Spam Blocker, IronPort, Lotus Domino, Integration Help

    - by NickToyota
    Hi serverfault universe, I work for a medium-sized (roughly 200 user) company. We are attempting to integrate our new Cisco Spam & Virus Blocker (IronPort) appliance into our network so that it acts as an incoming filter and then passes mail off to our Lotus Domino mail server, and also vice versa for outbound. The way our network is currently set up, an MX record points to our Domino SMTP incoming server, which is set up to be an inbound gateway and filter (using Symantec mail software for Domino). We want to replace the inbound gateway with the IronPort. Our company has also invested in a pool of external IP addresses, which I believe have been assigned to our web and email servers. What would the proper course of action be to successfully integrate the device? An MX record change? Replace the Domino gateway completely with the IronPort? We attempted to set the IronPort device to the external IP that our MX record points to, without much success. Any help on proper setup would be greatly appreciated.

    Read the article

  • Keyboard issue when using kitty+puttycyg but not when using putty or cygwin alone

    - by kamaradclimber
    I would like a single way to use consoles on my Windows setup. Previously I used putty for remote access to Linux servers and cygwin to have unix-like tools on Windows. Then I discovered kitty, which is a patched putty, and added the puttycyg patch. It provides the same way to connect to remote and local consoles. However, there is strange behavior in vim when connected to the local console (using the puttycyg patch): arrow keys print A/B/C/D and replace the current character with that letter. In insert mode the character is replaced; in normal mode, no modification is made to the document even if the character is displayed as replaced. For instance, when I type: fixed bug with product deleted I get: fixed bbug wiwith prprodudueleteted I have read a lot of questions about this type of issue and googled it, but no answer works for me. The issue is present only for the kitty+puttycyg setup: cygwin alone works perfectly (and putty alone also works for access to Linux servers). Any help would be appreciated!
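
    Arrow keys turning into literal A/B/C/D usually means the escape sequences the terminal sends aren't being recognized, which tends to come down to a TERM mismatch or vim running in vi-compatible mode. A couple of hedged things to try inside the puttycyg session:

        echo $TERM          # should normally report xterm or cygwin for a puttycyg session
        export TERM=xterm   # try forcing it if something else is reported
        # vim in vi-compatible mode inserts A/B/C/D for unrecognized arrow escapes;
        # the presence of a user vimrc (even an empty one) switches compatible mode off
        touch ~/.vimrc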

    Read the article

  • Windows XP Boot Issue - Diagnosing A Hard Drive Failure

    - by duffymo
    My five-year-old HP desktop running Windows XP SP3 wouldn't boot from the hard drive yesterday afternoon. I would see the boot sequence begin, then nothing but a black screen. Fortunately, I had just done an Acronis backup to my external drive in the morning, and I have a bootable USB key. I put the USB key into the drive, powered up the machine, and put the USB key first in line in the boot sequence. Voila! My machine came alive. But now I'm confused as to what the problem is and what to do next. I assumed that my hard drive was toast, but now that the machine is alive I can see files on my C: drive that have changes I made just yesterday. Clearly the drive is not dead. Here are my questions:

        1. What could explain my inability to boot from the hard drive? What would a remedy be?
        2. What's my best course of action? Should I replace the hard drive with a new one?
        3. If I replace the hard drive, do I reinstall the OS and apply the backup I did yesterday?
        4. If I decide that re-installing Windows XP makes no sense, how do I get back the Acronis backup that I did yesterday? I don't want to lose that.

    UPDATE: I just learned one more key fact. I'm having some work done on my house, and I neglected to shut my machine down before the contractor came. My wife said he shut down the power to do some work on a circuit and then powered the house back up. I have a surge protector, but is it possible that cycling the power did some damage?

    Read the article

  • Recover data from physically damaged hard drive. What are my options?

    - by Michael Kniskern
    I was trying to replace the power supply in my desktop PC and ended up physically damaging the data connection from the hard drive to the motherboard. The plastic shelf for the copper prongs on the hard drive broke into the cable. Here is a picture of my handiwork: I went to the Best Buy Geek Squad to discuss my options, and they said they would need to send it to the recovery center; it could cost anywhere between $250 and $1600 USD to recover the data from the hard drive. Is this reasonable for data recovery from a physically damaged hard drive? Are there any other options I can explore? I am going to talk to Data Doctors to see what my options are.

    Update: I took the HD to Data Doctors, and they told me that since the SATA connection was broken, they would need to replace the data connector and then copy the data to a brand-new hard drive. So, with the initial analysis, the cost of replacement parts, and the data recovery fee, it came out to $865.00 USD. The technician specifically stated that if this were an older hard drive, they would just need to replace the data connector; but because information specific to the individual drive is stored in its flash ROM, they need to transfer the data to a brand-new hard drive.

    Read the article

  • replacing the default console emulator under Windows XP

    - by Gilles
    How can I replace the default program providing console windows under Windows XP? I know of alternative programs, and I have a shortcut to start cmd.exe in Console2. But now I want console applications to start in Console2 rather than the default console program, even when I have no control over the program that starts the console application. (I.e., a non-console program starts consoleapp.exe, and I can't change it to start Console2 instead, but I still want the application to be started inside a new instance of Console2.) Note that I want to replace the console itself, that is, the window in which console (i.e. text mode) applications run. And I must be able to run arbitrary, unmodified console applications: a substitute for a specific console program such as Cmd won't do me any good.

    EDIT: So what I'm after is a CSRSS replacement, which leads to this off-topic exchange:

        "I want to know when Microsoft is going to make a decent CSRSS replacement. Not being able to adjust the width of a 'terminal' by resizing the window is a complete joke."
        "Go download the ISE already. (It's included in Win7/2008R2.)"

    But as far as I understand, the ISE is an environment for PowerShell, not a general console emulator.

    Read the article

  • SSL port didn't work on nginx

    - by Jin Lin
    I set up unicorn and nginx on one of my EC2 machines, and my requests load fine with nginx listening on port 80. But when I enable SSL, listening on port 443, it doesn't work, while requests over port 80 still work. My config:

        server {
          listen 443 ssl;

          # replace with your domain name
          server_name domain.com;
          # replace this with your static Sinatra app files, root + public
          root /home/ubuntu/domain/public;

          ssl on;
          ssl_certificate /etc/ssl/domain.crt;
          ssl_certificate_key /etc/ssl/domain.key;

          # maximum accepted body size of client request
          client_max_body_size 4G;
          # the server will close connections after this time
          keepalive_timeout 5;

          location ~ ^/assets/ {
            add_header ETag "";
            gzip_static on;
            expires max;
            add_header Cache-Control public;
          }

          location / {
            proxy_set_header X-Forwarded-Proto https;
            try_files $uri @app;
          }

          location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            # pass to the upstream unicorn server mentioned above
            proxy_pass http://unicorn_server;
          }
        }
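
    A few quick checks from the shell usually split this into "nginx isn't listening on 443" versus "the TLS handshake itself fails" (a sketch; on EC2, also confirm the instance's security group allows inbound 443):

        sudo nginx -t                              # validates the config, including bad cert/key paths
        sudo netstat -tlnp | grep :443             # is nginx actually bound to 443?
        openssl s_client -connect domain.com:443   # does a TLS handshake succeed end to end?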

    Read the article

  • Cannot boot from RAID with GRUB

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS, and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:

        # go to superuser
        sudo bash
        # see RAID state; should be "clean, degraded"
        mdadm -Q -D /dev/md0
        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shut down the computer
        shutdown now
        # physically replace the old disk with the new one, start the system again
        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state; should be "clean, degraded, recovering"
        mdadm -Q -D /dev/md0
        # to watch rebuild status
        cat /proc/mdstat

    After the rebuild completes, "fdisk -l" says that /dev/md0 does not have a valid partition table. So:

        1. "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0
        2. "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices: /dev/md0"

    I cannot boot my system except from /dev/sdb1 or /dev/sda1, and then only in DEGRADED mode... This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *       63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2    940910985   976768064    17928540    5  Extended
        /dev/sdb5    940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? I have a big headache with this.
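
    For what it's worth, /dev/md0 having no partition table is expected here: the array was assembled from partitions (sda1/sdb1), so md0 carries a filesystem directly, and GRUB belongs in the MBR of each physical disk rather than on the array. A hedged sketch of that:

        # put GRUB in the MBR of both members, so the machine can boot from either disk
        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub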

    Read the article

  • How ZFS handles online replacement in a RAID-Z (theoretical)

    - by Kevin
    This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three-disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity level.

    Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced out of preventive maintenance. Some admins might just yank C and install D, which is a little more organized as devices need not change IDs; however, this does leave the array temporarily degraded, so for this example suppose we install D without offlining or removing C. The Solaris docs indicate that we can replace a disk without first offlining it, using a command such as:

        zpool replace pool C D

    This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor". (I don't know the actual terminology used in the internal implementation.) Suppose now that midway through the resilvering, disk A fails. In theory, this should be recoverable, as above the cursor B and D contain sufficient parity and below the cursor B and C contain sufficient parity. However, whether or not this is actually recoverable depends upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't spell out in certain terms). If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing to A and B below the cursor, then we're toast. Some experimenting could answer this question, but I was hoping maybe someone on here already knows which way ZFS handles this situation. Thank you in advance for any insight!
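
    For what it's worth, the experiment is cheap to set up, since zpool accepts plain files as vdevs. A sketch (pool and file names made up, run on a scratch system) of forcing exactly the scenario described above:

        # four 256 MB file-backed "disks": A, B, C in the pool, D as the replacement
        mkfile 256m /tmp/A /tmp/B /tmp/C /tmp/D      # on Linux: truncate -s 256m /tmp/{A,B,C,D}
        zpool create testpool raidz1 /tmp/A /tmp/B /tmp/C
        # write some data so the resilver takes a measurable amount of time
        dd if=/dev/urandom of=/testpool/junk bs=1024k count=150
        # start the online replacement of C with D, then fail A mid-resilver
        zpool replace testpool /tmp/C /tmp/D
        zpool offline testpool /tmp/A
        # (if zpool refuses the offline outright, that is itself an answer of sorts)
        zpool status -v testpool    # shows whether the pool stays readable or faults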

    Read the article

  • Move OS from RAID5 array to RAID 1 arrays

    - by Antoine
    I want to give a last boost to my old ProLiant ML350 G5 server, which just needs to stay reliable for a few more years! With a fixed budget of about $1500 (I do not have more), I plan to replace the CPU (and add a second one), replace the cache battery of my RAID controller (E200i), double the RAM, and change all hard drives. I have 7 HDDs (SAS 10k rpm, 72 GB) + 1 spare in RAID5, and my system is FULL (no empty tray, full disks). In my current RAID5 array, I have 2 partitions:

        - 1 OS partition, 20 GB
        - 1 data partition, 350 GB

    I plan to replace these 8 disks with:

        - 2 x 300 GB SAS 15k rpm in RAID1 (= 1 partition for the OS)
        - 2 x 2 TB SATA 7.2k rpm in RAID1 (= 1 partition for DATA)

    My biggest constraint is that I have only one day to upgrade the server. Therefore, I'm looking to clone all my files (OS + data partition) to the new arrays, i.e.:

        - the OS partition shall be cloned to the RAID1 "2x300GB" array
        - the data partition shall be cloned to the RAID1 "2x2TB" array

    My second problem is that I need to physically remove all the old hard drives before inserting the new ones. I'm running Windows Server 2003 R2, and even though MS support will expire soon, I cannot buy a new licence and spend time on reconfiguration. Obviously, with $1500, I also cannot buy a new server that I could start configuring from now! I thought about ASR (NTBackup), but I have no floppy drive (and do not really want to invest in one!). I thought about a Clonezilla clone, and read this interesting link: Windows Server 2003 - move C: partition to a new SAS disk, but I'm not so confident about using Clonezilla with RAID5. What would be the best option to quickly and easily (if possible!) "copy/paste" my OS (so no need to reinstall and reconfigure everything) and DATA / programs / services, etc.? Thanks for your comments

    Read the article

  • Is 2 GB of RAM better than 2.5 GB?

    - by pibboater
    My laptop has two slots for RAM and currently has two 512 MB chips, for 1 GB total. Windows XP is running terribly slowly on it, so I want to upgrade the RAM. I could buy two 1 GB chips to replace both of the current 512 MB chips, giving me 2 GB of RAM. Or, for the same price, I could buy one 2 GB chip to replace just one of the 512 MB chips, giving me 2.5 GB total. The RAM it takes is PC2-4200 533 MHz DDR2. What do you think would be better: buying two 1 GB chips so it can take advantage of dual-channel operation, or buying one 2 GB chip to end up with more total RAM but no dual-channel operation? Like I said, the price is the same, so performance is the only consideration. I'm not doing anything especially intensive like video or photo editing, just having multiple Office programs open, playing music, browsers, etc., but currently even opening the first application takes forever. If it matters, the laptop is a Toshiba Qosmio G25-AV513 running Windows XP Media Center SP3. Thanks! Kevin

    Read the article

  • How do I force a restore over an existing database?

    - by Ian Boyd
    I have a database, and I want to force a restore over top of it. I check the option:

        Overwrite the existing database (WITH REPLACE)

    But, as expected, SSMS is unable to overwrite the existing database. Of course I don't want different filenames; I want to overwrite the existing database. How do I force a restore over an existing database? And for the Google search crawler:

        File '%s' is claimed by '%s'(4) and '%s'(3). The WITH MOVE clause can be used
        to relocate one or more files.
        RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3176)

    Update: The script (before I deleted the database, because I needed to get it done) was:

        RESTORE DATABASE [HealthCareGovManager]
        FILE = N'HealthCareGovManager_Data',
        FILE = N'HealthCareGovManager_Archive',
        FILE = N'HealthCareGovManager_AuditLog'
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
        MOVE N'HealthCareGovManager_Data' TO N'D:\CGI Data\HealthCareGovManager.MDF',
        MOVE N'HealthCareGovManager_Archive' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_Log' TO N'D:\CGI Data\HealthCareGovManager.LDF',
        NOUNLOAD, REPLACE, STATS = 10

    I used the UI to delete the existing database, so that I could use the UI to force an overwrite of the (non)existing database. Hopefully there can be an answer so that the next guy can find one. No, nobody was in the context of the database (the error message from other connections is quite different from this error, and I only got to see this error after I killed the other connections).

    Read the article

  • RSS "Newspaper" / Google Reader replacement

    - by Sean D
    With the impending demise of Google Reader, I've been looking at ways to replace it. I've decided that what might be cool is to get an email every morning with all the updates from the last twenty-four hours, maybe in the style of a newspaper. That's not a very original idea, since sites like http://fivefilters.org/pdf-newspaper/ and http://feedjournal.com/ already do this, but they both have various drawbacks. In particular, both require a single feed, will just take the last n items, and require clicking around on their website. The Pro option for FeedJournal seems almost like it would do the job, but the project seems to be dead and there's no way to buy it. Before I hack together something crazy, I'd like to know if there's a better solution to my problem. In short: I want to replace Google Reader with a daily PDF email; how should I do this?

    edit: I didn't award the bounty because nobody solved the problem (not that I'm assuming it has a solution). Answers like "well, for the way I do things this wouldn't work" aren't actually helpful, even if they are well-meaning.

    Read the article

  • XSLT 2.0 unparsed text and formatting

    - by Maha
    I want the unparsed text to be formatted (bold characters, or an increased font size) based on a tag; the example here replaces the searched-for word with bold characters. Example input:

        test <b> how to <b> when bold <b> when there is more <b> than one place to bold

    Can you please advise what is wrong here?

        <xsl:variable name="tcline" select="unparsed-text('generic_tc.txt','UTF-8')"/>
        <xsl:analyze-string select="$tcline" regex="\'<b>'(.*?)\'<b>'">
          <xsl:matching-substring>
            <xsl:value-of select="replace($tcline,'\"<b>"(.*?)\"<b>"','<em>$1\/em/g;')"/>
          </xsl:matching-substring>
          <xsl:non-matching-substring>
            <xsl:value-of select="."/>
          </xsl:non-matching-substring>
        </xsl:analyze-string>

    Read the article

  • GIT and Django Projects

    - by Garfonzo
    I have two servers, a Dev server and a Production server. The Production server runs a live Django site, while the Dev server has a copy of the Django project. I use the Dev server to work on the site, make improvements, fix bugs, etc. Once I am satisfied with how the Dev version is working, I move the whole Django directory from the Dev server and replace the same directory on the Production server. The two servers are not on the same LAN, so the process is not straightforward. There are a few issues with this so far:

        - Moving the whole directory is laborious and time consuming.
        - If I only change a few files, it is even more tedious to replace just those files than the whole directory, since the project is getting fairly large and I worry that I'll miss something.
        - I often run into permission issues after I've moved things.
        - It's super inefficient, and, due to lack of time, I haven't bothered figuring out a new method. Now it's just getting out of hand and I need to address the situation.

    I am thinking I need to move to a Git repository for this process. But my question is how I would set this all up. Do I host the repository on the Production server, pull from the Dev server, do work, then commit? Then pull from the Production server (the same server the repo is hosted on) to run the current working version? Do I host the repo on the Dev server, pulling from the same server to do work on the repo, then pull a working version onto the Production server? Should I host the repo on a different server than the Production server and the Dev server (a third server)? Are there any special considerations with Django and repos that I need to worry about? Thanks for the help :)
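
    One widely used arrangement (a sketch; the paths, hostname and branch are made up) keeps a bare repository on the Production box with a post-receive hook that checks out whatever is pushed. A deploy then becomes a single git push over SSH, and the files are always written by the same user, which also sidesteps the permission problem:

        # on Production: a bare repo plus the directory the site actually runs from
        git init --bare /srv/repos/mysite.git
        cat > /srv/repos/mysite.git/hooks/post-receive <<'EOF'
        #!/bin/sh
        # check the pushed commit out into the live Django directory
        GIT_WORK_TREE=/srv/django/mysite git checkout -f master
        EOF
        chmod +x /srv/repos/mysite.git/hooks/post-receive

        # on Dev: commit as usual, then deploy with a push
        git remote add production ssh://user@production-host/srv/repos/mysite.git
        git push production master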

    Read the article

  • ZFS pool error: how to determine which drive failed in the past

    - by Kendrick
    I had been copying data from my pool so that I could rebuild it with a different version, to move away from Solaris 11 to one that is portable between FreeBSD/OpenIndiana etc. It was copying at 20 MB/s the other day, which is about all my desktop drive can handle writing from the network. Suddenly, last night it went down to 1.4 MB/s. I ran zpool status today and got this:

          pool: store
         state: ONLINE
        status: One or more devices has experienced an unrecoverable error. An
                attempt was made to correct the error. Applications are unaffected.
        action: Determine if the device needs to be replaced, and clear the errors
                using 'zpool clear' or replace the device with 'zpool replace'.
           see: http://www.sun.com/msg/ZFS-8000-9P
          scan: none requested
        config:

            NAME          STATE     READ WRITE CKSUM
            store         ONLINE       0     0     0
              raidz1-0    ONLINE       0     0     0
                c8t3d0p0  ONLINE       0     0     2
                c8t4d0p0  ONLINE       0     0    10
                c8t2d0p0  ONLINE       0     0     0

    It is currently a 3 x 1 TB drive array. What tools would best be used to determine what the error was and which drive is failing? Per the admin doc:

        The second section of the configuration output displays error statistics. These errors are divided into three categories:
        READ – I/O errors occurred while issuing a read request.
        WRITE – I/O errors occurred while issuing a write request.
        CKSUM – Checksum errors. The device returned corrupted data as the result of a read request.

    It says low counts could be anything from a power flux to a disk event, but gives no suggestions as to what tools to use to check and determine the cause.
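
    On Solaris there are a couple of places to look beyond zpool status (a sketch; the device names follow the c8tXd0p0 naming in the output above):

        iostat -En              # per-device soft/hard/transport error counters plus vendor/model info
        fmdump -eV | head -50   # FMA's log of the ereports behind those CKSUM counts
        # clear the counters and scrub; if CKSUM errors come back on c8t4d0p0,
        # that disk is the prime suspect
        zpool clear store
        zpool scrub store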

    Read the article

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly, but I can't for the life of me get the replacement server at site 2 up and running. I'm trying to replace like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have Active Directory, DNS Server, DHCP Server and WINS Server configured, with all the same settings as the old server (except IP address and name). I can access Active Directory, but I can't do anything; add, edit and delete all return "the directory service is unavailable". No one can log in on any of the computers at site 2, and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not the net). I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions of where I can start to troubleshoot this? It's driving me crazy, and I only have a day before all the users are back on site.

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era NetWare 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes:

        - RAID 1: 2 17 GB SCSI disks, Seagate ST318417W
        - RAID 5: 3 4 GB SCSI disks, 2 Seagate ST34573W and 1 ST34572W

    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close timing of these failures I have serious doubts that I'll be able to avoid catastrophic failure through the November target without restoring the RAID redundancy: it'll only take one more drive failure anywhere and I'm completely hosed.

    We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried just swapping them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable.

    As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into NetWare, at which point I can use CI/O Array Management Software Version 2.0 to look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone.

    Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good.

    As for finding reliable spares, I have no clue where to even begin looking for a new 4 GB SCSI drive, or even which exact SCSI variant I'm looking for, as the hardware has gone through a few different iterations over time. Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of NetWare and DOS than I ever developed, or have since forgotten (I'm not exactly a DOS neophyte, either).

    Part of my problem is that this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old NetWare server, a line on a floppy with better software for the RAID controller, a recommendation for a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.

    Update: For backups, we have good (recently verified via restore) backups of the data only; nothing for the software that actually runs things.

    Update 2: Just a progress report: I currently have a working NetWare 3.12 install in VMware Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty NetWare volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and NetWare volumes on my existing server, figuring out from that information what modules need to be added to NetWare, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMware Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • jqGrid: setting a custom formatter on a dynamic column collection

    - by user312249
    I am using jqGrid. We are building dashboard functionality with jQuery: different applications just have to register their respective application page, and the dashboard will render that page. To achieve this we are using jqGrid as one of the jQuery plugins. The following is my code:

        var ph = '#' + placeHolder;
        var _prevSort;
        $.ajax({
            url: dataUrl,
            dataType: "json",
            async: true,
            success: function(json) {
                pager = $('#' + pager);
                if (json.showPager === "false") {
                    pager = eval(json.showPager);
                }
                dataUrl += "&jqSession=true";
                $(ph).jqGrid({
                    url: dataUrl,
                    datatype: "json",
                    sortclass: "grid_sort",
                    colNames: JSON.parse(json.colNames),
                    colModel: JSON.parse(json.colModel),
                    forceFit: true,
                    rowNum: json.rowNum,
                    rowList: JSON.parse(json.rowList),
                    pager: pager,
                    sortname: json.sortName,
                    caption: json.caption,
                    viewrecords: true,
                    viewsortcols: true,
                    sortorder: json.sortOrder,
                    footerrow: summaryFooter,
                    userDataOnFooter: summaryFooter,
                    jsonReader: { root: "rows", row: "row", repeatitems: false, id: json.sortName },
                    gridComplete: function() {
                        if (showFooter) {
                            $(ph).append("" + json.footerRow + "");
                        }
                        if (json.additionalContent != null) {
                            $("#" + xContID).html(json.additionalContent);
                        }
                        $("ui-icon-asc").append("IMG");
                        var _rows = $(".jqgrow");
                        if (json.rows.length > 0) {
                            for (var i = 1; i < _rows.length; i += 1) {
                                _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", "");
                                if (i % 2 == 1) {
                                    _rows[i].attributes["class"].value += " ui-jqgrid-altrow";
                                }
                            }
                            var gMaxHeight = getGridMaxHeight();
                            var gHeight = ($(ph + " tr").length + 1) * ($($(".jqgrow")[0]).height());
                            if (gHeight <= gMaxHeight) {
                                $(ph).parent().height(gHeight);
                            } else {
                                $(ph).parent().height(gMaxHeight);
                            }
                        } else {
                            $(ph).prepend("" + gridNoDataMsg + "");
                            $(ph).parent().height(60);
                        }
                    },
                    onSortCol: function(index, iCol, sortorder) {
                        dataUrl = dataUrl.replace("&jqSession=true", "");
                        $(ph).jqGrid().setGridParam({ url: dataUrl }).trigger("reloadGrid");
                        var _colName = "#jqgh" + index;
                        // $(_prevSort).parent().removeClass("ui-jqgrid-sorted");
                        // $(_prevSort).parent().addClass("ui-state-default");
                        // $(_colName).parent().addClass("ui-jqgrid-sorted");
                        // $(_colName).parent().removeClass("ui-state-default");
                        _prevSort = _colName;
                        var _rows = $(".jqgrow");
                        for (var i = 1; i < _rows.length; i += 1) {
                            _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", "");
                            if (i % 2 == 1) {
                                _rows[i].attributes["class"].value += " ui-jqgrid-altrow";
                            }
                        }
                    }
                }).navGrid('#' + pager, { search: false, sort: false, edit: false, add: false, del: false, refresh: false });
                // end of grid
                $("#" + loadid).empty();
                gGridIds[gGridIds.length] = placeHolder;
                SetGridSizes();
            },
            error: function() {
                $("#" + loadid).html(loadingErr);
            }
        });

    As you can see from the code, I am getting the column collection dynamically (the application page I am calling returns JSON, with the colNames collection in it). Everything is working fine; the only issue comes when we try to apply a custom formatter to a column. This issue appears only when we assign "colModel" to jqGrid dynamically. Appreciate the help. Thanks in advance

    Read the article

  • SMO ConnectionContext.StatementTimeout setting is ignored

    - by Woody
    I am successfully using PowerShell with SMO to back up most databases. However, I have several large databases on which I receive a timeout error: "System.Data.SqlClient.SqlException: Timeout expired". The timeout consistently occurs at 10 minutes. I have tried setting ConnectionContext.StatementTimeout to 0, to 6000, and to [System.Int32]::MaxValue. The setting made no difference. I have found a number of Google references which indicate that setting it to 0 makes it unlimited. No matter what I try, the timeouts consistently occur at 10 minutes. I even set Remote Query Timeout on the server to 0 (via Studio Manager), to no avail. Below is my SMO connection, where I set the timeout, and the actual backup function. Further below is the output from my script.

    UPDATE: Interestingly enough, I wrote the backup function in C# using VS 2008 and the timeout override does work in that environment. I am in the process of incorporating that C# process into my PowerShell script until I can find out why the timeout override does not work with just PowerShell. This is extremely annoying!

        function New-SMOconnection {
            Param (
                $server,
                $ApplicationName = "PowerShell SMO",
                [int]$StatementTimeout = 0
            )
            # Write-Debug "Function: New-SMOconnection $server $connectionname $commandtimeout"
            if (test-path variable:\conn) {
                $conn.connectioncontext.disconnect()
            }
            else {
                $conn = New-Object('Microsoft.SqlServer.Management.Smo.Server') $server
            }
            $conn.connectioncontext.applicationName = $applicationName
            $conn.ConnectionContext.StatementTimeout = $StatementTimeout
            $conn.connectioncontext.Connect()
            $conn
        }

        $smo = New-SMOConnection -server $server
        if ($smo.connectioncontext.isopen -eq $false) {
            Throw "Could not connect to server $($server)."
        }

        Function Backup-Database {
            Param([string]$dbname)
            $db = $smo.Databases.get_Item($dbname)
            if (!$db) { "Database $dbname was not found"; Return }
            $sqldir = $smo.Settings.BackupDirectory + "\$($smo.name -replace ("\\", "$"))"
            $s = ($server.Split('\'))[0]
            $basedir = "\\$s\" + $($sqldir -replace (":", "$"))
            $dt = get-date -format yyyyMMdd-HHmmss
            $dbbk = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
            $dbbk.Action = 'Database'
            $dbbk.BackupSetDescription = "Full backup of " + $dbname
            $dbbk.BackupSetName = $dbname + " Backup"
            $dbbk.Database = $dbname
            $dbbk.MediaDescription = "Disk"
            $target = "$basedir\$dbname\FULL"
            if (-not(Test-Path $target)) { New-Item $target -ItemType directory | Out-Null }
            $device = "$sqldir\$dbname\FULL\" + $($server -replace("\\", "$")) + "_" + $dbname + "_FULL_" + $dt + ".bak"
            $dbbk.Devices.AddDevice($device, 'File')
            $dbbk.Initialize = $True
            $dbbk.Incremental = $false
            $dbbk.LogTruncation = [Microsoft.SqlServer.Management.Smo.BackupTruncateLogType]::Truncate
            If (!$copyonly) {
                If ($kill) { $smo.KillAllProcesses($dbname) }
                $dbbk.SqlBackupAsync($server)
            }
            $dbbk
        }

    Script output:

        Started SQL backups for server LCFSQLxxx\SQLxxx at 05/06/2010 15:33:16
        Statement TimeOut value set to 0.
        DatabaseName    : OperationsManagerDW
        StartBackupTime : 5/6/2010 3:33:16 PM
        EndBackupTime   : 5/6/2010 3:43:17 PM
        StartCopyTime   : 1/1/0001 12:00:00 AM
        EndCopyTime     : 1/1/0001 12:00:00 AM
        CopiedFiles     :
        Status          : Failed
        ErrorMessage    : System.Data.SqlClient.SqlException: Timeout expired. The timeout period
                          elapsed prior to completion of the operation or the server is not responding.
                          The backup or restore was aborted. 10 percent processed. 20 percent processed.
                          30 percent processed. 40 percent processed. 50 percent processed.
                          60 percent processed. 70 percent processed.
           at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
           at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
           at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
        Ended backups at 05/06/2010 15:43:23

    Read the article

  • Problem with XML Deserialization C#

    - by alex
    I am having trouble with XML deserialization. In a nutshell, I have 2 classes: SMSMessage and SMSSendingResponse. I call an API that takes a bunch of parameters (represented by the SMSMessage class). It returns an XML response that looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <data>
          <status>1</status>
          <message>OK</message>
          <results>
            <result>
              <account>12345</account>
              <to>012345678</to>
              <from>054321</from>
              <message>Testing</message>
              <flash></flash>
              <replace></replace>
              <report></report>
              <concat></concat>
              <id>f8d3eea1cbf6771a4bb02af3fb15253e</id>
            </result>
          </results>
        </data>

    Here is the SMSMessage class (with the XML serialization attributes so far):

        using System.Xml.Serialization;

        namespace XMLSerializationHelp
        {
            [XmlRoot("results")]
            public class SMSMessage
            {
                public string To { get { return Result.To; } }
                public string From { get { return Result.From; } }
                public string Message { get { return Result.Message; } }

                [XmlElement("result")]
                public Result Result { get; set; }
            }
        }

    Here is SMSSendingResponse:

        using System.Xml.Serialization;

        namespace XMLSerializationHelp
        {
            [XmlRoot("data")]
            public class SMSSendingResponse
            {
                // should come from the results/result/account element, in our example "12345"
                public string AccountNumber { get { return SMSMessage.Result.AccountNumber; } }

                // should come from the "status" xml element
                [XmlElement("status")]
                public string Status { get; set; }

                // should come from the "message" xml element (in our example - "OK")
                [XmlElement("message")]
                public string Message { get; set; }

                // should come from the "id" xml element (in our example - "f8d3eea1cbf6771a4bb02af3fb15253e")
                public string ResponseID { get { return SMSMessage.Result.ResponseID; } }

                // should be created from the results/result element - ignore flash, replace, report and concat elements for now
                [XmlElement("results")]
                public SMSMessage SMSMessage { get; set; }
            }
        }

    Here is the other class (Result). I want to get rid of this, so that only the 2 previously mentioned classes remain:

        using System.Xml.Serialization;

        namespace XMLSerializationHelp
        {
            [XmlRoot("result")]
            public class Result
            {
                [XmlElement("account")]
                public string AccountNumber { get; set; }

                [XmlElement("to")]
                public string To { get; set; }

                [XmlElement("from")]
                public string From { get; set; }

                [XmlElement("message")]
                public string Message { get; set; }

                [XmlElement("id")]
                public string ResponseID { get; set; }
            }
        }

    I don't want SMSMessage to be aware of SMSSendingResponse, as that will be handled by a different part of my application.

    Read the article

  • DataTable to JSON

    - by Joel Coehoorn
    I recently needed to serialize a DataTable to JSON. Where I'm at, we're still on .NET 2.0, so I can't use the JSON serializer in .NET 3.5. I figured this must have been done before, so I went looking online and found a number of different options. Some of them depend on an additional library, which I would have a hard time pushing through here. Others require first converting to List<Dictionary<>>, which seemed a little awkward and needless. Another treated all values like a string. For one reason or another I couldn't really get behind any of them, so I decided to roll my own, which is posted below. As you can see from reading the //TODO comments, it's incomplete in a few places. This code is already in production here, so it does "work" in the basic sense. The places where it's incomplete are places where we know our production data won't currently hit it (no timespans or byte arrays in the db). The reason I'm posting here is that I feel like this can be a little better, and I'd like help finishing and improving this code. Any input welcome.

        public static class JSONHelper
        {
            public static string FromDataTable(DataTable dt)
            {
                string rowDelimiter = "";

                StringBuilder result = new StringBuilder("[");
                foreach (DataRow row in dt.Rows)
                {
                    result.Append(rowDelimiter);
                    result.Append(FromDataRow(row));
                    rowDelimiter = ",";
                }
                result.Append("]");

                return result.ToString();
            }

            public static string FromDataRow(DataRow row)
            {
                DataColumnCollection cols = row.Table.Columns;
                string colDelimiter = "";

                StringBuilder result = new StringBuilder("{");
                // use index rather than foreach, so we can use the index for both the row and cols collection
                for (int i = 0; i < cols.Count; i++)
                {
                    result.Append(colDelimiter).Append("\"")
                          .Append(cols[i].ColumnName).Append("\":")
                          .Append(JSONValueFromDataRowObject(row[i], cols[i].DataType));

                    colDelimiter = ",";
                }
                result.Append("}");

                return result.ToString();
            }

            // possible types:
            // http://msdn.microsoft.com/en-us/library/system.data.datacolumn.datatype(VS.80).aspx
            private static Type[] numeric = new Type[] { typeof(byte), typeof(decimal), typeof(double),
                                                         typeof(Int16), typeof(Int32), typeof(SByte),
                                                         typeof(Single), typeof(UInt16), typeof(UInt32),
                                                         typeof(UInt64) };

            // I don't want to rebuild this value for every date cell in the table
            private static long EpochTicks = new DateTime(1970, 1, 1).Ticks;

            private static string JSONValueFromDataRowObject(object value, Type DataType)
            {
                // null
                if (value == DBNull.Value) return "null";

                // numeric
                if (Array.IndexOf(numeric, DataType) > -1)
                    return value.ToString(); // TODO: eventually want to use a stricter format

                // boolean
                if (DataType == typeof(bool))
                    return ((bool)value) ? "true" : "false";

                // date -- see http://weblogs.asp.net/bleroy/archive/2008/01/18/dates-and-json.aspx
                if (DataType == typeof(DateTime))
                    return "\"\\/Date(" + new TimeSpan(((DateTime)value).ToUniversalTime().Ticks - EpochTicks).TotalMilliseconds.ToString() + ")\\/\"";

                // TODO: add Timespan support
                // TODO: add Byte[] support
                // TODO: this would be _much_ faster with a state machine

                // string/char
                return "\"" + value.ToString().Replace(@"\", @"\\").Replace(Environment.NewLine, @"\n").Replace("\"", @"\""") + "\"";
            }
        }

    Read the article
