Search Results

Search found 10686 results on 428 pages for 'enterprise architect daily notes'.

  • DB2 Integrity Checks and Exception Tables

    - by imthefirestartr
    I am working on planning a migration of a DB2 8.1 database from a horrible IBM encoding to UTF-8 to support further languages etc. I am encountering an issue that I am stuck on. A few notes on this migration: We are using db2move to export and load the data and db2look to get the details of the database (tablespaces, tables, keys etc). We found the loading process worked nicely with db2move import; however, the data takes 7 hours to load, and this was unacceptable downtime for when we actually complete the conversion on the main database. We are now using db2move load, which is much faster as it seems to simply throw the data in without integrity checks. Which leads to my current issue. After completing the db2move load process, several tables are in a check pending state and require integrity checks. Integrity checks are done via the following:
        set integrity for <table> immediate checked
    This works for most tables; however, some tables give an error:
        DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL3603N Check data processing through the SET INTEGRITY statement has found integrity violation involving a constraint with name "blah.SQL120124110232400". SQLSTATE=23514
    The internets tell me that the solution to this issue is to create an exception table based on the actual table and tell the SET INTEGRITY command to send any exceptions to that table (as below):
        db2 create table blah_EXCEPTION like blah
        db2 SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION
    NOW, here is the specific issue I am having! The above forces all the rows with issues to the specified exception table. Well, that's just super, but I cannot lose data in this conversion; it's simply unacceptable. The internets and IBM have a vague description of sending the violations to the exception tables and then "dealing with the data" that is in the exception table. Unfortunately, I am not clear what this means, and I was hoping that some wise individual knows and could help me out and let me know how I can retrieve this data from these tables and place the data in the original/proper table rather than these exception tables. Let me know if you have any questions. Thanks!
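
    For reference, the usual exception-table workflow keeps the violating rows queryable so they can be repaired and re-inserted rather than lost. A rough sketch using the table names from the post; the UPDATE is only a made-up example of a fix, since the real repair depends on which constraint is failing:

        -- run the check, routing violators to the exception table
        db2 "CREATE TABLE blah_EXCEPTION LIKE blah"
        db2 "SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION"

        -- the offending rows now sit in blah_EXCEPTION; inspect them
        db2 "SELECT * FROM blah_EXCEPTION"

        -- repair whatever violates the constraint (illustrative only)
        db2 "UPDATE blah_EXCEPTION SET parent_id = 42 WHERE parent_id IS NULL"

        -- move the repaired rows back (they are re-checked on insert) and clean up
        db2 "INSERT INTO blah SELECT * FROM blah_EXCEPTION"
        db2 "DELETE FROM blah_EXCEPTION"

    SET INTEGRITY itself only moves the violating rows, so as long as blah_EXCEPTION is kept until the rows are fixed and re-inserted, nothing is actually lost.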

    Read the article

  • Dual booting Windows 7 & 8.1, using the Windows 8 Startup Options Menu, when Windows 8.1 is already installed and you want to add Windows 7

    - by Josh
    There are many excellent guides out there that explain how to dual-boot Windows 7 & 8. However, they are written for people starting with a Windows 7 installation and adding a Windows 8 installation to a separate partition. From what I'm reading, following this procedure will result in Windows 8 installing and configuring the Startup Options Menu with an option to boot Windows 7 & 8. However, in my situation I have a Windows 8.1 machine that I want to install Windows 7 on, and enable dual-boot, where I can use the Startup Options Menu to select the OS to boot. I haven't been able to determine how to do this. From everything I've been able to find, it looks like if I install Windows 7, it is going to take over the boot loader process, and I won't have access to the Windows 8 "Startup Options Menu." This answer suggests I boot to VHD, but notes a drawback: "You can't do this if the C: drive is encrypted using ANY encryption scheme, be that BitLocker or 3rd party. The location of the .VHD file you are booting to must reside on an unencrypted volume." Well, that's a bummer, because that's exactly what I wanted to do: I wanted my Windows 7 partition to be encrypted, and my Windows 8 partition to also be encrypted, the idea being that when either OS was booted, it was completely locked out from accessing data on the other OS's partition. At this point, I'm thinking my only option is to install Windows 7, and then re-install Windows 8, which will give me the dual-boot option... am I right? Or is there a way to make this work? I'm thinking that I would need to figure out a process like this:
    1. Configure the Windows Startup Options Menu with a "blank" entry for Windows 7, pointing to an empty partition.
    2. Insert the Windows 7 installation media, install Windows 7, and somehow restrict it to that partition (i.e., prevent it from "taking over" the Startup Options Menu).
    Is this possible, and if so, how can I accomplish this? My concern is that if I simply install Windows 7 to a separate partition, Windows 7 will take over the entire boot process and I won't be able to get to my Windows 8 installation any more.
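
    One data point that may help frame the question: the Windows 8-style Startup Options Menu can normally be put back in charge after a Windows 7 install without reinstalling Windows 8.1, by rebuilding the boot files from the 8.1 partition. A hedged sketch from an elevated command prompt; the drive letter D: is an assumption standing in for wherever the Windows 8.1 installation lives:

        rem put the Windows 8.1 boot manager back in charge of the boot menu
        bcdboot D:\Windows

        rem switch the menu back to the graphical Startup Options style
        bcdedit /set {default} bootmenupolicy standard

    Windows 7 setup normally leaves its own entry in the BCD store, so after these two steps both operating systems should appear in the menu; whether that coexists with the both-partitions-encrypted requirement is a separate question.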

    Read the article

  • BeautifulSoup recursive attribute

    - by Marcos Placona
    Hi, trying to parse an XML with BeautifulSoup, but hit a brick wall when trying to use the "recursive" attribute with findAll(). I have a pretty odd XML format, shown below:

        <?xml version="1.0"?>
        <catalog>
          <book id="bk101">
            <author>Gambardella, Matthew</author>
            <title>XML Developer's Guide</title>
            <genre>Computer</genre>
            <price>44.95</price>
            <publish_date>2000-10-01</publish_date>
            <description>An in-depth look at creating applications with XML.</description>
            <catalog>true</catalog>
          </book>
          <book id="bk102">
            <author>Ralls, Kim</author>
            <title>Midnight Rain</title>
            <genre>Fantasy</genre>
            <price>5.95</price>
            <publish_date>2000-12-16</publish_date>
            <description>A former architect battles corporate zombies, an evil sorceress, and her own childhood to become queen of the world.</description>
            <catalog>false</catalog>
          </book>
        </catalog>

    As you can see, the catalog tag repeats inside the book tag, which causes an error when I try to do something like:

        from BeautifulSoup import BeautifulStoneSoup as BSS

        catalog = "catalog.xml"

        def open_rss():
            f = open(catalog, 'r')
            return f.read()

        def rss_parser():
            rss_contents = open_rss()
            soup = BSS(rss_contents)
            items = soup.findAll('catalog', recursive=False)
            for item in items:
                print item.title.string

        rss_parser()

    As you will see, on my soup.findAll I've added recursive=False, which in theory would make it not recurse through the item found, but skip to the next one. This doesn't seem to work, as I always get the following error:

        File "catalog.py", line 17, in rss_parser
            print item.title.string
        AttributeError: 'NoneType' object has no attribute 'string'

    I'm sure I'm doing something stupid here, and would appreciate it if someone could give me some help on how to solve this problem. Changing the XML structure is not an option, and this code needs to perform well as it will potentially parse a large XML file. Thanks in advance, Marcos
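
    For what it's worth, a minimal sketch of one way to get one title per book with the same BeautifulSoup 3.x API, iterating over the <book> elements directly rather than over <catalog> (assuming the intended output is one title per book):

        from BeautifulSoup import BeautifulStoneSoup as BSS

        soup = BSS(open("catalog.xml").read())

        # walk the direct <book> children of the root <catalog>;
        # the nested <catalog>true</catalog> tags are simply never matched
        for book in soup.catalog.findAll('book', recursive=False):
            print book.title.string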

    Read the article

  • How can I play my MP3 files through my stereo system?

    - by Joel Coehoorn
    Here's the situation. Like many others I have my entire CD collection ripped to my PC, along side other music I've acquired through iTunes or Amazon MP3. Also like many others the speakers at my PC are underpowered, and likely included in my monitor as an afterthought. This is fine for most use: system sounds, YouTube, etc. Even games sounds and music. But I'd like something a little better for when I really want to listen to music. And I have it; in the next room — barely 25 feet away as the crow flies — sits a nice 400 watts stereo system. The stereo supports MP3 CDs, so up to this point I've just kept a few CD-RW disks around to keep most of my collection available. But it's time to move on to something a little more sophisticated. What are my options for using the MP3 files available on my computer as an input for this stereo? Some notes: I want to be able to control what song the stereo is playing without having to go to the PC, including setting up and retrieving playlists. Ideally this should even be able to wake the PC from sleep mode to start playing. I primarily use Windows Media Player on the PC (which runs Windows Vista). However, the files themselves live on a server running Windows Server 2008, and so I could also install something on the server and run everything from there. The axillary input on the stereo is unfortunately limited to a 1/8 inch stereo mini-plug. I'm loath to run wires across two rooms, and I'm considering moving the stereo to the garage at some point. Therefore a wireless solution that can easily cover about 100 ft or so is preferred. I already have a Wi-Fi network ready, but it's secured so anything using Wi-Fi should make it easy to set up security. Bonus points for doing it in under $85 shipped at Amazon (I'm hoping to pay for this via $85 worth of Amazon gift cards). I know this a pretty tight budget, so just getting close is okay. Bonus points for something that remembers multiple profiles (keep my favorite songs separate from the wife's). Bonus points for a remote that can also replace my stereo remote, so I only need one device to control everything. I'm not holding my breath on this one given my price range, though. Bonus points if I can also use for Internet radio. Doing some research on my own as well. This looks like it'll do exactly what I want, but it lists at an outrageous $299: http://www.linksysbycisco.com/US/en/products/DMP100

    Read the article

  • Exchange 2007 Standard Edition

    - by Phrontiste
    We Have : Exchange 2007 Standard Edition IBM System X3650 2 x Intel Xeon 5430 2.66 GHz Version 8.1 Build 240.6 Mailbox, Hub Transport, Client Access Role Installed on One Box Total Number of Mailboxes : 110 - 130 6 Physical Disks Disk 0,1 (68 GB) = Raid-1, OS Partition ( C: Partition) Disk 2,3 (279GB) = Raid-1, Exchange Database (First and Second Storage Groups) ( D: Partition ) Disk 4,5 (68 GB) = Raid-1, Exchange Transaction Logs ( E: Partition ) Setup: Storage Groups : D:\First Storage group\Mailbox database.edb Storage Groups : D:\Second Storage Group\Public Folder Database.edb Transaction Logs : E Partition Problem 1: On our D Partition (Mailbox Database Partition), total size is 279 GB, free space remaining is 64.7 GB, when I select the first storage group and second storage group folders and right click properties they report a size of 165 GB. Mailbox database reports a size of 157GB when right clicked Properties. where as the size displayed in the folder is 164,893,456 KB So, we are missing around 50-54 GB, there is nothing else on these drives, no page file, nothing at all. The partition housing the Transaction logs is reporting the sizes accurately. Any suggestions / fixes on the above ? Problem 2: As you may have already read in Problem 1, the size of the mailbox database is 157GB or 164GB reported; which is not recommended, a) What would you suggest we should do to divide mailboxes in storage groups on this same server ? b) How would we move mailboxes into different storage groups ? c) This is the information store size ? (Am I right in thinking that this is not recommended) d) Having multiple storage groups with one Mailbox DB in each, would that reduce the size of the Information Store? e) Any suggestions / how-to reduce the size of information store ? We didn't install this, we have inherited this - what other recommendations you can make in order to keep ourselves better prepared for any server disaster? We are backing up with Yosemite Backup on RD1000 (320GB) at the moment, which is backing up successfully, flushing the logs daily. We haven't done a test restore YET. I have tried to provide as much info as possible, please let me know if you need further info. Also, we haven't yet faced any problems in mailflow, access speeds, everything is working fine, we have two to five people accessing OWA or Outlook via vpn only. Thanks for your time to read the above - will look forward to your expert suggestions.
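
    On questions (a) and (b), the mechanics in Exchange 2007 are normally a new storage group plus database followed by Move-Mailbox. A rough sketch from the Exchange Management Shell; the server name, paths and batch size below are invented for illustration, not taken from the post:

        # create a third storage group with its own log folder
        New-StorageGroup -Server MAILSRV -Name "Third Storage Group" -LogFolderPath "E:\Third Storage Group"

        # create and mount a mailbox database inside it
        New-MailboxDatabase -StorageGroup "MAILSRV\Third Storage Group" -Name "Mailbox Database 3" -EdbFilePath "D:\Third Storage Group\Mailbox Database 3.edb"
        Mount-Database "MAILSRV\Third Storage Group\Mailbox Database 3"

        # move a batch of mailboxes into it
        Get-Mailbox -Database "MAILSRV\First Storage Group\Mailbox database" | Select-Object -First 40 | Move-Mailbox -TargetDatabase "MAILSRV\Third Storage Group\Mailbox Database 3"

    Note that moving mailboxes leaves white space inside the original .edb; the file on disk only shrinks after an offline defrag, so splitting the mailboxes does not by itself free the space discussed in Problem 1.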

    Read the article

  • Kickstart based installation of RHEL 6.4 from USB

    - by Peter
    I want to set up some brand new servers with RHEL 6.4. The servers do not have a DVD drive, so I have to use USB for the installation. I already have a custom ISO with a kickstart file that I use on servers with DVD flawlessly. I used iso2usb to move the ISO to my USB. When I boot from the USB, the ks file is found, anaconda starts, but then stops with the following error: "The installation source given by device ['sda1'] could not be found. Please check your parameters and try again." Notes:
    - The USB IS the sda.
    - My custom ISO file is renamed to linux.iso by iso2usb and it is present in the root directory of the USB.
    - The kickstart file has the following entry: harddrive --partition=sda1 --dir=/
    Please help me to automate the installation with kickstart.
    Edit 1: This is the anaconda.log file:
        09:01:57,029 INFO : no /etc/zfcp.conf; not configuring zfcp
        09:01:57,259 INFO : created new libuser.conf at /tmp/libuser.4rAbps with instPath="/mnt/sysimage"
        09:01:57,259 INFO : anaconda called with cmdline = ['/usr/bin/anaconda', '--stage2', 'hd:sda1:///images/install.img', '--dlabel', '--kickstart', '/tmp/ks.cfg', '--graphical', '--selinux', '--lang', 'en_US.UTF-8', '--keymap', 'us', '--repo', 'hd:sda1:/']
        09:01:57,260 INFO : Display mode = g
        09:01:57,260 INFO : Default encoding = utf-8
        09:01:59,444 DEBUG : X server has signalled a successful start.
        09:01:59,446 INFO : Starting window manager, pid 1345.
        09:01:59,537 INFO : Starting graphical installation.
        09:01:59,741 INFO : Detected 7968M of memory
        09:01:59,741 INFO : Swap attempt of 7968M
        09:02:00,840 INFO : ISCSID is /usr/sbin/iscsid
        09:02:00,840 INFO : no initiator set
    Edit 2: This is the part of the anaconda log that indicates it found the USB etc.:
        09:01:47,918 INFO : starting STEP_STAGE2
        09:01:47,918 INFO : partition is sda1, dir is //images/install.img
        09:01:47,918 INFO : mounting device sda1 for hard drive install
        09:01:48,005 INFO : Path to stage2 image is /mnt/isodir///images/install.img
        09:01:54,214 INFO : mounted loopback device /mnt/runtime on /dev/loop0 as /tmp/install.img
        09:01:54,214 INFO : Looking for updates for HD in /mnt/isodir///images/updates.img
        09:01:54,214 INFO : Looking for product for HD in /mnt/isodir///images/product.img
        09:01:54,227 INFO : got stage2 at url hd:sda1:///images/install.img
        09:01:54,254 INFO : Loading SELinux policy
        09:01:54,700 INFO : getting ready to spawn shell now
        09:01:54,975 INFO : Running anaconda script /usr/bin/anaconda
        09:01:56,882 INFO : _Fedora is the highest priority installclass, using it
        09:01:56,921 INFO : Running kickstart %%pre script(s)
        09:01:56,922 WARNING : '/bin/sh' specified as full path
        09:01:56,926 INFO : All kickstart %%pre script(s) have been run

    Read the article

  • GeForce 8800GT not even giving basic output

    - by Sam
    My Dad bought a GeForce 8800GT graphics card quite a long time ago now. It has never worked in his PC. Print out from a dxdiag: System Information Time of this report: 4/13/2010, 19:52:40 Machine name: USER-PC Operating System: Windows Vista™ Home Premium (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.091208-0542) Language: English (Regional Setting: English) System Manufacturer: To Be Filled By O.E.M. System Model: To Be Filled By O.E.M. BIOS: Default System BIOS Processor: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (4 CPUs), ~2.3GHz Memory: 2046MB RAM Page File: 1045MB used, 3296MB available Windows Dir: C:\Windows DirectX Version: DirectX 10 DX Setup Parameters: Not found DxDiag Version: 6.00.6001.18000 32bit Unicode DxDiag Notes Display Tab 1: No problems found. Sound Tab 1: No problems found. Sound Tab 2: No problems found. Sound Tab 3: No problems found. Input Tab: No problems found. DirectX Debug Levels Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) Display Devices Card name: ATI Radeon HD 2400 PRO Manufacturer: ATI Technologies Inc. Chip type: ATI Radeon Graphics Processor (0x94C3) DAC type: Internal DAC(400MHz) Device Key: Enum\PCI\VEN_1002&DEV_94C3&SUBSYS_00000000&REV_00 Display Memory: 1012 MB Dedicated Memory: 245 MB Shared Memory: 767 MB Current Mode: 1280 x 960 (32 bit) (75Hz) Monitor: Generic PnP Monitor Driver Name: atiumdag.dll,atiumdva.dat,atitmmxx.dll Driver Version: 7.14.0010.0523 (English) DDI Version: 10 Driver Attributes: Final Retail Driver Date/Size: 8/22/2007 02:43:14, 3021312 bytes That info is from the current card that is installed in it and has been installed since its purchase roughly 3-4 years ago. When I physically install the card I put it into a purple slot on the motherboard that the old card was in (if I go into the device manager and select properties on the current card it confirms that the slot is a "PCI Slot 16 (PCI bus 2, device 0, function 0)") and boot up the computer but get absolutely no output. The screen that we have registers that it is connected to something (by not displaying the screen it does when the cable is unplugged) but just remains blank, no output at all. I recently took the card to my University and one of my friends who is better with hardware issues than I am tried it in his system and it worked perfectly. No issues whatsoever. I do not have a spec list for his system but I could get one if you need it. If you need any more information on this issue I will be happy to supply you with it as I am starting to get very annoyed with this problem ^_^

    Read the article

  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    After much testing and hundreds of tries and hours invested, I decided to consult you experts here. Overview: I want to apply a GPO to our users which will add a specific site to the Trusted Sites in Internet Explorer settings for all users. However, the more I try, the more confusing the results become. The GPO is either applied to one group of users, or to another one. Finally, I came to the conclusion that this weird behavior is caused rather by the poor organization of Users and Groups in Active Directory. As such I want to tackle the problem at its root: redesign the Active Directory Users and Groups. Scenario: There is one Domain Controller, and we use Terminal Services (so there is a Terminal Server as well). Users usually log on to the Terminal Server using Remote Desktop to perform their daily tasks. I would classify the users in the following way:
    - IT: Admins, Software Development
    - Business: Administration, Management
    The current structure of the Active Directory Users and Groups is a result of the previous IT management. The company used Small Business Server, which created multiple default user groups and containers. Unfortunately, the guys working before me left no documentation at all. Now, as I inherit this structure, I am in no man's land. No idea which direction to head first. As you can see, the Active Directory Users and Groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment the guys before me simply copied the same structure. The real question: Where should I start cleaning from, ensuring that I won't totally break the current infrastructure? What is a nice organization for the scenario that I have explained above? Possibly useful info about the current structure:
    - Computers folder: contains the Terminal Services Computers user group. Members: TerminalServer computer located at Server -> Terminalserver OU. Member of: NONE.
    - Foreign Security Principals: EMPTY
    - Managed Service Accounts: EMPTY
    - Microsoft Exchange Security Groups: not sure if needed, our emails are administered by an external service provider
    - Distribution Groups: not sure if needed
    - Security Groups: there are a couple of groups which are needed
    - SBS users: contains all the users
    - Terminalserver: contains only the TerminalServer machine
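
    To make the question concrete, a starting OU layout matching the user classification above could be scripted with the Active Directory module on the 2008 R2 DC. This is only a sketch; the domain name and OU names are invented for illustration:

        Import-Module ActiveDirectory

        # top-level OUs for the two broad groups of users plus the terminal servers
        New-ADOrganizationalUnit -Name "IT" -Path "DC=example,DC=local"
        New-ADOrganizationalUnit -Name "Business" -Path "DC=example,DC=local"
        New-ADOrganizationalUnit -Name "TerminalServers" -Path "DC=example,DC=local"

        # sub-OUs per team, so GPOs (e.g. the Trusted Sites policy) can be linked narrowly
        New-ADOrganizationalUnit -Name "Admins" -Path "OU=IT,DC=example,DC=local"
        New-ADOrganizationalUnit -Name "SoftwareDevelopment" -Path "OU=IT,DC=example,DC=local"
        New-ADOrganizationalUnit -Name "Administration" -Path "OU=Business,DC=example,DC=local"
        New-ADOrganizationalUnit -Name "Management" -Path "OU=Business,DC=example,DC=local"

    GPOs linked to OUs like these (with loopback processing on the Terminal Server OU if the policy must apply during RDP sessions) tend to be easier to reason about than security filtering on the inherited SBS-era groups.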

    Read the article

  • Looking for a new backup solution to replace dying tape drive

    - by E3 Group
    We're running Windows Server 2003 SBS and another machine with Server 2003 Standard on it. The SBS server is about 7 years old running pretty much 24/7 - a HP server of some description. We have an Ultrium 448 cycling LTO2 400GB tapes daily and incrementally backing up approximately 100gb worth of data (20gb C:\ and system state, 40gb exchange, 40gb database for some crap marketing software) on BackupExec 10D. As of 5 months ago, the backups have been consistently failing with IO errors, bad reads and some write errors. When I say consistent, I mean every time and we haven't had a proper backup for the entire 5 months - So if the server explodes tomorrow, 7 years worth of data will just cease to exist. I've only just recently rejoined the company and am looking at rectifying the more concerning problems, so the first thing I did was try a backup to an USB2.0 external drive. It was excruciatingly slow. In fact it was so slow it took 40 hours and it still wasn't finished. I ended up cancelling it and reconfiguring the selections again to reduce file size. This, however, isn't a permanent solution. I concluded that the IO error was either from a faulty tape drive (which has a tape stuck in there right now and not coming out) or from a dying SCSI controller. Neither of them are good news and both are extremely expensive to fix. I'm operating on an extremely low budget so have been looking at outsourcing the backups. A company in Sydney (where I'm located) offer incremental online backups via a NAS. It costs almost double a new tape drive but offers monthly repayments which will let us get through times when cash flow is minimal. It seems like a sweet deal but it is still a little bit pricey. So I'm looking for a cheaper, yet reliable solution. Maybe some in-house NAS or something offsite? The idea is to avoid using tapes. Are there any recommendations for rectifying my current situation? Or are tapes the only way to go? I'm concerned that the server will die one day in the near future and I must be able to restore it to another server with different hardware.

    Read the article

  • Creating a USB stick for installing centos 6.x using DVD1 and DVD2 iso files

    - by user250563
    First, we create 2 partitions on the USB stick, which is, let's say, 16 GB. The first partition is, let's say, only 1 GB and the second partition is the rest of what is available. After we "w" write the changes, the USB now has 2 partitions: one is 1 GB, one is more than 14 GB. So we have sdb1 and sdb2 now. Now we need to turn these partitions into filesystems. Some say I should run these commands after those procedures:
        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2
    but some web pages recommend using:
        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2
    Which is it? So let's say the DVDs are called:
        CentOS-6.4-x86_64-bin-DVD1.iso
        CentOS-6.4-x86_64-bin-DVD2.iso
    So we make a directory:
        mkdir -p /mnt/dvd1
    and then mount it:
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1
    and I suppose we don't make a directory for dvd2 and we don't have to mount it? At this point I do not know what should be done, but I think this step might be next: we make the USB bootable by finding the file named mbr.bin and then writing it there via these commands:
        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on
    In other words we are dd-ing it to 'sdb', not 'sdb1' or 'sdb2', and then we use parted to set the boot flag on for sdb. So far everything looks good? Here is the confusing part: how exactly do I move these iso files to the USB drive? EVERYTHING BELOW IS A GUESS. So at this point I should copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2? Rename it to syslinux? And then inside this syslinux folder there will be a file called... isolinux.cfg? Which should be renamed to syslinux.cfg? And then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy and paste both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere into this USB's sdb2 partition, correct? Almost like a drag and drop kind of a thing? Or do they go into any folders? CentOS' own web site has some instructions but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey I once got this working but things got ruined; I have to do it again and this time take notes.
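
    For whatever it's worth, one commonly described layout looks roughly like the sketch below. This is a hedged guess in the same spirit as the steps above, not the official CentOS procedure; mount points are arbitrary, and the exact syslinux invocation and file locations vary by syslinux version:

        # mount the freshly formatted partitions
        mkdir -p /mnt/usb-boot /mnt/usb-data
        mount /dev/sdb1 /mnt/usb-boot
        mount /dev/sdb2 /mnt/usb-data

        # bootloader files go on the small FAT partition
        mkdir -p /mnt/usb-boot/syslinux
        cp /mnt/dvd1/isolinux/* /mnt/usb-boot/syslinux/
        mv /mnt/usb-boot/syslinux/isolinux.cfg /mnt/usb-boot/syslinux/syslinux.cfg
        syslinux -d syslinux /dev/sdb1          # install the SYSLINUX boot code on sdb1

        # installer images and both ISOs go on the large ext partition
        cp -r /mnt/dvd1/images /mnt/usb-data/
        cp CentOS-6.4-x86_64-bin-DVD1.iso CentOS-6.4-x86_64-bin-DVD2.iso /mnt/usb-data/

        umount /mnt/usb-boot /mnt/usb-data

    In this layout the ISOs sit in the top level of sdb2 (no special folder), and the boot entry in syslinux.cfg then needs to point the installer at them, e.g. with something like repo=hd:sdb2:/ appended to the kernel line.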

    Read the article

  • External USB HD issues with a twist (works on Windows7 but not XP)

    - by Eruditass
    I have this older external USB HD, 160 GB. I was using it to copy my Steam games to another computer. On the source computer, Windows 7 64-bit, everything worked fine. Drive reported no errors, had no hiccups, etc. Plugging it into the Windows XP 32-bit computer, it worked fine for looking through the files, moving files around on it (no real reading/writing, just modifying the filesystem table). However, when copying files from it to my internal HD, after a couple seconds to tens of minutes (seemingly random times), the USB device becomes unrecognized and it reports a delayed write error. Events in system log go like this, chronologically: (number times displayed)xSource (Event ID): "message" 2xdisk (51): An error was detected on device \Device\Harddisk1\D during a paging operation. 1xftdisk (57): The system failed to flush data to the transaction log. Corruption may occur. 1xApplication popup (26): Windows - Delayed Write Failed : Windows was unable to save all the data for the file E:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. 1xntfs (50): {Delayed Write Failed} Windows was unable to save all the data for the file . The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. These repeat for a while, then there is 10+ disk messages or ftdisk messages. Other notes: This occurs on random files at random times. This problem cannot be replicated on the Windows 7 source machine when copying from the HD to a different location on its local disk chkdsk /f was run and found no errors. chkdsk /f/r has the delayed write issue. drive was set to quick removal. Setting to performance in device manager yielded same result I am not writing anything to the USB external drive, so I am not sure why there is even a delayed write error (writing file access times?) local Windows XP was chkdsk'd without problems Windows XP machine has no problems with other USB HD's Various USB ports were attempted Rebooting did not help Occurs with SyncToy as well as windows explorer SMART status is good on both local drive and the external one Lack of gaming is making me cranky

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):
WHY USE A DATABASE instead of a flatfile / other? DB vs. file: if you need...
    - Random read / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transaction / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many / m-m is easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced query
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / dvlp & cache execution plans
    - Interleaved update/read
    - Referential integrity, avoid redundant/missing/corrupt/out-of-sync data
    - Reporting (from an olap or oltp db) / turnkey generation tools
    [Disadvantages:]
    - Important to get right the first time - professional design - but only b/c it's meant to last
    - s/w & h/w cost
    - Usu. over a network, speed issue (best vs. best design vs. local = even then a separate process req's marshalling / netwk layers / inter-p comm)
    - Indices and query processing can stand in the way of simple processing (vs. flatfile)
WHY USE A FLATFILE: if you only need...
    - Sequential row processing only
    - Limited usage, append only (no reading, no master key/update)
    - Only update the record you're reading (fixed length recs only)
    - Too big to fit into memory
    - If local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / store as document by novice - simple format
    - Low design learning curve but high cost later
WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need...
    - Processing a single db/ff record that was imported
    - Known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared, but encapsulates its data manipulation
    - Extreme speed needed / high transaction frequency
    - Random access - but search is dependent on implementation
Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530
What I'd like to know is if anybody could recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
This will have a greater chance of eliciting the change that I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
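
    Since the alternative being argued for is an auto-generated, in-source static lookup table searched by binary sort, here is a minimal sketch of what such generated code tends to look like; the keys and values are invented placeholders, and a real generator would emit them from the master data source:

        # generated file -- do not edit by hand (illustrative sketch only)
        import bisect

        # the generator must emit the keys in sorted order
        _KEYS   = ("AK", "AL", "AR", "AZ", "CA")
        _VALUES = ("Alaska", "Alabama", "Arkansas", "Arizona", "California")

        def lookup(key):
            """Binary-search the static table; returns None if the key is absent."""
            i = bisect.bisect_left(_KEYS, key)
            if i < len(_KEYS) and _KEYS[i] == key:
                return _VALUES[i]
            return None

        print(lookup("AZ"))   # -> Arizona

    Because the table ships as ordinary source, every platform gets exactly the same data at build time, which is the property the post is arguing the shared-database setup keeps failing to deliver.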

    Read the article

  • PDF Corruption When Sending with Microsoft Products

    - by Winner
    I have the same PDF corruption problem in two different offices that I am the tech support for. Office 1: Started in the middle of December. PDF received from outside the office and is viewable with no problems. I have no control over how it is created. If it is forwarded to anyone else, the PDF is corrupted. I have forwarded it to multiple people in the office. I have tried viewing with Reader 8, 9, Sumatra and Fox IT. I have tried forwarding to Gmail and their viewer says it is corrupted. If I save the PDF and create a new email, it will be corrupted when sent using Outlook 2003, Outlook 2007, Microsoft Live Mail and Outlook Express. If I create the email using Thunderbird 3, Gmail or the webclient Iclient for IPSwitch IMail it will not be corrupted. I have confirmed the same results when using our IMail SMTP and also Using Gmail as the SMTP server. To be clear, if I created in Thunderbird, Gmail or Iclient and received on any of the MS products, it will be viewable. This office receives PDFs daily from multiple sources. There is only a small subset that are having this problem. So far they problem PDFs are from two different companies they deal with, but not all of the PDFs are bad. Office 2: PDFs are created by a management system. I'm not sure what engine is used to create them. Same exact same issues. At both offices, I noticed that the file size is wrong. One small PDF the proper file size is 12kb for the PDF when it's viewable, when it shows up corrupted it is only 8kb. We handle the email for both offices. Both are POP servers, not Exchange. IMail was updated after these issues start. I have tried different SMTP servers and it still seems to happen only when using Microsoft products to send. Anyone else having problems with PDFs getting corrupted? Any ideas how to find out a resolution?

    Read the article

  • HP-UX cannot boot from Ignite Tape

    - by Spirit
    We have an HP rp2470 server running HP-UX 11.00, with LVM mirroring. For redundancy we have a second rp2470 with the same hardware (same two processors, same RAM, same two HDDs, same number of LAN cards). I want to clone the first one to the second. For that purpose I am making an Ignite tape with the following command:
        make_tape_recovery -x inc_entire=vg00
    The Ignite tape finishes without problems. When I boot the second server from this Ignite tape, the server starts to boot and the Ignite restore finishes without any errors, only a few notes, which are normal. However, vmunix is not booting: when the restore finishes, it drops to the ISL prompt, and from there I cannot boot /stand/vmunix. I tried to run the recovery shell but had no success. When the recovery shell asks to run frecover to restore critical files, I receive the error:
        frecover(5405): unable to open /dev/rmt/0m
    At first I thought that the problem might be the difference in the firmware versions of the servers: the fw version of the production server is Firmware Version 43.50 and the fw version of the backup server is Firmware Version 42.19. So I did a fw upgrade of my backup server so that both servers are v43.50, and tried a recovery, but again I can't boot the system. Next I made another archive tape with the -I (interactive) flag:
        make_tape_recovery -I -x inc_entire=vg00
    and tried recovery with it; again, no good. I cannot find any errors or warnings in the Ignite log, and I cannot boot HP-UX. I am only at the ISL prompt. This is what I've noticed in the GSP logs:
        ************* SYSTEM ALERT **************
        SYSTEM NAME: mcnfwim1
        DATE: 07/27/2003 TIME: 10:18:49
        ALERT LEVEL: 6 = Boot possible, pending failure - action required
        REASON FOR ALERT
        SOURCE: 8 = I/O
        SOURCE DETAIL: 6 = disk   SOURCE ID: 0
        PROBLEM DETAIL: 0 = no problem detail
        LEDs:  RUN    ATTENTION  FAULT  REMOTE  POWER
               FLASH  OFF        ON     ON      ON
        LED State: Boot Failed. Running non-OS code. Check Chassis and Console Logs for error messages.
        0x00000060860010B0 00000000 00000000 - type 0 = Data Field Unused
        0x58000860860010B0 00006706 1B0A1231 - type 11 = Timestamp 07/27/2003 10:18:49
    And another GSP log:
        Log Entry # 3 :
        SYSTEM NAME: mcnfwim1
        DATE: 07/27/2003 TIME: 10:12:20
        ALERT LEVEL: 6 = Boot possible, pending failure - action required
        SOURCE: 8 = I/O
        SOURCE DETAIL: 6 = disk   SOURCE ID: 0
        PROBLEM DETAIL: 0 = no problem detail
        CALLER ACTIVITY: 1 = test   STATUS: 0
        CALLER SUBACTIVITY: 0B = implementation dependent
        REPORTING ENTITY TYPE: 0 = system firmware
        REPORTING ENTITY ID: 00
        0x00000060860010B0 00000000 00000000 type 0 = Data Field Unused
        0x58000860860010B0 00006706 1B0A0C14 type 11 = Timestamp 07/27/2003 10:12:20
        Type CR for next entry, - CR for previous entry, Q CR to quit.
    Please note that I cannot change anything on the production server. I can only make changes to the backup server. Any help is appreciated.

    Read the article

  • My hosting server is giving a memory allocation error

    - by Usman
    I have hosted my website on a shared Linux hosting server. Although at most 10-15 visitors come to my website daily, my WordPress website gives a 500 Internal Server Error most of the time. I accessed my server error log and the following error is showing:
        [Tue Dec 04 08:57:45 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:45 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:43 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:43 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:42 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:42 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:41 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:41 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:40 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:40 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:32 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:31 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:29 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:29 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:26 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
    Is my hosting service really bad? And is there any solution? Thanks in advance.

    Read the article

  • What presentation software suits my needs?

    - by claws
    Background: I'm teaching biology to 12th grade students. The syllabus I'm teaching is huge. I mean literally, very huge. There is a lot for students to remember. There are no less than 1000 facts (weird names, dates etc.) for students to remember. They'll have to remember all of them; they don't have a choice. The notes I compiled for their learning run to 80 printed pages (just the bullet outline & facts), and that's just one chapter. We have 34 chapters. Also my students are very hardworking; they study up to 8-10 hrs per day (yeah, we are from India :). So, I want to ensure maximum retention by the students at each and every stage (teaching & learning). I'm trying to use as many memory training techniques as possible. I'm trying to incorporate mnemonics, strong visual aids (pictures, 3D animations, real videos etc.), spaced repetition etc. I think MS PowerPoint is not suitable for my needs: there are about 200 slides per chapter, and it's very easy for students to get lost while teaching, because the problem with PowerPoint is that it gives facts (as bullets) but it doesn't exploit the association & organization (concept map) of the content, which helps students learn quickly. I found an amazing piece of software called XMind. You can see the screenshot here. The problem is that it is not as good as PowerPoint in terms of presenting; this software can be used just for concept maps. In the above screenshot, each topic occupies a single slide. I have an image/picture (a detailed, huge picture) and about 5-10 bullet points and probably a video or an animation of some things. And XMind is not good at presenting, in the sense that it doesn't allow me to set what to present after what. I want to present a top-down view, with a slide for each topic. PS: I don't like prezi.com. I tried it but it simply is too confusing for my students. It zooms here and there. I didn't try it extensively, but I've seen a few presentations.

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
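
    To make the policy-server style of hook concrete, here is a minimal sketch of the protocol used by check_policy_service: Postfix sends one name=value attribute per line terminated by an empty line, and expects an action= reply followed by an empty line. The port matches the apolicy example above; the blocked-recipient check is invented for illustration, and this is not a complete or hardened server:

        #!/usr/bin/env python
        # minimal Postfix policy service sketch (the smtpd_access_policy protocol)
        import socket

        BLOCKED = {"user@example.org"}          # placeholder recipient list

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", 10001))          # matches check_policy_service inet:127.0.0.1:10001
        srv.listen(5)

        while True:
            conn, _ = srv.accept()
            data = b""
            while not data.endswith(b"\n\n"):   # the attribute list ends with an empty line
                chunk = conn.recv(4096)
                if not chunk:
                    break
                data += chunk
            attrs = dict(line.split("=", 1)
                         for line in data.decode().splitlines() if "=" in line)
            if attrs.get("recipient", "").lower() in BLOCKED:
                conn.sendall(b"action=REJECT not accepted here\n\n")
            else:
                conn.sendall(b"action=DUNNO\n\n")    # DUNNO = no opinion, continue other restrictions
            conn.close()

    The attributes delivered (client_address, sender, recipient, protocol_state, and so on) depend on where in smtpd_*_restrictions the check is placed, which is exactly the "which stage can this hook see" question the post is asking about.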

    Read the article

  • Silverlight? WPF? Or Windows Forms?

    - by Amit
    Now that Silverlight 4.0 has been released alongside a new WPF, I am kind of confused by these technologies: Silverlight? WPF? Windows Forms? The main goals we want to achieve for this BIG business project are the following:
    - Performance
    - Security
    - Platform independence
    If I consider all three points above, then only Silverlight is an option, as I don't want people buying an emulator on MacOS for WPF or Windows Forms. Now, how good is Silverlight for business applications? I was completely against it when Silverlight 2.0 was on the market, but now it is Silverlight 4.0 and they have provided many new features (though still basics) that are required in any challenging business application. Comparing Silverlight and WPF:
    - Silverlight and WPF are both very new technologies, and if I had to compare the two then I'd prefer WPF because it can be considered stable and mature. But it is not the same as Windows Forms.
    - If I go with Silverlight, then I can be sure of having to keep updating to the latest version of Silverlight. I remember when we were developing software for version 2.0 we had to create our own framework with dynamically loaded DLLs, and then the navigation concept. But everything changed once Silverlight 3.0 came out. I don't want that to happen with this new product.
    - If we go with WPF, then we don't get platform independence.
    Now, why not just focus on building for WPF and then move to Silverlight? As someone (Tim?) from Microsoft has said, the idea is to make Silverlight as close to WPF as possible. But if that is the case, then why is the XAML structure different? I won't be convinced by being told that the .NET framework for SL is too small... well, is the difference coming from the namespaces? I was searching on this subject and found "Microsoft WPF-Silverlight Comparison Whitepaper v1.1.pdf". This guide is very good; it gives you the ins and outs of how we can build common apps that run on both. But again, it is comparing Silverlight 2 and not 4. I am sure many architects / developers / project managers must be facing similar kinds of questions on their premises and want to initiate this discussion, if it has not been started :). We've still got 2 weeks to make this decision, so I'm expecting everyone to participate, gurus!

    Read the article

  • DHCP and Router load testing

    - by John H
    I manage a campground wifi network with an average of 10 - 60 active users. I have encountered issues where the router starts acting flaky (failing to assign DHCP or failing to pass traffic) without any clear warning (low CPU utilization, etc). I upgraded the router a couple of times and ended up with a Netgear ProSafe VPN router that seems to be handling the traffic. The interesting thing is that the Netgear has lower specs than the Buffalo router it replaced, indicating the issue is with the DD-WRT firmware. While I'll be pursuing this issue on the dd-wrt forums, I need a way to test routers. My vision is having 1-2 computers connected on the LAN side and 1-2 computers connected on the WAN side. I want the LAN computers to be generating various types of traffic and connections, as well as requesting DHCP addresses. A few notes:
    - The wireless aspect should be a non-issue. Most clients would connect to a wireless bridge and come into the router through a network cable.
    - I had a monitoring server with Nagios running check_dhcp against the router. This server was connected directly by a network cable, eliminating wifi bridges and other devices from the equation.
    - This question is somewhat related, but not exactly: Load testing wireless LANs
    - I am going to look at IxChariot.
    - While I'd ideally like to use one computer on each side running Linux and preferably free software, I can entertain running Windows, multiple computers, or non-free software.
    - Total bandwidth doesn't seem to be the issue. I can transfer large files all day. Even on the busiest days, the users seemed to only pull ~5 Mbps.
    - There is very little "LAN to LAN traffic" and most of it might never have reached the main router.
    - The issue I need to test for seems to be tied to active users, or more appropriately, active sessions. I know active users or active clients is a meaningless term from a router standpoint and wouldn't mind having more appropriate terms to use.
    Summary: I need a way to test a router's ability to handle traffic from a large number of clients. My current strategy is to purchase a router, deploy it, and see how it fails in the live environment.
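
    In case it helps to make "various types of traffic and connections" concrete, here is a rough sketch of the kind of load generation that can be run from a LAN-side Linux box against a host on the WAN side. It only approximates many active sessions; the host name and counts are placeholders, and it does not emulate distinct DHCP clients:

        #!/bin/bash
        # sustained throughput plus many parallel TCP streams
        # (an iperf server is assumed to be running on the WAN-side box)
        iperf -c wan-test-host -P 30 -t 600 &

        # lots of short-lived HTTP sessions to exercise NAT / connection tracking
        for i in $(seq 1 200); do
            ( while true; do curl -s -o /dev/null http://wan-test-host/; sleep 1; done ) &
        done

        wait

    Exercising the DHCP side is harder to fake from a single machine: tools such as dhcping can fire individual test requests at the server, but simulating dozens of distinct clients generally needs several hosts or purpose-built test gear like the IxChariot family mentioned above.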

    Read the article

  • Why is Windows Task Scheduler trying to launch multiple instances?

    - by Paul H
    We have a number of Windows scheduled tasks that run on one Server 2008 webserver (not R2) which is in a cluster. We recently moved from the original webserver cluster to a new webserver cluster (Server 2008, not R2). The new webserver (in the cluster) running the Windows tasks is set up the same as the original, we believe. BUT we now find that on the new Windows server the Task Scheduler seems to want to instantly start each task three times. If we set the option to queue up a new task we get:
    Event ID 324: Task Scheduler queued instance "{9a1a8411-b042-45ff-8e6b-89874df230d7}" of task "\Client Reporting" and will launch it as soon as instance "{2bcc3df6-ea3b-4453-90c2-75b8b1946388}" completes.
    If we set the option to stop an existing task we get:
    Event ID 323: Task Scheduler stopped instance "{e685a910-b32b-414e-85fd-96bbe54314a2}" of task "\Client Reporting" in order to launch new instance "{4db66265-1f51-4ede-8535-ac7c3cb5c4c1}".
    Ticked settings:
    - Allow task to be run on demand.
    - Run task as soon as possible after a scheduled start is missed.
    - Stop the task if running for longer than 1 hour.
    - If the running task does not end when requested, force it to stop.
    - Start the task only if the computer is on AC power.
    - Stop the task if the computer switches to battery power.
    Selected option: If the task is already running - stop the existing instance.
    Note: We moved the tasks from one server to another in the cluster to see if it was the Task Scheduler on the particular server we'd picked causing the problem. Same behaviour. Could it be something to do with the build of the new servers? We have very similar tasks set up on another server cluster that work fine without all this multiple starting. Comparing those tasks to the ones here, there does not seem to be anything obviously different in terms of the settings available to us through the options within the Task Scheduler.
    Trigger: The task is scheduled to be triggered daily, once an hour, and to be stopped if it exceeds this time.
    Action: Runs a .bat file.
    What could be causing this, and where can we look to see what logic is causing the tasks to start multiple times in this way?
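    One way to compare the two environments (a hedged suggestion, not part of the original setup): dump the full task definition on both the old and new servers with schtasks and diff the output, since subtle differences in triggers or the multiple-instances policy do not always show up in the GUI. Paths and the old server name below are placeholders:

        rem Run on both the old and the new server, then compare the two files
        schtasks /Query /TN "Client Reporting" /V /FO LIST > C:\temp\client-reporting-task.txt
        rem Compare against the copy taken on the old server (placeholder UNC path)
        fc C:\temp\client-reporting-task.txt \\oldserver\c$\temp\client-reporting-task.txt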

    Read the article

  • Why do many applications close after opening a document or performing specific actions?

    - by Mohsen Farjami
    I have some encrypted PDF files that have no problem, and on my previous Windows installation I could open them easily with Adobe Reader 9.2 and other PDF readers. But now I can only open non-encrypted PDF files and one encrypted file with Adobe Reader; every time I open almost any other encrypted PDF, it closes itself. Also, when I tried to search a folder for a keyword with Foxit Reader, it closed once as well. This is not specific to Adobe Reader, because I have the same problem with Word 2007: when I open a document, sometimes it closes instantly, sometimes it closes after a few seconds, and sometimes it is stable. My Windows installation is fresh; I installed it a few days ago. I have ESET Smart Security 5.2 and I updated it today. OS: XP Pro SP3, RAM: 3 GB, CPU: 2 GHz, HDD: 320 GB.
    My installed applications: Adobe AIR, Adobe Flash Player 11 ActiveX, Adobe Flash Player 11 Plugin, Adobe Photoshop CS4, Adobe Reader 9.2, Atheros Wireless LAN Client Adapter, Babylon, Bluetooth Stack for Windows by Toshiba, CCleaner, Conexant HD Audio, Dell Touchpad, ESET Smart Security, Farsi (101) Custom, Foxit Reader, Framing Studio 3.27, Google Chrome, Hard Disk Sentinel PRO, HDAUDIO Soft Data Fax Modem with SmartCP, Intel(R) Graphics Media Accelerator Driver, IrfanView (remove only), Java(TM) 6 Update 18, K-Lite Mega Codec Pack 8.8.0, Microsoft .NET Framework 2.0 Service Pack 1, Microsoft .NET Framework 3.0 Service Pack 1, Microsoft .NET Framework 3.5, Microsoft Data Access Components KB870669, Microsoft Office 2007 Primary Interop Assemblies, Microsoft Office Enterprise 2007, Microsoft User-Mode Driver Framework Feature Pack 1.0.0 (Pre-Release 5348), Mozilla Firefox 7.0.1 (x86 en-US), Notepad++, Office Tab FreeEdition 8.50, ParsQuran, PerfectDisk 12 Professional, Registry First Aid, RICOH R5C83x/84x Flash Media Controller Driver Ver.3.54.06, Sahar Money Manager 2.5, Stickies 7.1d, The KMPlayer (remove only), TurboLaunch 5.1.2, Unlocker 1.9.1, USB Safely Remove 4.2, Virastyar, Visual Studio 2005 Tools for Office Second Edition Runtime, Winamp, Windows Internet Explorer 8, Windows Media Player 11.0.5358.4826, Windows XP Service Pack 3, WinRAR 4.11 (32-bit), WorkPause 1.2, Z Dictionary.
    My startup applications: WorkPause, USB Safely Remove, TurboLaunch, SunJavaUpdateSched, Stickies, rfagent, Persistence, ParsQuran Daily Verse, ITSecMng, IgfxTray, HotKeysCmds, Hard Disk Sentinel, egui, disable shift+delete, CTFMON.EXE, Bluetooth Manager, Babylon Client, Apoint, AdobeCS4ServiceManager, Adobe Reader Speed Launcher, Adobe ARM.
    What should I do to solve it? If you recommend installing Windows again, what guarantees that it won't happen again?

    Read the article

  • Has anyone achieved true differential sync with rsync in ESXi?

    - by Julius
    Berate me later on the fact that I'm using the service console to do anything in ESXi... I've got a working rsync binary (v3.0.4) that I can use in ESXi 4.1U1. I tend to use rsync over cp when copying VMs or backups from one local datastore to another local datastore. I've used rsync to copy data from one ESXi box to another, but that was just for small files. I'm now trying to do true differential syncs of backups taken via ghettoVCB between my primary ESXi machine and a secondary one. But even when I do this locally (one datastore to another datastore on the same ESXi machine), rsync appears to copy the files in their entirety. I've got two VMDKs totaling 80 GB in size, and rsync still takes anywhere between 1 and 2 hours, even though the VMDKs aren't growing that much daily. Below is the rsync command I'm executing. I am copying locally because ultimately these files will get copied onto a datastore created from a LUN on a remote system; it's not an rsync that will be serviced by an rsync daemon on a remote system.
    rsync -avPSI VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/ --stats --itemize-changes --existing --modify-window=2 --no-whole-file
    sending incremental file list
    >f..t...... VM-flat.vmdk 42949672960 100% 15.06MB/s 0:45:20 (xfer#1, to-check=5/6)
    >f..t...... VM.vmdk 556 100% 4.24kB/s 0:00:00 (xfer#2, to-check=4/6)
    >f..t...... VM.vmx 3327 100% 25.19kB/s 0:00:00 (xfer#3, to-check=3/6)
    >f..t...... VM_1-flat.vmdk 42949672960 100% 12.19MB/s 0:56:01 (xfer#4, to-check=2/6)
    >f..t...... VM_1.vmdk 558 100% 2.51kB/s 0:00:00 (xfer#5, to-check=1/6)
    >f..t...... STATUS.ok 30 100% 0.02kB/s 0:00:01 (xfer#6, to-check=0/6)
    Number of files: 6
    Number of files transferred: 6
    Total file size: 85899350391 bytes
    Total transferred file size: 85899350391 bytes
    Literal data: 2429682778 bytes
    Matched data: 83469667613 bytes
    File list size: 129
    File list generation time: 0.001 seconds
    File list transfer time: 0.000 seconds
    Total bytes sent: 2432530094
    Total bytes received: 5243054
    sent 2432530094 bytes received 5243054 bytes 295648.92 bytes/sec
    total size is 85899350391 speedup is 35.24
    Is this because ESXi is itself making so many changes to the VMDKs that, as far as rsync is concerned, the entire file has to be retransmitted? Has anyone actually achieved a true differential sync with ESXi?
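    One variation that may be worth testing (a hedged sketch, not something confirmed on ESXi): have rsync update the existing destination files in place, so its delta-transfer algorithm rewrites only the changed blocks instead of building a whole temporary copy of each VMDK on the target datastore. Note that rsync 3.0.x does not allow --inplace to be combined with --sparse (-S), so -S is dropped here:

        # Hypothetical variant for local datastore-to-datastore copies
        rsync -avPI --inplace --no-whole-file --existing --modify-window=2 \
              --stats --itemize-changes \
              VMBACKUP_2011-06-10_02-27-56/ VMBACKUP_2011-06-01_06-37-11/

    Whether this actually reduces the 1-2 hour runtime would still depend on how much of each VMDK ESXi has touched between backups.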

    Read the article

  • What Are All the Variables Necessary to Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing from the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like the blackbox flight recorder on an aircraft, that gathers information used to profile server performance that is missing from the hit-tracking log formats: keep-alive status, remote port, child processes, bytes sent, etc.
    LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox
    I'm trying to recreate as much of the format as possible for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names.
    access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/O?/B?'
    Here's a table of the variables I've been able to map from the Nginx documentation.
    %a = $remote_addr - The IP address of the remote client.
    %S = $remote_port - The port of the remote client.
    %X = ? - Keep-alive status.
    %t = $time_local - The start time of the request.
    %r = $request - The first line of the request, containing the method verb, path, and protocol.
    %s = ? - Status before any redirections.
    %>s = $status - Status after any redirections.
    %{pid}P = $pid - The process id.
    %{tid}P = N/A - The thread id, which is not applicable to Nginx.
    %T = ? - The time in seconds to handle the request.
    %D = ? - The time in milliseconds to handle the request.
    %I = ? - The count of bytes received including headers.
    %O = ? - The count of bytes sent including headers.
    %B = ? - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.
    Looking for help filling in the missing variables, or confirmation that the missing variables are, in fact, unavailable in Nginx.
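    A possible partial mapping, using standard variables from the stock Nginx log module (a hedged sketch; the "-" fields mark items with no direct Nginx equivalent that I know of, and availability may vary by version), would look something like this:

        # Sketch only: $request_time stands in for %T/%D, $request_length for %I,
        # $bytes_sent for %O and $body_bytes_sent for %B.
        log_format blackbox '$remote_addr/$remote_port - [$time_local] "$request" '
                            '-/$status $pid/0 $request_time/$request_time '
                            '$request_length/$bytes_sent/$body_bytes_sent';
        access_log /var/log/nginx/blackbox.log blackbox;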

    Read the article

  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing from the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like the blackbox flight recorder on an aircraft, that gathers information used to profile server performance that is missing from the hit-tracking log formats: keep-alive status, remote port, child processes, bytes sent, etc.
    LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox
    I'm trying to recreate as much of the format as possible for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names.
    access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent'
    Here's a table of the variables I've been able to map from the Nginx documentation.
    %a = $remote_addr - The IP address of the remote client.
    %S = $remote_port - The port of the remote client.
    %X = ? - Keep-alive status.
    %t = $time_local - The start time of the request.
    %r = $request - The first line of the request, containing the method verb, path, and protocol.
    %s = ? - Status before any redirections.
    %>s = $status - Status after any redirections.
    %{pid}P = $pid - The process id.
    %{tid}P = N/A - The thread id, which is not applicable to Nginx.
    %T = ? - The time in seconds to handle the request.
    %D = $request_time - The time in milliseconds to handle the request.
    %I = ? - The count of bytes received including headers.
    %O = $bytes_sent - The count of bytes sent including headers.
    %B = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.
    Looking for help filling in the missing variables, or confirmation that the missing variables are, in fact, unavailable in Nginx.
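    A few more candidates for the remaining fields (hedged; not verified against every Nginx version): $request_time is reported in seconds with millisecond resolution, so it can stand in for %T as well as %D; $request_length counts the bytes received from the client including the request line, headers, and body, roughly matching %I; and for keep-alive status, $connection_requests (Nginx 1.1.18 or later) at least shows how many requests have been served on the current connection. Folding those in gives something like:

        # Sketch only: "-" marks the pre-redirect status, for which I know of no
        # direct Nginx equivalent; $connection_requests needs Nginx >= 1.1.18.
        log_format blackbox '$remote_addr/$remote_port $connection_requests [$time_local] "$request" '
                            '-/$status $pid/0 $request_time/$request_time '
                            '$request_length/$bytes_sent/$body_bytes_sent';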

    Read the article

< Previous Page | 381 382 383 384 385 386 387 388 389 390 391 392  | Next Page >