Search Results

Search found 14483 results on 580 pages for 'major mode'.

  • Windows 8 disk errors

    - by wrongusername
    So yesterday, I forcibly restarted my Windows 8 PC. VMware Workstation was having some trouble with the guest Linux Mint OS. It hadn't been responding for some time, so I tried suspending it on September 28th, or perhaps even before. It wouldn't suspend -- I forget what the window looked like, but all options in the power menu (i.e. "Shutdown," "Power Off," and options like that) were disabled. I eventually killed the VMware application through Task Manager, though I was too lazy to hunt down the running virtual machine itself, and decided to kill it by just shutting down my PC entirely. The PC wouldn't shut down for quite some time after the monitor went blank, so I did a cold reset by holding the power button. I then powered it on again and Windows briefly gave me some message like "Search for KERNEL_STACK_INPAGE_ERROR." Windows then started diagnosing some problems and gave me the message, "Repairing disk errors. This might take over an hour to complete." That was yesterday night, and I went to sleep without waiting for it to finish. This morning, it said that the repair failed, and that the log was at C:\windows\system32\LogFiles\srt\srtTrail.txt (as I remember it -- I don't have the note with the exact path on me right now). It gave me some other options to troubleshoot, such as resetting Windows (files and settings still intact, but programs not installed through the app store will be erased). That didn't work (no error message given, I was just told it didn't work). When I tried rebooting in safe mode, the same diagnosis process began, except that this time it didn't bother with the automatic repairs. So I tried using the command prompt to see if my files are at least still there. I was on the X drive, and I couldn't cd to the C drive. I couldn't find my folder under Users (of course?), and couldn't find the srt folder under LogFiles either. I am not sure what to try next. I have backed up everything, but to the cloud, so if absolutely necessary I can start off with a fresh copy of Windows and restore all my data, though it would be a hassle. Any thoughts on what might be wrong or what I can try? My computer was purchased just this June, so the hard drive should still be pretty new.
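    A minimal sketch of what could be tried from the recovery command prompt, assuming the recovery environment boots to X: and the Windows volume may have been remapped to another letter (list the volumes first; D: below is illustrative):

        X:\> diskpart
        DISKPART> list volume
        DISKPART> exit
        X:\> chkdsk D: /r
        X:\> dir D:\Users

    list volume shows which letter the Windows partition actually got inside the recovery environment (it is often not C:, which would explain the failed cd); chkdsk /r scans for bad sectors and recovers readable data; the dir confirms whether the profile folders survive. If chkdsk reports unrecoverable sectors on a drive this new, SMART diagnostics from the vendor's tools would be the next stop.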

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost-everyday table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which allows access to some tables from our website) we started to get index corruption problems. And we don't know whom to blame. We've tried to exclude all possible causes of this corruption. Now all users work in terminal mode - so no network problems can cause it, and OpLocks also can't be a reason. We changed hardware, network cards, switches, reinstalled the server and even moved to a new dedicated server. The only thing we can't exclude is ADS - because it should be working. Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the other change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data and ADS uses cached data? Or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are some special settings in Windows Server for working with DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check this? Sometimes we get an index crash twice a day, sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually there are 30-50 users working simultaneously during working hours), so it's almost zero load on the server. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA, though it performs only SELECT queries. ADS does UPDATE/INSERT queries too, but less frequently (not during regular synchronizations, but only when an order is placed by a website visitor). Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue about it. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption
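    One quick check that might narrow this down: see exactly which processes hold the table and index files open when corruption happens. A sketch using the Sysinternals handle utility (the file names are hypothetical; substitute one of the affected DBF/CDX pairs):

        handle.exe orders.dbf
        handle.exe orders.cdx

    Each output line names a process holding a handle to the file, which at least confirms whether ADS and the terminal users' application instances really do have the same index file open concurrently - the precondition for the stale-cache overwrite scenario described above.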

  • Resetting root password on Fedora Core 3 - serial cable access only

    - by Sensible Eddie
    A little background: We have an old rackmount server running a customised version of Fedora, manufactured by a company called Navaho. The server is a TeamCAT, running some proprietary rubbish called Freedom2. We have to keep it going - the alternative is extraordinarily expensive, and the business is not likely to be running much longer to justify changing things. Through one means or another, it has fallen upon me to try and resolve our lack of root access. The previous admin has fallen under the proverbial bus, and nobody has any clue. We have no access to the root account for this server. ssh is running on the server, and there is one account, admin, that we can log in with; however, it has no permission to do anything (ironic...). The only other way into the server is with a null-modem serial cable. This works... up to a point. I can see the BIOS, I can see the post-BIOS screen, and then I see "Starting grub", followed by another screen with about four lines of Linux information, but then the output stops. The server continues booting, and all services come online after around two minutes, but the serial terminal displays no more information. I understand it is possible to put Linux into "single user mode" to reset a root password, but I have no idea how to do this beyond trying to interrupt it at the grub stage listed above. When I tried that, it just froze. It was almost as if grub had appeared (since the server did not continue booting) but I couldn't see it on the serial terminal - which made me think maybe the grub screen has different serial settings? I don't know... it's the first time I've ever used serial for access! A friend of mine suggested trying to use a Fedora boot CD. We could boot from USB, so something along those lines is possible, but again we can only see what's going on through the serial terminal, so it might not be achievable. Does anyone have any suggestions for things I can try? I appreciate this is a bit of a long shot, but any assistance would be invaluable. UPDATE 1 - 28/8/12 - we will be making some attempts on this today and will post further details later!
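    For reference, a sketch of how GRUB legacy (the version Fedora Core 3 shipped) is made visible over serial, and how single-user mode is normally reached; the unit and speed are assumptions and must match the BIOS serial redirection settings. In /boot/grub/grub.conf:

        serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
        terminal --timeout=10 serial console

    With the menu actually visible on the terminal, the usual password-reset procedure is: select the boot entry, press 'a' to append to the kernel line, add ' single' (or ' init=/bin/bash'), boot, then run passwd. The catch is that editing grub.conf itself requires root, so with no root at all the rescue-media route is more realistic: boot a rescue image (adding console=ttyS0,9600 at its boot prompt so it talks to the serial terminal), mount the root filesystem, chroot into it, and run passwd from there.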

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100 GB, "-T news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache - only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space; in those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
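    For comparison, a sketch of running htcacheclean continuously instead of in one-shot mode, using the option spelling from Apache 2.2's htcacheclean (the interval and limit are assumptions to tune):

        htcacheclean -d60 -n -t -i -p /htcache -l 15G

    -d60 runs it as a daemon that wakes every 60 minutes, -n lowers its scheduling/IO priority, -i limits work to intervals where the cache was actually modified, and -t removes directories left empty. Whether a nice'd daemon can keep up with 16 GB/day on slow iSCSI is exactly the open question, but daemon mode at least amortizes the expensive full tree scan instead of paying it from scratch on every invocation.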

  • How to auto advance a PowerPoint slide after an exit animation is over?

    - by joooc
    A PowerPoint entrance animation set up with "Start: With Previous" starts right when a new slide is advanced to. However, if you set up an exit animation the same way, it doesn't start with the slide's ending sequence. Instead, the "Start: On Click" trigger needs to be used, and after your exit animation is over you still need one extra click just to advance to the next slide. Workarounds for this are obvious: create a duplicate slide, make the ending animations of the original slide the starting animations of the duplicate and let them be followed by whatever you want; or create a transition slide with those ending animations only and set its transition to "Advance Slide: Automatically after [the time it takes your animations to finish]". These workarounds will make it work for your audience, visually. However, they affect slide numbering, which you might need to adjust accordingly, and/or require duplicating content changes. If you are the only one creating and using your presentation, this might be just fine. But if you are creating a presentation collaboratively with three other people and don't even know who will be the presenter at the end, you can mess things up. Let's be specific: most of my slides have a 0.2s fly-in entrance animation applied to blocks of content coming from the right, bottom or left. Advancing to the next slide, I want them to fly out in another 0.2s exit animation, followed by the new slide's 0.2s fly-in entrance animation of the new blocks. The swapping of the blocks should be triggered while advancing to the next slide, as usual. As mentioned, I'm not able to achieve this without one extra click between the slides. I wrote a VBA script that should start together with an exit animation and will auto-advance the slide after 0.3s, when the exit animation is over. That way I should get rid of those extra clicks which are needed right now:

        Sub nextslide()
            ' wait 0.3s (long enough for the 0.2s exit animation to finish)
            iTime = 0.3
            Start = Timer
            While Timer < Start + iTime
                DoEvents
            Wend
            ' then advance to the next slide
            With SlideShowWindows(1).View
                .GotoSlide (ActivePresentation.SlideShowWindow.View.Slide.SlideIndex + 1)
            End With
        End Sub

    It works well when bound to a box, button or another object. But I can't make it run on a single click (anywhere on the slide) so that it could start together with the exit animation's on-click trigger. Creating a big transparent rectangular shape over the whole slide and binding the macro to it doesn't help either: by clicking it you only get the macro running, and the exit animation is not triggered. Anyway, I don't want to bind the macro to any workaround object other than the slide itself. Does anyone know how to trigger a PowerPoint VBA script on a slide's onclick event? Does anyone know a secret setting that will make the exit animation work as expected, i.e. animating right before exiting a slide while transitioning to the next one? Does anyone know how to beat this dragon? Thank you!
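    One angle that might at least reduce the collaborative pain: the "Automatically after" setting of the transition-slide workaround can be applied by a macro instead of by hand, so co-authors don't have to remember the magic number. A hedged sketch using the standard object model (run it with the dedicated transition slides selected; applying it to every slide would auto-advance the whole deck, which is not what you want):

        Sub SetAutoAdvanceOnSelectedSlides()
            Dim s As Slide
            For Each s In ActiveWindow.Selection.SlideRange
                With s.SlideShowTransition
                    .AdvanceOnTime = msoTrue
                    .AdvanceTime = 0.3   ' 0.2s exit animation plus a small buffer
                End With
            Next s
        End Sub

    SlideShowTransition.AdvanceOnTime and .AdvanceTime are the same properties the "Advance Slide: Automatically after" UI writes to.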

  • Possible HDD malfunction. Need help in diagnosing

    - by Protheus
    Today, while using my PC as I have for almost 4 years, I experienced the following: while opening a new tab in the Opera browser, the screen froze. Music (AIMP 3) continued to play for about 5 minutes and then stopped too. I tried Ctrl+Alt+Del, but the Win7 lock screen didn't appear. Caps/Scroll/Num Lock didn't switch the diodes on the keyboard. I rebooted my PC and saw that the BIOS suggested I enter its settings or load defaults. I chose defaults. It didn't see a proper boot device (the old faithful "insert proper boot..." something). After a second reboot it said that there is no ExpressGate installed (which I turned off in the BIOS years ago). I went into the BIOS settings to turn off ExpressGate and checked the configs: the time was not reset, all hard drives are present, temperatures and O.C. settings are nominal (no O.C.). I inserted my Win7 install disk to try recovery. It took awfully long to load (a few minutes) and didn't see the current installation. The PC has been utilized 24/7 for almost all these years. Hardware configuration: ASUS P5Q WS; Core 2 Quad Q9300 (2.5 GHz, no O.C.); MSI GeForce GTX 460; 4x2 GB GeIL EVO 2 (AFAIR); Seagate something 750 GB (4 years as system HDD, 24/7); WD 1 TB (for random stuff, 5 y.o.); Hitachi 500 GB (for even more random stuff, 6 y.o.); NEC DVDRW (ALL DISKS ARE SATA); Cooler Master Silent Pro 700W. Software: Windows 7 AND Kubuntu on the same drive with the GRUB loader. Sorry, I can't remember the HDD models and can't see them right now, but I think they aren't relevant anyway. My idea is that due to some system error or hard drive glitch I've wrecked my primary HDD's MBR. Nevertheless, I don't exclude the possibility of another failure. Might it be the motherboard or its SATA controller? I doubt it, because all drives are seen in the BIOS and I could load from DVD. Maybe GRUB got bugged somehow, although I don't see how that's possible from Windows. But I did install Kubuntu from Windows (I wasn't myself then); maybe GRUB wrote itself to some Windows partition and got rewritten in the process? Right now I am at work with my flash drive with me, and I need some advice on how to fix the MBR, or to hear that it's not the MBR. I'm going to buy a new HDD (Hitachi 7K2000) because I think my current HDD is compromised and it's unsafe to use it as a system drive, especially 24/7.
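    Two sketches that might help separate "wrecked MBR" from "dying disk"; both use standard tools, and the device name is an assumption. From the Windows 7 install disk's recovery command prompt, the boot code can be rewritten (note this overwrites GRUB, so Kubuntu's bootloader would need reinstalling afterwards):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    And from a Kubuntu live session, SMART data usually says quickly whether the drive itself is failing (assuming the system disk appears as /dev/sda):

        sudo smartctl -H -a /dev/sda

    Climbing reallocated or pending sector counts are the classic sign of a compromised drive; a clean SMART report points back toward the MBR/GRUB theory.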

  • Local DNS for testing websites using mobile devices

    - by Morpheu5
    Hi. I have no idea where to start, so sorry in advance if this topic has already been discussed. I usually develop web sites using my laptop as a development server, and recently I needed to test a web site using various mobile devices that connect via wifi. Having no real AP, I set up an ad-hoc network using my laptop's wireless card, and the devices can correctly browse the Internet and access the laptop's web server. The setup is as follows: subnet: 192.168.1.0/24; gateway to the Internet (wired ADSL router/modem): 192.168.1.1; laptop: 192.168.1.64 (eth0, wired interface connected to the gateway) and 192.168.1.32 (eth1, wifi interface, somewhat bridged to eth0); mobile devices (same for all, I only use one of them at any time for simplicity): 192.168.1.11 with default gw 192.168.1.1. Now, if I open either 192.168.1.32 or 192.168.1.64 from the mobile devices, I correctly get the default host of my Apache configuration. However, I usually work with virtual hosts for many practical reasons, one of which is Drupal's peculiar implementation of multi-sites. For those who don't know how this works, Drupal takes the request's hostname and searches its sites/ subdirectories for an appropriate configuration file. So, for example, suppose I request www.example.com; then Drupal would search for a config file in the following directories: sites/www.example.com/, sites/example.com/, sites/com/, sites/default/. So I decided to adopt the following style of virtual hosts: if the website I'm working on will be accessible via www.example.com, I set up a sites/www.example.com/ directory and create a virtual host for local.www.example.com so Drupal has no trouble finding it. I've been told this is suboptimal from a DNS point of view, since I'd have to create an authoritative entry for example.com and turn BIND on only when I'm supposed to access the local copy, which is weird. However, if this is the only path I can follow, I still have some problems with BIND's configuration, as I couldn't find any guide that tells me in a clear, noob-friendly way how to set up such an entry. On the other hand, I was wondering if I could set up an authoritative entry for local, so I could access www.example.com.local and tell Apache in some way (which I don't even know is possible) to put www.example.com instead of www.example.com.local in the relevant environment variable. Anyway, I have one last problem, sort of: when I launch BIND in debug mode with high verbosity and make 192.168.1.32 the primary DNS for the devices, the output doesn't say anything about requests being made from the devices to BIND, so I'm not even sure it comes into play. As you can see, I'm a complete noob at these matters, but I'm eager to learn, so any help/pointers will be appreciated.
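    For the record, a minimal sketch of the kind of authoritative entry being asked about, in BIND 9 syntax (file paths, TTLs and the serial are assumptions). Declaring the zone as www.example.com rather than example.com keeps the override narrow. In named.conf.local:

        zone "www.example.com" {
            type master;
            file "/etc/bind/db.www.example.com";
        };

    and in /etc/bind/db.www.example.com:

        $TTL 300
        @   IN  SOA www.example.com. admin.example.com. ( 1 3600 600 86400 300 )
            IN  NS  www.example.com.
            IN  A   192.168.1.64

    A quick self-test from the laptop is dig @192.168.1.32 www.example.com; if that returns 192.168.1.64 but the devices still resolve the public address, the issue is the devices' DNS settings (ad-hoc networks often hand out no DHCP options, so the DNS server has to be set manually on each device) rather than the zone itself.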

  • OpenSSH (Windows) does not forward X11

    - by Shulhi Sapli
    I'm running Ubuntu 13.04 in a VM and I wanted to do X11 forwarding to my host (Win 8); so far it works fine using PuTTY and the Xming server for Windows. But I am curious why it doesn't work if I use the OpenSSH binaries (they come together with Git for Windows). This is what I've done so far: ssh -X [email protected] (also tried with -Y), then ran gedit, but received a "Cannot open display" error. echo $DISPLAY came out empty. So I tried export DISPLAY=localhost:0.0, but it still won't work. The DISPLAY environment variable that I set is exactly what it is when it runs with PuTTY. I also tried changing DISPLAY to 192.168.2.3:0.0 and other display numbers as well, but still it won't work. Of course I could just use PuTTY to make it work, but I was wondering why the OpenSSH binaries do not work. I have enabled all required settings in both /etc/ssh/ssh_config and /etc/ssh/sshd_config. If I run with the -v option, this is what I get:

        F:\SkyDrive\Projects> ssh -X -v [email protected]
        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to 192.168.2.3 [192.168.2.3] port 22.
        debug1: Connection established.
        debug1: identity file /c/Users/Shulhi/.ssh/identity type -1
        debug1: identity file /c/Users/Shulhi/.ssh/id_rsa type -1
        debug1: identity file /c/Users/Shulhi/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4
        debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_4.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '192.168.2.3' is known and matches the RSA host key.
        debug1: Found key in /c/Users/Shulhi/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/Shulhi/.ssh/identity
        debug1: Trying private key: /c/Users/Shulhi/.ssh/id_rsa
        debug1: Next authentication method: password
        [email protected]'s password:

    It seems that there is no request for X11 at all (I'm not sure whether one should appear here). Any pointers on why it doesn't work?
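    One assumption worth testing: the OpenSSH client only negotiates X11 forwarding when, on the client side, DISPLAY is set and an xauth program is available to mint the authentication cookie; the minimal OpenSSH 4.6 build bundled with Git for Windows has neither by default, which would match the absence of any X11 line in the -v output. A cheap experiment (the xauth path is a placeholder; xauth.exe can come from an Xming or Cygwin install):

        set DISPLAY=127.0.0.1:0.0
        ssh -X -v -o XAuthLocation=C:/path/to/xauth.exe [email protected]

    ForwardX11, ForwardX11Trusted and XAuthLocation are standard ssh_config keywords, so the same can go in ~/.ssh/config instead of the command line. If the verbose output then shows an X11 forwarding request, the earlier silence was the client skipping the feature, not the server refusing it.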

  • Information about a pen drive with libudev on Linux

    - by Catanzaro
    I'm writing a little app in C that reads the driver information for my pen drive. I plugged it in and typed dmesg:

        [ 7676.243994] scsi 7:0:0:0: Direct-Access Lexar USB Flash Drive 1100 PQ: 0 ANSI: 0 CCS
        [ 7676.248359] sd 7:0:0:0: Attached scsi generic sg2 type 0
        [ 7676.256733] sd 7:0:0:0: [sdb] 7831552 512-byte logical blocks: (4.00 GB/3.73 GiB)
        [ 7676.266559] sd 7:0:0:0: [sdb] Write Protect is off
        [ 7676.266566] sd 7:0:0:0: [sdb] Mode Sense: 43 00 00 00
        [ 7676.266569] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [ 7676.285373] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [ 7676.285383] sdb: sdb1
        [ 7676.298661] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [ 7676.298667] sd 7:0:0:0: [sdb] Attached SCSI removable disk

    With "udevadm info -q all -n /dev/sdb":

        P: /devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1/1-1:1.0/host7/target7:0:0/7:0:0:0/block/sdb
        N: sdb
        W: 36
        S: block/8:16
        S: disk/by-id/usb-Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0
        S: disk/by-path/pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0
        E: UDEV_LOG=3
        E: DEVPATH=/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1/1-1:1.0/host7/target7:0:0/7:0:0:0/block/sdb
        E: MAJOR=8
        E: MINOR=16
        E: DEVNAME=/dev/sdb
        E: DEVTYPE=disk
        E: SUBSYSTEM=block
        E: ID_VENDOR=Lexar
        E: ID_VENDOR_ENC=Lexar\x20\x20\x20
        E: ID_VENDOR_ID=05dc
        E: ID_MODEL=USB_Flash_Drive
        E: ID_MODEL_ENC=USB\x20Flash\x20Drive\x20
        E: ID_MODEL_ID=a813
        E: ID_REVISION=1100
        E: ID_SERIAL=Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0
        E: ID_SERIAL_SHORT=AA5OCYQII8PSQXBB
        E: ID_TYPE=disk
        E: ID_INSTANCE=0:0
        E: ID_BUS=usb
        E: ID_USB_INTERFACES=:080650:
        E: ID_USB_INTERFACE_NUM=00
        E: ID_USB_DRIVER=usb-storage
        E: ID_PATH=pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0
        E: ID_PART_TABLE_TYPE=dos
        E: UDISKS_PRESENTATION_NOPOLICY=0
        E: UDISKS_PARTITION_TABLE=1
        E: UDISKS_PARTITION_TABLE_SCHEME=mbr
        E: UDISKS_PARTITION_TABLE_COUNT=1
        E: DEVLINKS=/dev/block/8:16 /dev/disk/by-id/usb-Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0 /dev/disk/by-path/pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0

    And my software is:

        #include <stdio.h>
        #include <libudev.h>
        #include <stdlib.h>
        #include <locale.h>
        #include <unistd.h>

        int main(void)
        {
            struct udev_enumerate *enumerate;
            struct udev_list_entry *devices, *dev_list_entry;
            struct udev_device *dev;

            /* Create the udev object */
            struct udev *udev = udev_new();
            if (!udev) {
                printf("Can't create udev\n");
                exit(0);
            }

            enumerate = udev_enumerate_new(udev);
            udev_enumerate_add_match_subsystem(enumerate, "scsi_generic");
            udev_enumerate_scan_devices(enumerate);
            devices = udev_enumerate_get_list_entry(enumerate);

            udev_list_entry_foreach(dev_list_entry, devices) {
                const char *path;

                /* Get the filename of the /sys entry for the device and create
                   a udev_device object (dev) representing it */
                path = udev_list_entry_get_name(dev_list_entry);
                dev = udev_device_new_from_syspath(udev, path);

                /* usb_device_get_devnode() returns the path to the device node
                   itself in /dev. */
                printf("Device Node Path: %s\n", udev_device_get_devnode(dev));

                /* The device pointed to by dev contains information about the
                   hidraw device. In order to get information about the USB
                   device, get the parent device with the subsystem/devtype pair
                   of "usb"/"usb_device". This will be several levels up the
                   tree, but the function will find it. */
                dev = udev_device_get_parent_with_subsystem_devtype(dev, "block", "disk");
                if (!dev) {
                    printf("Errore\n");
                    exit(1);
                }

                /* From here, we can call get_sysattr_value() for each file in
                   the device's /sys entry. The strings passed into these
                   functions (idProduct, idVendor, serial, etc.) correspond
                   directly to the files in the directory which represents the
                   USB device. Note that USB strings are Unicode, UCS2 encoded,
                   but the strings returned from udev_device_get_sysattr_value()
                   are UTF-8 encoded. */
                printf("  VID/PID: %s %s\n",
                       udev_device_get_sysattr_value(dev, "idVendor"),
                       udev_device_get_sysattr_value(dev, "idProduct"));
                printf("  %s\n  %s\n",
                       udev_device_get_sysattr_value(dev, "manufacturer"),
                       udev_device_get_sysattr_value(dev, "product"));
                printf("  serial: %s\n",
                       udev_device_get_sysattr_value(dev, "serial"));

                udev_device_unref(dev);
            }

            /* Free the enumerator object */
            udev_enumerate_unref(enumerate);
            udev_unref(udev);

            return 0;
        }

    The problem is that I obtain this output:

        Device Node Path: /dev/sg0
        Errore

    and I don't get the device information. I think the subsystem and devtype are passed correctly: "block" and "disk". Thanks for the help. Bye
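    For what it's worth, the mismatch seems to be between the code's own comment and the call: the comment says the USB attributes (idVendor, idProduct, manufacturer, product, serial) live on the parent with the subsystem/devtype pair "usb"/"usb_device", but the call asks for "block"/"disk" - and a scsi_generic node such as /dev/sg0 has no block/disk ancestor, so the lookup returns NULL and the code prints Errore. A sketch of the corrected lookup:

        /* climb from the scsi_generic node to the USB device that
           carries idVendor/idProduct/serial */
        struct udev_device *usb_dev =
            udev_device_get_parent_with_subsystem_devtype(dev, "usb", "usb_device");

    Using a separate pointer also sidesteps a second issue: per the libudev documentation, the parent returned by udev_device_get_parent_with_subsystem_devtype() is owned by the child device and must not be unref'd itself, so the original udev_device_unref(dev) after the reassignment is questionable; unref the device created by udev_device_new_from_syspath() instead.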

  • How to move partition with Users to a new SATA controller?

    - by Al Kepp
    Our current situation: X79 chipset; Windows 7 installed on a SATA SSD running on a Marvell controller. C:\Users contains just the Public and computer-name folders; all the other users are created in E:\Users. That partition is on two HDDs running on an Intel RAID controller. (The motherboard has two SATA/RAID onboard controllers.) The goal: without reinstallation, I want to move the SSD to a standard SATA controller. Then I want to get rid of RAID1, move one HDD to the same internal Intel SATA controller, and move the current E:\Users to that new place. I want it to stay as E:\Users, but I need to set the HDD up again so it works in SATA mode without RAID. So I face several problems. I am sure all are solvable with free software utilities, but I don't know exactly how to do it. I can see these particular problems: (1) I have got all users in E:\Users. When I turn off that E: disk, I won't be able to log in to Windows; I need at least one admin account placed back in C:\Users. (2) The current C: runs on a standard 120 GB SSD, but it is connected to a Marvell SATA/RAID controller. I am afraid Windows won't let me put it on the Intel SATA controller due to a hardware/licence check, and I won't be able to use a standard W7 recovery disk either, because there probably isn't a Marvell SATA/RAID driver on it. I haven't tried anything yet, because I am afraid I could end up with a computer that doesn't work at all. (I want to move it to a standard ICH10 Intel SATA controller so we have no problems with it in the future; I think it is not very safe when any nonstandard hardware is used to boot the computer.) (3) I need to somehow back up the current E: disk and restore it to the new E:. I hope this will be the easy part (as long as the admin account resides on C:). The E: RAID array is very large, but it is almost empty (less than 100 GB of data), so I can shrink the partition so it easily fits on a single SATA HDD.
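    Two of those steps can be sketched concretely, under stated assumptions. For problem (1), a local admin whose profile lives on C: can be created up front (the account name is a placeholder; log in as it once while E: is still online, so its profile gets created under C:\Users):

        net user localadmin * /add
        net localgroup Administrators localadmin /add

    For problem (3), robocopy preserves ACLs and skips the junction points that break naive profile copies; the idea is to copy to the re-set-up disk while it temporarily holds another letter, then swap letters so the new disk becomes E: (drive letters here are assumptions):

        robocopy E:\Users F:\Users /MIR /COPYALL /XJ /R:1 /W:1

    Per-user profile paths are recorded under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList; as long as the final drive letter is still E:, those entries should not need editing.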

  • Why isn't passwordless ssh working?

    - by Nelson
    I have two Ubuntu Server machines sitting at home. One is 192.168.1.15 (we'll call this 15), and the other is 192.168.1.25 (we'll call this 25). For some reason, when I set up passwordless login from 15 to 25, it works like a champ. When I repeat the steps on 25, so that 25 can log in without a password on 15, no dice. I have checked both sshd_config files. Both have:

        RSAAuthentication yes
        PubkeyAuthentication yes

    I have checked permissions on both servers. On 25:

        drwx------ 2 bion2 bion2 4096 Dec 4 12:51 .ssh
        -rw------- 1 bion2 bion2 398 Dec 4 13:10 authorized_keys

    On 15:

        drwx------ 2 shimdidly shimdidly 4096 Dec 4 19:15 .ssh
        -rw------- 1 shimdidly shimdidly 1018 Dec 4 18:54 authorized_keys

    I just don't understand why things would work one way and not the other. I know it's probably something obvious staring me in the face, but for the life of me I can't figure out what is going on. Here's what ssh -v says when I try to ssh from 25 to 15:

        ssh -v -p 51337 192.168.1.15
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to 192.168.1.15 [192.168.1.15] port 51337.
        debug1: Connection established.
        debug1: identity file /home/shimdidly/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file /home/shimdidly/.ssh/id_rsa-cert type -1
        debug1: identity file /home/shimdidly/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: identity file /home/shimdidly/.ssh/id_dsa-cert type -1
        debug1: identity file /home/shimdidly/.ssh/id_ecdsa type -1
        debug1: identity file /home/shimdidly/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA 54:5c:60:80:74:ab:ab:31:36:a1:d3:9b:db:31:2a:ee
        debug1: Host '[192.168.1.15]:51337' is known and matches the ECDSA host key.
        debug1: Found key in /home/shimdidly/.ssh/known_hosts:2
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/shimdidly/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password
        debug1: Offering DSA public key: /home/shimdidly/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /home/shimdidly/.ssh/id_ecdsa
        debug1: Next authentication method: password
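    The trace shows the client offering both keys and the server on 15 declining them, which usually points at something server-side. A few checks worth running on 15, as a sketch (the home directory check matters because sshd's StrictModes silently rejects keys when the home directory is group- or world-writable):

        # on 15, as shimdidly
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        # then watch the server log while retrying from 25
        sudo tail -f /var/log/auth.log

    The typical giveaway in auth.log is a line like "Authentication refused: bad ownership or modes for directory /home/shimdidly". Also worth a glance: that the key pasted into 15's authorized_keys really is one complete, unwrapped line matching 25's id_rsa.pub.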

  • Cannot get Kerberos configured for Reporting Services

    - by Ucodia
    Context: I am trying to configure Kerberos in the domain for double-hop authentication. Here are the machines and their respective roles:

        client01: Windows 7, the client
        dc01: Windows Server 2008 R2, domain controller and DNS
        server01: Windows Server 2008 R2, reporting server (native mode)
        server02: Windows Server 2008 R2, SQL Server database engine

    I want client01 to connect to server01 and configure a data source located on server02 using Integrated Security. As NTLM cannot push credentials that far, I need to set up Kerberos to enable double-hop authentication. The Reporting Services service runs as the Network Service account and is configured with only the RSWindowsNegotiate option for authentication. The issue: I cannot get my client01 credentials passed to server02 when configuring the data source on server01. Therefore I get the error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. So I went to dc01 and delegated full trust for any service to server01, but that did not fix the problem. I should note that I did not configure any SPNs for server01, because Reporting Services runs as Network Service, and from what I read on the Internet, when Reporting Services comes up under Network Service, SPNs are registered automatically. My problem is that even if I wanted to configure SPNs manually, I do not know where they have to be set up: on dc01 or on server01? So I went a bit further and tried to trace this problem. From my understanding of Kerberos, this is what should happen on the network when I try to connect the data source:

        client01 ---- AS_REQ  ---> dc01
        client01 <--- AS_REP  ---- dc01
        client01 ---- TGS_REQ ---> dc01
        client01 <--- TGS_REP ---- dc01
        client01 ---- AP_REQ  ---> server01
        client01 <--- AP_REP  ---- server01
        server01 ---- TGS_REQ ---> dc01
        server01 <--- TGS_REP ---- dc01
        server01 ---- AP_REQ  ---> server02
        server01 <--- AP_REP  ---- server02

    So I captured my local network traffic with Wireshark, but whenever I try to configure my data source from client01 on server01 to pass my credentials to server02, my client never sends an AS_REQ or TGS_REQ to the KDC on dc01. Questions: Can anyone tell me whether I should configure the SPNs, and on which machine they have to be configured? Also, why does client01 never request a TGT or a TGS from my KDC? Do you think there is something going wrong with the DC role of dc01?
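    For reference: SPNs are attributes of accounts in Active Directory, not a per-server setting, so they can be registered from any domain-joined machine with the setspn tool by someone with sufficient rights. A sketch matching the setup above (the DNS suffix and port are assumptions): with Reporting Services running as Network Service, the HTTP SPNs belong on server01's computer account, and the MSSQLSvc SPNs on whatever account runs the database engine on server02 - the machine account if that is Network Service or Local System, otherwise the service's domain account:

        setspn -A HTTP/server01 server01
        setspn -A HTTP/server01.yourdomain.com server01
        setspn -A MSSQLSvc/server02.yourdomain.com:1433 server02

    Delegation (preferably constrained to the MSSQLSvc SPN only) is then configured on server01's computer account via dc01. As for the missing AS_REQ/TGS_REQ: the client only talks to the KDC when Negotiate actually selects Kerberos; browsing the report server by IP address, or from outside the Local Intranet zone, makes Internet Explorer fall back to NTLM on the first hop, after which the second hop can only ever be ANONYMOUS LOGON. Checking how the report URL is addressed is a cheap first test.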

  • Can I read an Outlook (2003/2007) PST file in C#?

    - by Andy May
    Is it possible to read a .PST file using C#? I would like to do this as a standalone application, not as an Outlook add-in (if that is possible). I have seen other SO questions similar to this mention MailNavigator, but I am looking to do this programmatically in C#. I have looked at the Microsoft.Office.Interop.Outlook namespace, but that appears to be just for Outlook add-ins. LibPST appears to be able to read PST files, but this is in C (sorry Joel, I didn't learn C before graduating). Any help would be greatly appreciated, thanks! EDIT: Thank you all for the responses! I accepted Matthew Ruston's response as the answer because it ultimately led me to the code I was looking for. Here is a simple example of what I got to work (you will need to add a reference to Microsoft.Office.Interop.Outlook):

        using System;
        using System.Collections.Generic;
        using Microsoft.Office.Interop.Outlook;

        namespace PSTReader {
            class Program {
                static void Main () {
                    try {
                        IEnumerable<MailItem> mailItems = readPst(@"C:\temp\PST\Test.pst", "Test PST");
                        foreach (MailItem mailItem in mailItems) {
                            Console.WriteLine(mailItem.SenderName + " - " + mailItem.Subject);
                        }
                    } catch (System.Exception ex) {
                        Console.WriteLine(ex.Message);
                    }
                    Console.ReadLine();
                }

                private static IEnumerable<MailItem> readPst(string pstFilePath, string pstName) {
                    List<MailItem> mailItems = new List<MailItem>();
                    Application app = new Application();
                    NameSpace outlookNs = app.GetNamespace("MAPI");
                    // Add PST file (Outlook Data File) to Default Profile
                    outlookNs.AddStore(pstFilePath);
                    MAPIFolder rootFolder = outlookNs.Stores[pstName].GetRootFolder();
                    // Traverse through all folders in the PST file
                    // TODO: This is not recursive, refactor
                    Folders subFolders = rootFolder.Folders;
                    foreach (Folder folder in subFolders) {
                        Items items = folder.Items;
                        foreach (object item in items) {
                            if (item is MailItem) {
                                MailItem mailItem = item as MailItem;
                                mailItems.Add(mailItem);
                            }
                        }
                    }
                    // Remove PST file from Default Profile
                    outlookNs.RemoveStore(rootFolder);
                    return mailItems;
                }
            }
        }

    Note: This code assumes that Outlook is installed and already configured for the current user. It uses the Default Profile (you can edit the default profile by going to Mail in the Control Panel). One major improvement on this code would be to create a temporary profile to use instead of the Default, then destroy it once completed.
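    Regarding the TODO above: a hedged sketch of how the folder walk could be made recursive, so mail nested more than one level deep isn't missed (same interop types; the helper name is made up):

        private static void collectMailItems(Folder folder, List<MailItem> mailItems) {
            // Gather mail in this folder
            foreach (object item in folder.Items) {
                if (item is MailItem) {
                    mailItems.Add((MailItem)item);
                }
            }
            // Recurse into subfolders
            foreach (Folder sub in folder.Folders) {
                collectMailItems(sub, mailItems);
            }
        }

    Calling collectMailItems(folder, mailItems) in place of the inner loop of readPst covers arbitrarily nested folder trees.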

  • Stop lazy loading or skip loading a property in NHibernate? Proxy cannot be serialized through WCF

    - by HelloSam
    Consider a parent/child relationship between classes, with a mapping. I am using NHibernate to read the objects from the database, and intend to use WCF to send them across the wire.

    Goal: when reading the parent object, I want to decide selectively, at different execution paths, when to load the child object, because I don't want to read more than I need. Those partially loaded objects must be sendable through WCF. When I say I don't load a property, neither side will access it.

    Problem: when such a partially loaded object is sent through WCF, the property (marked [DataContract]) cannot be serialized, because it holds a lazy-load proxy instead of the real known type.

    What I want to achieve, or solutions I can think of: lazy=false or lazy=true doesn't work - the former eagerly fetches all the relationships, the latter creates a proxy. But I want nothing instead; a null would be best. I don't need lazy loading. I hope to get a null for those references that I don't want to fetch - a null, not just a proxy. That would make WCF happy and waste less time constructing lazy-load proxies. Could I have a null proxy factory, so to speak? -OR- Make WCF ignore properties that are proxies instead of real objects. I tried the IDataContractSurrogate solution, but only the parent is passed to GetObjectToSerialize; I never observed a proxy being passed through GetObjectToSerialize, leaving no chance to un-proxy it.

    Edit: After reading the comments and more surfing on the Internet... it seems to me that DTOs would shift a major part of the computation to the server side. But for the project I am working on, 50% of the time the client is "smarter" than the server, and the server is more like a data store with validation and verification. Though I agree the server is not exactly dumb - I already have to decide when to fetch the extra references, and DTOs would make this very explicit. Maybe I should just take the pain. I didn't know about http://automapper.codeplex.com/ before; it motivates me a little more to take the pain. On the other hand, I found http://trentacular.com/2009/08/how-to-use-nhibernate-lazy-initializing-proxies-with-web-services-or-wcf/, which seems to work with IDataContractSurrogate.GetObjectToSerialize.
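    On the "null instead of a proxy" idea: one commonly used workaround (a sketch, not necessarily the best fit here) is to null out uninitialized references just before handing the entity to WCF, using NHibernate's public initialization check:

        // before returning the entity to the WCF layer
        if (!NHibernateUtil.IsInitialized(parent.Child)) {
            parent.Child = null;   // uninitialized proxy -> null, so the serializer sees a known type
        }

    NHibernateUtil.IsInitialized is part of the public NHibernate API; Child is a placeholder property name. This keeps the mapping lazy while guaranteeing the DataContractSerializer never meets a proxy, at the cost of a touch-up pass over the object graph on the way out.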

  • FILESTREAM/FILETABLE Clarifications for Implementation

    - by user1209734
    Recently our team was looking at FILESTREAM to expand the capabilities of our proprietary application. The main purpose of this app is managing the various PDFs, images and documents for all of the parts we manufacture. Our ASP application uses a few third-party tools to allow viewing of these files. We currently have 980 GB of data on the file server. We have around 200 GB of binary data in SQL Server that we would like to extract, since it is not performing well; hence FILESTREAM seems to be a good compromise for the two major data storage/access issues. A few things are not exactly clear to us (see also the sketch after this list):

    1. Can FILESTREAM store its data on a drive that is not locally attached? We already have a file server with RAID 10 (1.5 TB drives). This server stores all of the documents right now; would we have to move these drives to the SQL Server for FILESTREAM? That would be a tough bullet to bite, since the server also doubles as the application server (two VMs on one physical server).

    2. FILETABLE stores the common metadata about the files, but where is the full-text part of it stored, to allow searching of files like doc/docx? Is this separate? Are you able to freely add criteria to search by? If so, any links to clarify would be appreciated.

    3. Can a FILETABLE be referenced in another table with a foreign key?

    Thank you in advance. EDIT: For those with these questions, this web video covered everything and more in terms of explaining FILESTREAM from 2008 to 2012 and the caveats to consider (I would seriously rep him if I could): http://channel9.msdn.com/Events/TechDays/Techdays-2012-the-Netherlands/2270 In conclusion, we will not be using FILESTREAM, as it would be way too huge an upsurge to accommodate for the investment. EDIT 2: Update to #1 - After carefully assessing FileTable in addition to FILESTREAM, we got a winning combination. We did have to move the files over to the new server (wasn't too painful since they were on the same VM). It honestly took more time to write an extraction tool to dump the binary data within SQL to the file system. Update to #2 - This was separate, but again Bob had an excellent webinar explaining this: http://channel9.msdn.com/Events/TechEd/Europe/2012/DBI411 Update to #3 - Using TFT inheritance we recycled the Docs table we had (minus the huge binary blobs), which required very few changes in our legacy apps. This was a huge win for the developer team.
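    On question #3, a sketch of what a FileTable plus a foreign key against it can look like (SQL Server 2012 syntax; object names are placeholders, and it assumes FILESTREAM is enabled and the database has a FILESTREAM filegroup). FileTables expose a stream_id uniqueidentifier with a unique constraint, which is the stable per-file key to reference:

        CREATE TABLE dbo.Documents AS FILETABLE
        WITH (FILETABLE_DIRECTORY = 'Documents');

        -- reference files from an ordinary relational table via stream_id
        CREATE TABLE dbo.PartDocuments (
            PartId     int NOT NULL,
            DocumentId uniqueidentifier NOT NULL
                REFERENCES dbo.Documents (stream_id)
        );

    path_locator also identifies a row, but it changes when a file is moved between directories, so stream_id is the safer target for foreign keys.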

  • Qt vs .NET - plz no n00bs who don't know wtf they're talking about [closed]

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of these people don't know WTF they're talking about. Trying to get a real comparison chart going before we embark on a major fucking project. And yes I'm drunk, and yes I use cocaine.

    Event Handling: In Qt's event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){}, blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down, IMO. Basically, the footprints make more sense and you can visualize the project more easily without the clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++.

    Database Ease of Doing Crap: Also, what about datasets and database manipulation? I think .NET wins here, but I'm not sure.

    Threading/Concurrency: How do you guys think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things.

    Memory Usage: Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about; please school me on the subject.
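    For readers who haven't met the Qt side, a minimal sketch of the signal/slot pattern described above, in Qt 4-era C++ (class and member names are made up to match the emit valueChanged(int, bool) example):

        #include <QObject>

        class Worker : public QObject {
            Q_OBJECT
        signals:
            void valueChanged(int percent, bool ok);   // declared, never defined: moc generates it
        public:
            void step() { emit valueChanged(42, true); }
        };

        class Catcher : public QObject {
            Q_OBJECT
        public slots:
            void onValueChanged(int p, bool ok) { /* react here */ }
        };

        // wiring (becomes an automatic queued connection when the objects live in
        // different threads - the cross-thread convenience alluded to above):
        // QObject::connect(&worker, SIGNAL(valueChanged(int,bool)),
        //                  &catcher, SLOT(onValueChanged(int,bool)));

    blockSignals() and disconnect() on QObject are the blocking/disconnecting calls the post mentions.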

  • How to assess a job offer with stock options? [closed]

    - by seas
    Cannot help expressing my curiosity about this issue. First of all, Russia, the country I live in, is, as far as I understand, virgin land on this matter: employee motivation schemes are poorly developed here, and options don't exist at all. So I am absolutely not aware of this. I roughly understand the mechanism of options. And, as far as I understand, the major blank spot in the whole story is how much a single share will cost. How much will I get from all those N x 1000 options? I see two ways one can get money out of the business: 1. The business goes IPO!!! 2. The business is sold as a whole to another owner. A third way is rather questionable as a way of getting money: 3. The business goes without an IPO "forever" (a generation would rather die before it IPOs). I am also interested in some explanations about situation #3. Situation #1 is clear - the market decides everything; you either wait for a stock price you're satisfied with or sell everything now. But the topic is really about #2 - the business being sold to another business. I am considering the following model: I am a well-paid specialist with company X. Somebody with company Y makes me an offer. Y is a startup. They cannot offer me much money and cannot outbid my salary, but they are growing fast and hope to be bought soon. Instead of money, they offer me N x 1000 options. My problem is: how do I assess this offer against my current, stable and well-paid job? Is there an average per-share value when one company is sold to another? Is there an average stock price at which companies prefer to go to IPO? Are there any barriers against diluting the value of the stock before selling the company or an IPO (I mean, hiring too many people with option packages too fast will decrease each package's value)? Are there any good articles with explanations? Is all that somehow written into law?

  • Memory Leak Issue in WebLogic, Sun, Apache and Oracle classes

    - by Amit
    Hi all, please find below a description of our memory leak issues. Statistics show major growth in the perm area (static classes). Flows were run for 8 hours; heap dumps were taken after 2 hours and at the end, and growth in the perm area was identified. Statistics from our last run show 240 MB of growth in 6 hours (40 MB of growth every hour); the 2 GB heap can hold out 3 to 4 days, i.e. the heap will be full in 3 to 4 days. The heap dumps show growth in the JMS connection/session area and in the classes listed below.

    Apache:
    org.apache.xml.dtm.DTM[] org.apache.xml.dtm.ref.ExpandedNameTable$ExtendedType org.jdom.AttributeList org.jdom.Content[] org.jdom.ContentList org.jdom.Element

    Sun:
    * ConstantPoolCacheKlass * ConstantPoolKlass * ConstMethodKlass * MethodDataKlass * MethodKlass * SymbolKlass byte[] char[] com.sun.org.apache.xml.internal.dtm.DTM[] com.sun.org.apache.xml.internal.dtm.ref.ExtendedType java.beans.PropertyDescriptor java.lang.Class java.lang.Long java.lang.ref.WeakReference java.lang.ref.SoftReference java.lang.String java.text.Format[] java.util.concurrent.ConcurrentHashMap$Segment java.util.LinkedList$Entry

    WebLogic:
    com.bea.console.cvo.ConsoleValueObject$PropertyInfo com.bea.jsptools.tree.TreeNode com.bea.netuix.servlets.controls.content.StrutsContent com.bea.netuix.servlets.controls.layout.FlowLayout com.bea.netuix.servlets.controls.layout.GridLayout com.bea.netuix.servlets.controls.layout.Placeholder com.bea.netuix.servlets.controls.page.Book com.bea.netuix.servlets.controls.window.Window[] com.bea.netuix.servlets.controls.window.WindowMode javax.management.modelmbean.ModelMBeanAttributeInfo weblogic.apache.xerces.parsers.SecurityConfiguration weblogic.apache.xerces.util.AugmentationsImpl weblogic.apache.xerces.util.AugmentationsImpl$SmallContainer weblogic.apache.xerces.util.SymbolTable$Entry weblogic.apache.xerces.util.XMLAttributesImpl$Attribute weblogic.apache.xerces.xni.QName weblogic.apache.xerces.xni.QName[] weblogic.ejb.container.cache.CacheKey weblogic.ejb20.manager.SimpleKey weblogic.jdbc.common.internal.ConnectionEnv weblogic.jdbc.common.internal.StatementCacheKey weblogic.jms.common.Item weblogic.jms.common.JMSID weblogic.jms.frontend.FEConnection weblogic.logging.MessageLogger$1 weblogic.logging.WLLogRecord weblogic.rjvm.BubblingAbbrever$BubblingAbbreverEntry weblogic.rjvm.ClassTableEntry weblogic.rjvm.JVMID weblogic.rmi.cluster.ClusterableRemoteRef weblogic.rmi.internal.CollocatedRemoteRef weblogic.rmi.internal.PhantomRef weblogic.rmi.spi.ServiceContext[] weblogic.security.acl.internal.AuthenticatedSubject weblogic.security.acl.internal.AuthenticatedSubject$SealableSet weblogic.servlet.internal.ServletRuntimeMBeanImpl weblogic.transaction.internal.XidImpl weblogic.utils.collections.ConcurrentHashMap$Entry

    Oracle XA Transaction:
    oracle.jdbc.driver.Binder[] oracle.jdbc.driver.OracleDatabaseMetaData oracle.jdbc.driver.T4C7Ocommoncall oracle.jdbc.driver.T4C7Oversion oracle.jdbc.driver.T4C8Oall oracle.jdbc.driver.T4C8Oclose oracle.jdbc.driver.T4C8TTIBfile oracle.jdbc.driver.T4C8TTIBlob oracle.jdbc.driver.T4C8TTIClob oracle.jdbc.driver.T4C8TTIdty oracle.jdbc.driver.T4C8TTILobd oracle.jdbc.driver.T4C8TTIpro oracle.jdbc.driver.T4C8TTIrxh oracle.jdbc.driver.T4C8TTIuds oracle.jdbc.driver.T4CCallableStatement oracle.jdbc.driver.T4CClobAccessor oracle.jdbc.driver.T4CConnection oracle.jdbc.driver.T4CMAREngine oracle.jdbc.driver.T4CNumberAccessor oracle.jdbc.driver.T4CPreparedStatement oracle.jdbc.driver.T4CTTIdcb oracle.jdbc.driver.T4CTTIk2rpc oracle.jdbc.driver.T4CTTIoac oracle.jdbc.driver.T4CTTIoac[] oracle.jdbc.driver.T4CTTIoauthenticate oracle.jdbc.driver.T4CTTIokeyval oracle.jdbc.driver.T4CTTIoscid oracle.jdbc.driver.T4CTTIoses oracle.jdbc.driver.T4CTTIOtxen oracle.jdbc.driver.T4CTTIOtxse oracle.jdbc.driver.T4CTTIsto oracle.jdbc.driver.T4CXAConnection oracle.jdbc.driver.T4CXAResource oracle.jdbc.oracore.OracleTypeADT[] oracle.jdbc.xa.OracleXAResource$XidListEntry oracle.net.ano.Ano oracle.net.ns.ClientProfile oracle.net.ns.ClientProfile oracle.net.ns.NetInputStream oracle.net.ns.NetOutputStream oracle.net.ns.SessionAtts oracle.net.nt.ConnOption oracle.net.nt.ConnStrategy oracle.net.resolver.AddrResolution oracle.sql.CharacterSet1Byte

    We are using BEA WebLogic 9.2 MP3, JDK 1.5.12, Oracle version 10.2.0.4 (for Oracle we found one patch which needs to be applied to avoid the XA transaction memory leaks). But we are stuck trying to resolve the Sun, BEA WebLogic and Apache leaks. Please suggest... Regards, Amit J.
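    Since the growth is in the perm area (class metadata) rather than the ordinary object heap, classloader-oriented tooling tends to be more telling than plain object histograms. A sketch using the stock JDK 5 tools against the WebLogic process (the PID is a placeholder):

        jstat -gcpermcapacity <pid> 10s    # watch PGC/PGCMX: confirm perm gen really is what grows
        jmap -permstat <pid>               # per-classloader statistics; 'dead' loaders that never unload are the classic leak

    In the -permstat output, a steadily rising number of loader instances for the same application (often one per redeploy, or from libraries that generate classes via reflection, like the xerces/JDOM/JDBC code in the lists above) points at whichever framework is pinning classes in the perm gen.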

  • SharePoint search is not working

    - by Nikkho
    Hi all, I have an issue with SharePoint search.

    The situation: SharePoint is installed on a farm with 2 servers. A new app pool was created, and that app pool uses a domain account called moss_service. moss_service is in the Administrators group on both servers. moss_service is also set as db_creator on the content database. When I checked initially, the search's default content access account was using a different account; I changed that to use the moss_service account. I didn't do an IIS reset because this is a production server and they don't want frequent IIS resets. Strangely, checking services.msc, the "Office SharePoint Server Search" service was still using an old account (and apparently it's only running on one server; it is not running on the other). I then changed that to domain\moss_service with the password, and then reran the crawl.

    How I diagnose the issue: basically, every time I change something I restart the crawl and then check the Event Viewer. Multiple things come out, but the following are the major ones:

        The start address cannot be crawled. The password for the content access account cannot be decrypted because it was stored with different credentials. Re-type the password for the account used to crawl this content. (0x80042406)

        Performance monitoring cannot be initialized for the gatherer object, because the counters are not loaded or the shared memory object cannot be opened. This only affects availability of the perfmon counters. Restart the computer.

        Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (0x80041205)

    Crawl log results: the crawl log is showing this: "The password for the content access account cannot be decrypted because it was stored with different credentials. Re-type the password for the account used to crawl this content." I tried changing it again in services.msc and then reran the full crawl, but it still doesn't work. I have tried entering the account in the following ways: [email protected] and domain\moss_service.

    My questions are: How do I fix this? Is this the right way to set up search? Does the search account have to be a different domain account? It seems like one fix complicates the other; how do I set this right? Is it worth upgrading to SP2?
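    One hedged suggestion, since the "password cannot be decrypted" error usually means the Search service's stored credentials went stale after the account changes: re-enter the account and password through SharePoint's own administration rather than services.msc (which updates Windows' view of the service but not SharePoint's encrypted copy). On MOSS 2007 the command-line form is roughly:

        stsadm -o osearch -action stop
        stsadm -o osearch -action start -role indexquery -farmserviceaccount DOMAIN\moss_service -farmservicepassword <password>

    followed by re-typing the default content access account under SSP > Search Settings. The osearch operation and its -farmserviceaccount/-farmservicepassword parameters are standard stsadm; the -role value here is an assumption to match a single-index-server topology.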

  • CSS layout mystery

    - by selfthinker
    Among the many two (or three) column layout techniques, I sometimes use the following one:

        <div class="variant1">
          <div class="left1">
            <div class="left2">
              left main content
            </div>
          </div>
          <div class="right1">
            <div class="right2">
              right sidebar
            </div>
          </div>
        </div>

    together with:

        .variant1 .left1 {
            float: left;
            margin-right: -200px;
            width: 100%;
        }
        .variant1 .left1 .left2 {
            margin-right: 200px;
        }
        .variant1 .right1 {
            float: right;
            width: 200px;
        }

    This works in all major browsers. But for some very strange reason, exactly the same technique reversed doesn't work:

        <div class="variant2">
          <div class="left1">
            <div class="left2">
              left main content
            </div>
          </div>
          <div class="right1">
            <div class="right2">
              right sidebar
            </div>
          </div>
        </div>

    with:

        .variant2 .left1 {
            float: left;
            width: 200px;
        }
        .variant2 .right1 {
            float: right;
            margin-left: -200px;
            width: 100%;
        }
        .variant2 .right1 .right2 {
            margin-left: 200px;
        }

    In the second variant, all text in the sidebar cannot be selected and all links cannot be clicked. This is at least true for Firefox and Chrome; in IE7 the links can at least be clicked, and Opera seems completely fine. Does anyone know the reason for this strange behaviour? Is it a browser bug? Please note: I am not looking for a working two-column CSS layout technique; I know there are loads of them. And I don't necessarily need this technique to work. I only want to understand why the second variant behaves like it does. Here is a link to a small test page which should illustrate the problem: http://selfthinker.org/stuff/css_layout_mystery.html

  • C# Monte Carlo Incremental Risk Calculation optimisation, random numbers, parallel execution

    - by m3ntat
    My current task is to optimise a Monte Carlo simulation that calculates capital adequacy figures by region for a set of obligors. It is running about 10x too slow for where it will need to be in production, given the number of daily runs required. Additionally, the granularity of the result figures will need to be improved, down to desk or possibly book level at some stage; the code I've been given is basically a prototype which is used by business units in a semi-production capacity. The application is currently single-threaded, so I'll need to make it multi-threaded. I may look at System.Threading.ThreadPool or the Microsoft Parallel Extensions library, but I'm constrained to .NET 2 on the server at this bank, so I may have to consider this guy's port: http://www.codeproject.com/KB/cs/aforge_parallel.aspx. I am trying my best to get them to upgrade to .NET 3.5 SP1, but it's a major exercise in an organisation of this size and might not be possible within my contract time frames. I've profiled the application using the trial of dotTrace (http://www.jetbrains.com/profiler). What other good profilers exist? Free ones? A lot of the execution time is spent generating uniform random numbers and then translating them to normally distributed random numbers. They are using a C# Mersenne Twister implementation. I am not sure where they got it, or whether it's the best way to go about this (or the best implementation) for generating the uniform random numbers. These are then translated to normally distributed versions for use in the calculation (I haven't delved into the translation code yet). Also, what is your experience with the following? http://quantlib.org, http://www.qlnet.org (C# port of QuantLib), or http://www.boost.org. Any alternatives you know of? I'm a C# developer, so would prefer C#, but a wrapper around C++ shouldn't be a problem, should it? Maybe it would even be faster, leveraging the C++ implementations. I am thinking some of these libraries will have the fastest method to directly generate normally distributed random numbers, without the translation step. Also, they may have some other functions that will be helpful in the subsequent calculations. The computer this runs on is a quad-core Opteron 275 with 8 GB of memory, but Windows Server 2003 Enterprise 32-bit. Should I advise them to upgrade to a 64-bit OS? Any links to articles supporting this decision would really be appreciated. Anyway, any advice and help you may have is really appreciated.
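    On generating normal deviates directly: as a baseline sketch (not from the codebase in question), the polar Box-Muller method turns pairs of uniforms into pairs of standard normals with no tables, and it can sit directly on top of the existing Mersenne Twister; a Ziggurat implementation would typically be faster still:

        using System;

        // Polar Box-Muller: consumes uniforms from any source and yields
        // standard normal deviates two at a time.
        class NormalGenerator {
            private readonly Random uniform;   // stand-in for the existing Mersenne Twister
            private double spare;
            private bool hasSpare;

            public NormalGenerator(Random uniform) { this.uniform = uniform; }

            public double Next() {
                if (hasSpare) { hasSpare = false; return spare; }
                double u, v, s;
                do {
                    u = 2.0 * uniform.NextDouble() - 1.0;
                    v = 2.0 * uniform.NextDouble() - 1.0;
                    s = u * u + v * v;   // accept only points inside the unit circle
                } while (s >= 1.0 || s == 0.0);
                double factor = Math.Sqrt(-2.0 * Math.Log(s) / s);
                spare = v * factor;      // cache the second deviate of the pair
                hasSpare = true;
                return u * factor;
            }
        }

    For the multi-threaded version, give each worker thread its own independently seeded generator rather than locking a shared one; contention on a single RNG tends to erase the parallel speedup.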

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
    Hello all, this is inspired by this question and the comments on one particular answer, through which I learnt that strncpy is not a very safe string-handling function in C, and that it pads zeros until it reaches n - something I was unaware of. Specifically, to quote R..: strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length). And from Jonathan Leffler: Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy(). So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: what C standard library functions are used inappropriately, or in ways that may cause or lead to security problems, code defects, or inefficiencies? In the interests of objectivity, I have a number of criteria for an answer: Please, if you can, cite the design reasons behind the function in question, i.e. its intended purpose. Please highlight the misuse to which the code is currently put. Please state why that misuse may lead to a problem. I know that should be obvious, but it prevents soft answers. Please avoid: debates over naming conventions of functions (except where this unequivocally causes confusion); "I prefer x over y" - preference is OK, we all have them, but I'm interested in actual unexpected side effects and how to guard against them. As this is likely to be considered subjective and has no definite answer, I'm flagging for community wiki straight away. I am also working as per C99.
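    To make the strncpy pitfall concrete, a small self-contained sketch (C99; the buffer size is chosen for illustration):

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            char dst[8];

            /* strncpy: when src doesn't fit, dst is NOT null-terminated */
            strncpy(dst, "hello, world", sizeof dst);
            /* printf("%s", dst) here would read past the end of dst */
            dst[sizeof dst - 1] = '\0';   /* the manual fix */

            /* snprintf always terminates, and reports the untruncated
               length so truncation can actually be detected */
            int n = snprintf(dst, sizeof dst, "%s", "hello, world");
            if (n >= (int)sizeof dst)
                fprintf(stderr, "truncated: needed %d bytes\n", n + 1);

            printf("%s\n", dst);   /* prints "hello, " */
            return 0;
        }

    The cost R.. mentions is also visible here: with a short src, strncpy still writes all sizeof dst bytes (zero-padding the tail), which snprintf does not.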

    Read the article

  • Is there an algorithm for converting quaternion rotations to Euler angle rotations?

    - by Will Baker
    Is there an existing algorithm for converting a quaternion representation of a rotation to an Euler angle representation? The rotation order for the Euler representation is known and can be any of the six permutations (i.e. xyz, xzy, yxz, yzx, zxy, zyx). I've seen algorithms for a fixed rotation order (usually the NASA heading, bank, roll convention) but not for an arbitrary rotation order.

    Furthermore, because there are multiple Euler angle representations of a single orientation, this result is going to be ambiguous. This is acceptable (because the orientation is still valid; it just may not be the one the user is expecting to see), but it would be even better if there were an algorithm which took rotation limits (i.e. the number of degrees of freedom and the limits on each degree of freedom) into account and yielded the 'most sensible' Euler representation given those constraints. I have a feeling this problem (or something similar) may exist in the IK or rigid body dynamics domains.

    Solved: I just realised that it might not be clear that I solved this problem by following Ken Shoemake's algorithms from Graphics Gems. I did answer my own question at the time, but it occurs to me it may not be clear that I did so. See the answer, below, for more detail.

    Just to clarify - I know how to convert from a quaternion to the so-called 'Tait-Bryan' representation - what I was calling the 'NASA' convention. This is a rotation order (assuming the convention that the 'Z' axis is up) of zxy. I need an algorithm for all rotation orders. (A sketch of the fixed-order case follows below.) Possibly the solution, then, is to take the zxy order conversion and derive from it five other conversions for the other rotation orders. I guess I was hoping there was a more 'overarching' solution. In any case, I am surprised that I haven't been able to find existing solutions out there.

    In addition, and this perhaps should be a separate question altogether, any conversion (assuming a known rotation order, of course) is going to select one Euler representation, but there are in fact many. For example, given a rotation order of yxz, the two representations (0,0,180) and (180,180,0) are equivalent (and would yield the same quaternion). Is there a way to constrain the solution using limits on the degrees of freedom, like you do in IK and rigid body dynamics? I.e. in the example above, if there were only one degree of freedom about the Z axis, then the second representation could be disregarded.

    I have tracked down one paper which could be an algorithm in this pdf, but I must confess I find the logic and math a little hard to follow. Surely there are other solutions out there? Is arbitrary rotation order really so rare? Surely every major 3D package that allows skeletal animation together with quaternion interpolation (i.e. Maya, Max, Blender, etc.) must have solved exactly this problem?
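
    For reference, a minimal sketch of the fixed-order case only: a unit quaternion (w, x, y, z) converted to Tait-Bryan angles for one rotation order. The other five orders each need their own variant (or Shoemake's general routine from Graphics Gems IV), which is exactly the gap the question describes. The struct and function names are illustrative.

        /* Unit quaternion -> Tait-Bryan angles (roll about X, pitch about Y,
         * yaw about Z), for that one rotation order only. */
        #include <math.h>

        typedef struct { double w, x, y, z; } quat;

        static void quat_to_euler(quat q, double *roll, double *pitch, double *yaw)
        {
            /* roll: rotation about the X axis */
            *roll = atan2(2.0 * (q.w * q.x + q.y * q.z),
                          1.0 - 2.0 * (q.x * q.x + q.y * q.y));

            /* pitch: rotation about the Y axis; clamp to avoid NaNs
             * from rounding error at the gimbal poles */
            double s = 2.0 * (q.w * q.y - q.z * q.x);
            if (s > 1.0)  s = 1.0;
            if (s < -1.0) s = -1.0;
            *pitch = asin(s);

            /* yaw: rotation about the Z axis */
            *yaw = atan2(2.0 * (q.w * q.z + q.x * q.y),
                         1.0 - 2.0 * (q.y * q.y + q.z * q.z));
        }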

    Read the article

  • JNDI / Classpath problem in glassfish

    - by Michael Borgwardt
    I am in the process of converting a large J2EE app from EJB 2 to EJB 3 (all stateless session beans, using glassfish 2.1.1), and running out of ideas. The first EJB I converted (let's call it Foo) ran without major problems (it was the only one in its ejb-module and I could completely replace the deployment descriptor with annotations) and the app ran fine. But after converting the second one (let's call it Bar, one of several in a different ejb-module) there is a weird combination of problems:

    - The app deploys without errors (nothing in the logs either)
    - There is an error when looking up Bar via JNDI
    - When looking at the JNDI tree in the glassfish admin console, Bar is not present at all.

    Then when I look at the logs, I see this (Foo is the correct name of the EJB's remote interface of the first converted, previously working EJB):

        Caused by: javax.naming.NamingException: ejb ref resolution error for remote business interface Foo [Root exception is java.lang.ClassNotFoundException: Foo]
            at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:425)
            at com.sun.ejb.containers.RemoteBusinessObjectFactory.getObjectInstance(RemoteBusinessObjectFactory.java:74)
            at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304)
            at com.sun.enterprise.naming.SerialContext.lookup(SerialContext.java:414)
            at com.sun.enterprise.naming.SerialContext.list(SerialContext.java:603)
            at javax.naming.InitialContext.list(InitialContext.java:395)
            at com.sun.enterprise.admin.monitor.jndi.JndiMBeanHelper.getJndiEntriesByContextPath(JndiMBeanHelper.java:106)
            at com.sun.enterprise.admin.monitor.jndi.JndiMBeanImpl.getNames(JndiMBeanImpl.java:231)
            ... 68 more
        Caused by: java.lang.ClassNotFoundException: XXX
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1578)
            at com.sun.ejb.EJBUtils.getBusinessIntfClassLoader(EJBUtils.java:679)
            at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:348)
            ... 75 more

    This is followed by more exceptions for all the entries that (like Foo) do appear in the JNDI tree. These look like this:

        Caused by: javax.naming.NotContextException: BarHome cannot be listed
            at com.sun.enterprise.naming.SerialContext.list(SerialContext.java:607)
            at javax.naming.InitialContext.list(InitialContext.java:395)
            at com.sun.enterprise.admin.monitor.jndi.JndiMBeanHelper.getJndiEntriesByContextPath(JndiMBeanHelper.java:106)
            at com.sun.enterprise.admin.monitor.jndi.JndiMBeanImpl.getNames(JndiMBeanImpl.java:231)
            ... 68 more

    However, no exception for Bar; it does not appear in the log at all except one entry during deployment. The other EJBs in the same module do appear, as does Foo. Any ideas what could cause this or how to diagnose it further? The beans are pretty straightforward:

        @Stateless(name = "Foo")
        @RolesAllowed("FOOUSER")
        @TransactionAttribute(TransactionAttributeType.SUPPORTS)
        public class FooImpl extends BaseBean implements Foo {

    I'm also having some problems with the deployment descriptor for Bar (I'd like to eliminate it, but glassfish doesn't seem to like having a bean appear only in sun-ejb-jar.xml, or having some beans in a module declared in the descriptor and others use only annotations), but I can't see how that could cause the ClassNotFoundException on Foo. Is there a way to see the classpath that Glassfish is actually using?
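
    As a first diagnostic step, it may be worth checking what JNDI names the lookups actually use. A hedged sketch, assuming Glassfish v2 conventions (by default the global JNDI name of an EJB 3 remote business interface is the fully qualified interface name, which differs from the short names typical of EJB 2 homes); the package and class names here are illustrative:

        // Illustrative lookup of an EJB 3 remote business interface.
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class FooClient {
            public static void main(String[] args) throws NamingException {
                InitialContext ctx = new InitialContext();
                // default mapping: global JNDI name == remote interface FQN
                Object ref = ctx.lookup("com.example.Foo");
                System.out.println("resolved: " + ref);
            }
        }

    Pinning the name explicitly on the bean, e.g. @Stateless(name = "Foo", mappedName = "Foo"), sidesteps the default naming and makes the JNDI tree easier to compare against the remaining EJB 2 entries.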

    Read the article

  • How to crop or get a smaller size UIImage in iPhone without memory leaks?

    - by rkbang
    Hello all, I am using a navigation controller in which I push a table view controller as follows:

        TableView *Controller = [[TableView alloc] initWithStyle:UITableViewStylePlain];
        [self.navigationController pushViewController:Controller animated:NO];
        [Controller release];

    In this table view I am using the following two methods to display images:

        - (UIImage *)getSmallImage:(UIImage *)img
        {
            CGSize size = img.size;
            CGFloat ratio = 0;
            if (size.width < size.height) {
                ratio = 36 / size.width;
            } else {
                ratio = 36 / size.height;
            }
            CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);

            UIGraphicsBeginImageContext(rect.size);
            [img drawInRect:rect];
            return UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext(); // unreachable: this line never runs, so the context is never ended
        }

        - (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
        {
            // create a context to do our clipping in
            UIGraphicsBeginImageContext(rect.size);
            CGContextRef currentContext = UIGraphicsGetCurrentContext();

            // create a rect with the size we want to crop the image to;
            // the X and Y here are zero so we start at the beginning of our
            // newly created context
            CGFloat X = (imageToCrop.size.width - rect.size.width) / 2;
            CGFloat Y = (imageToCrop.size.height - rect.size.height) / 2;
            CGRect clippedRect = CGRectMake(X, Y, rect.size.width, rect.size.height);
            //CGContextClipToRect(currentContext, clippedRect);

            // create a rect equivalent to the full size of the image,
            // offset by the X and Y we want to start the crop from,
            // in order to cut off anything before them
            CGRect drawRect = CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height);
            CGContextTranslateCTM(currentContext, 0.0, drawRect.size.height);
            CGContextScaleCTM(currentContext, 1.0, -1.0);

            // draw the image to our clipped context using our offset rect
            //CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
            CGImageRef tmp = CGImageCreateWithImageInRect(imageToCrop.CGImage, clippedRect);

            // pull the image from our cropped context
            UIImage *cropped = [UIImage imageWithCGImage:tmp]; //UIGraphicsGetImageFromCurrentImageContext();
            CGImageRelease(tmp);

            // pop the context to get back to the default
            UIGraphicsEndImageContext();

            // Note: this is autoreleased
            return cropped;
        }

    But when I pop the controller, the memory being used is not released. Are there any leaks in the above code used to create and crop images? Also, are there any efficient methods to deal with images on the iPhone? I have a lot of images and am facing major challenges in resolving the memory issues. Thanks in advance.
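
    For comparison, a hedged sketch of a balanced version of the resize method: the key point is that UIGraphicsEndImageContext() must run before the method returns, which the early return in getSmallImage prevents. The method name is illustrative, and the returned image is autoreleased, as with any UIGraphicsGetImageFromCurrentImageContext() result.

        - (UIImage *)smallImageFrom:(UIImage *)img
        {
            CGSize size = img.size;
            CGFloat ratio = (size.width < size.height) ? 36 / size.width
                                                       : 36 / size.height;
            CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);

            UIGraphicsBeginImageContext(rect.size);
            [img drawInRect:rect];
            UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext(); // balanced: the context is ended before returning
            return result;               // autoreleased; retain it if it must outlive the pool
        }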

    Read the article
