Search Results

Search found 2768 results on 111 pages for 'heap dump'.


  • How to automount NTFS usb sticks on Xubuntu 12.10?

    - by netimen
    I'm running Xubuntu 12.10 on a Lenovo T520 laptop. If I plug in a FAT-formatted USB stick, it is mounted automatically, but if I plug in an NTFS-formatted one, I have to mount it manually. How can I make NTFS USB sticks mount automatically when plugged in? My /etc/fstab, in case it helps:

      # /etc/fstab: static file system information.
      #
      # Use 'blkid -o value -s UUID' to print the universally unique identifier
      # for a device; this may be used with UUID= as a more robust way to name
      # devices that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      /dev/sda1 / ext4 errors=remount-ro,user_xattr 0 1
      # swap was on /dev/sda5 during installation
      UUID=cd221c3e-44a8-459e-9dfb-04787f1cd0b6 none swap sw
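
    If it is always the same stick, one hedged workaround is a static fstab entry keyed on its UUID; the UUID and mount point below are placeholders, ntfs-3g is the driver Ubuntu normally uses for NTFS, and nofail keeps boot from stalling when the stick isn't plugged in.

      sudo apt-get install ntfs-3g   # usually already present on 12.10
      sudo blkid                     # note the stick's UUID
      # /etc/fstab entry (placeholder UUID and mount point):
      UUID=0123456789ABCDEF /media/usbntfs ntfs-3g auto,nofail,uid=1000,gid=1000,umask=022 0 0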

    Read the article

  • frequent abnormal shutdowns/system crashes

    - by user110353
    It's been almost 5 days since I installed Ubuntu, and this is about the sixth time my laptop has crashed entirely and shut down abnormally. It heats up, and I have to wait some 20-odd minutes before I can turn it on again. A message appears that my PC crashed due to overheating, which may damage my hard disk. The crashes happen when I try to open some application that freezes my PC, not even giving me enough time to go to System Monitor and end the process. Sometimes the culprit application is Everpad, sometimes it's TeamViewer, sometimes it's something else. This is very serious. The last crash occurred at 09:14:40. Kindly click here to view the system log. I want to stick with Ubuntu and the same laptop, as I had serious issues with Windows and nearly went out to dump my laptop and buy a more powerful system. Below are my hardware/OS specs. Kindly advise on how to resolve this issue.

      Ubuntu 12.10
      Kernel 3.5.0-18-generic
      GNOME 3.6.0
      Memory: 2.0 GB
      Processor: Genuine Intel CPU [email protected] x 2
      Available disk space: 63.7 GB

    Thanks in advance
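
    A first diagnostic step that usually helps with overheating is to watch temperatures and fan readings while the problem applications are running; the package below is the stock Ubuntu sensors tool.

      sudo apt-get install lm-sensors
      sudo sensors-detect   # accept the suggested modules, then reboot or load them
      sensors               # re-run while Everpad/TeamViewer are open and note the readings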

    Read the article

  • How to reset the language of the package descriptions

    - by xubuntix
    I had German as my main language about a year ago. Later I changed it to English. Most parts of the system accepted the change. The notable exceptions are the package descriptions, which remain in German for some packages. You can see in the image (apt-cache and software-center) that while some descriptions are in English, some have remained in German. So the question is: how do I reset this? I guess that there is a description cache somewhere that needs to be told to update all descriptions? EDIT: As asked, the output of some language-related commands:

      $ cat /etc/default/locale
      LANG="en_US.UTF-8"
      $ apt-config dump | grep Lang
      Acquire::Languages "";
      Acquire::Languages:: "de_DE";
      Acquire::Languages:: "de";
      Acquire::Languages:: "en";
      Acquire::Languages:: "none";
      $ locale
      LANG=de_DE.UTF-8
      LANGUAGE=en
      LC_CTYPE="de_DE.UTF-8"
      LC_NUMERIC="de_DE.UTF-8"
      LC_TIME="de_DE.UTF-8"
      LC_COLLATE="de_DE.UTF-8"
      LC_MONETARY="de_DE.UTF-8"
      LC_MESSAGES="de_DE.UTF-8"
      LC_PAPER="de_DE.UTF-8"
      LC_NAME="de_DE.UTF-8"
      LC_ADDRESS="de_DE.UTF-8"
      LC_TELEPHONE="de_DE.UTF-8"
      LC_MEASUREMENT="de_DE.UTF-8"
      LC_IDENTIFICATION="de_DE.UTF-8"
      LC_ALL=

    As a note: I'm not sure what each entry means, but some of the de_DE.UTF-8 values are probably OK, since I do want paper sizes, monetary, time, etc. in standard German formats.
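
    The cached descriptions live in the apt package lists, so one commonly suggested fix (a sketch; paths are the stock Ubuntu ones) is to drop the German translation lists and refetch them, after making sure nothing under /etc/apt/apt.conf.d still pins Acquire::Languages to "de":

      grep -r Languages /etc/apt/apt.conf.d/     # edit out any entry still listing "de"/"de_DE"
      sudo rm /var/lib/apt/lists/*Translation-de*
      sudo apt-get update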

    Read the article

  • The Real Value Of Certification

    - by Brandye Barrington
    I read a quote recently by Rich Hein of CIO.com: "Certifications are, like most things in life: The more you put into them, the more you will get out." This is what we tell candidates all the time. The real value in obtaining a certification is the time spent preparing for the exam. All the hours spent reading books, practicing in hands-on environments, asking questions and searching for answers are valuable. They are valuable preparation for the exam, but also valuable preparation for your future job role and for your career. If your goal is just to pass an exam, you've missed a very important part of the value of certification.

    We receive so many questions through different forms of social media on whether or not certification will help candidates get jobs or get better jobs. Surveys conducted by us and by independent entities all point to the job and salary benefits of certification. However, a key part of that equation is whether a candidate can actually perform successfully in a job role. If the preparation time is used to practice, learn and master new skills rather than to memorize a brain dump, the candidate will probably perform successfully in their job role, and job opportunities and higher salary will likely follow. Candidates who do not show that initiative will not likely reap the full benefits of certification.

    Keep this in mind as you approach your next certification exam. You are preparing for a career, not an exam. This may help you to be more appreciative of the long hours spent studying!

    Read the article

  • What credentials should I use to access a Windows share?

    - by JMCF125
    Hi, I have installed Samba and CIFS and all that, and followed a bunch of tutorials, but I still can't access a share on the separate Windows 7 machine. Before, I could access a share on Ubuntu from Windows, but now I can't for whatever reason; the error from the attempt to mount the Windows share is the same: 13, asking for credentials (the computer with Windows is off now, but I can add the exact error message later). In /etc/fstab I have:

      # ... (help info) ...
      # <file system> <mount point> <type> <options> <dump> <pass>
      # ... (mount points that don't matter for the question) ...
      //192.168.1.2/C\:/Users/Public/Documents /srv/Z\:/ cifs user=guest,password=,uid=1000,iocharset=utf8 0 0

    I also tried options such as username=guest,uid=1000,iocharset=utf8 and guest,uid=1000,iocharset=utf8, which, of course, don't work. What user am I supposed to use? (user=user; username=user; my credentials on the Windows and Ubuntu machines do not work, at least with the syntax I tried, similar to this.) Even if this worked, it's not actually what I want. I want to set up authentication for anyone trying to access the drive (it's currently 777, for the Linux share as well) and put a limit/quota on the share's use (as I see it, Z: on Windows allows the entire C: drive to be filled). Thank you in advance. I'd be glad if you suggested a way to do this even without the last paragraph.
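
    One thing worth checking first: Windows exposes shared folders by share name, not by a drive-letter path, so a source of //192.168.1.2/C\:/Users/... is unlikely to match anything on the Windows side (C: itself is only reachable as the administrative C$ share, which requires an administrator login). A minimal sketch, assuming the folder is shared on Windows under the name "Public" and guest access is enabled there; the share name, mount point and uid are illustrative:

      # /etc/fstab (assumed share name and mount point)
      //192.168.1.2/Public /srv/winshare cifs guest,uid=1000,iocharset=utf8 0 0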

    Read the article

  • How to mount drive in /media/userName/ like nautilus do using udisks

    - by Bsienn
    As of my current installation of Ubuntu 13.10 (Unity), when I click on a drive in Nautilus it gets mounted in /media/username/mountedDrive. I read that Nautilus uses udisks to do that. Basically, I want to auto-mount my drive using udisks at startup using this method. The problem is that it mounts the drive in /media/mountedDrive, but I want it the way Nautilus does it, in /media/username/mountedDrive. I want the NTFS Data drive to be auto-mounted at /media/bsienn/.

      bsienn@bsienn-desktop:~$ blkid
      /dev/sda1: LABEL="System Reserved" UUID="8230744030743D6B" TYPE="ntfs"
      /dev/sda2: LABEL="Windows 7" UUID="60100EA5100E81F0" TYPE="ntfs"
      /dev/sda3: LABEL="Data" UUID="882C04092C03F14C" TYPE="ntfs"
      /dev/sda5: UUID="8768800f-59e1-41a2-9092-c0a8cb60dabf" TYPE="swap"
      /dev/sda6: LABEL="Ubuntu Drive" UUID="13ea474a-fb27-4c91-bae7-c45690f88954" TYPE="ext4"
      /dev/sda7: UUID="69c22e73-9f64-4b48-b854-7b121642cd5d" TYPE="ext4"

      bsienn@bsienn-desktop:~$ sudo fdisk -l
      Disk /dev/sda: 160.0 GB, 160000000000 bytes
      255 heads, 63 sectors/track, 19452 cylinders, total 312500000 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x8d528d52
      Device Boot      Start         End     Blocks  Id  System
      /dev/sda1   *     2048       206847     102400   7  HPFS/NTFS/exFAT
      /dev/sda2       206848    117730069   58761611   7  HPFS/NTFS/exFAT
      /dev/sda3    158690072    312494116   76902022+  7  HPFS/NTFS/exFAT
      /dev/sda4    117731326    158689279   20478977   5  Extended
      /dev/sda5    137263104    141260799    1998848  82  Linux swap / Solaris
      /dev/sda6    141262848    158689279    8713216  83  Linux
      /dev/sda7    117731328    137263103    9765888  83  Linux
      Partition table entries are not in disk order

      bsienn@bsienn-desktop:~$ cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda7 during installation
      UUID=69c22e73-9f64-4b48-b854-7b121642cd5d / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=8768800f-59e1-41a2-9092-c0a8cb60dabf none swap sw 0 0

    Desired effect: Picture link
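
    If the goal is simply "mounted under /media/bsienn/... the way Nautilus does it", calling the newer udisks2 front-end at login should give exactly that, since udisks2 mounts block devices under /media/<user>/<label> by default. A minimal sketch (device node taken from the blkid output above; assuming Ubuntu 13.10's udisksctl is available):

      udisksctl mount -b /dev/sda3    # should end up at /media/bsienn/Data

    Putting that one line in Startup Applications (or a small login script) avoids editing fstab at all.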

    Read the article

  • Drupal migration failed

    - by Marco
    First of all, I'm new to Drupal and the work I have to do is somewhat over my head. My old colleague (the webmaster) had a server with a multisite Drupal 6 installation. The sites and their directories were (e.g.):

      b.a.mycompany.com    /drupal_install_dir/sites/b.a.mycompany.com
      c.a.mycompany.com    /drupal_install_dir/sites/c.a.mycompany.com
      d.a.mycompany.com    /drupal_install_dir/sites/d.a.mycompany.com

    Unluckily my colleague moved on and the server HDDs aren't in my hands: all I have is a backup of /drupal_install_dir and three SQL dumps (one for each site). I have to restore the three sites, but changing them to

      z.mycompany.com/b
      z.mycompany.com/c
      z.mycompany.com/d

    Being a sysadmin, I extracted the tar.gz backup file under wwwroot (let's call the full path to the extracted directory /new_install_dir), restored the three databases, and created the MySQL users and gave them the correct GRANTs on their databases. Then (trying to restore at least the first site) I changed /new_install_dir/sites/settings.php, putting in the correct database connection data and the new base path. But there is no way I can see my new site; it simply doesn't work. Watching /var/log/apache2/error.log I saw Drupal searching for the main Drupal database, so I created that DB too, setting its user and grants, but the dump file is empty. Well, now I can run something like install.php or update.php, but my site is not shown. Is there something I can do? Do I have to take another approach? Consider that I searched the web, but I wasn't able to find a guide that helps with my problem. Ah, I forgot: before producing the backup, my colleague put the site into maintenance mode. When I try to open z.mycompany.com/?q=user (trying to log in) nothing happens. I'm really stuck...
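
    One detail that may explain the blank site: in a Drupal 6 multisite, a site served from a sub-path is looked up by a directory named <host>.<path> under sites/, with sites/default/settings.php as the last fallback; a file at sites/settings.php itself is never consulted. So, if I'm reading the multisite rules correctly, the restored layout would need to look roughly like the sketch below (directory names just follow the new URLs), each settings.php carrying its own $db_url for the matching restored database:

      /new_install_dir/sites/z.mycompany.com.b/settings.php   # serves z.mycompany.com/b
      /new_install_dir/sites/z.mycompany.com.c/settings.php   # serves z.mycompany.com/c
      /new_install_dir/sites/z.mycompany.com.d/settings.php   # serves z.mycompany.com/d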

    Read the article

  • How to set individual NTFS partitions permissions behaviour for each user account?

    - by ryniek
    I have two NTFS partitions (DOWNLOADS for downloaded files and VM for my VirtualBox .vdi file) for which I must have full permissions on my everyday account. They should also auto-mount when I log in to that account. But I've also set up a Guest account for guests. For the Guest account, I want to make the VM partition fully disabled and invisible (and thus it mustn't auto-mount), while the DOWNLOADS partition should be shared with the Guest account with limited privileges. By editing fstab I'm able to share the DOWNLOADS partition with Guest with limited privileges, but VM can only be set to limited privileges with auto-mounting disabled - so Guest can't mount it, but it can still be seen in Nautilus, plus I must always mount it manually when I log in to my everyday account. Is there some trick to make what I want work? Here's my fstab config:

      # /etc/fstab: static file system information.
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      # Entry for /dev/sda1 :
      UUID=35e66658-5ee9-40cf-bf56-8204959e3df0 / xfs defaults 0 1
      # Entry for /dev/sda2 :
      UUID=26c714cf-4236-45e7-9c46-cfcf91a215ae /home xfs defaults 0 2
      # Entry for /dev/sda5 :
      UUID=1315BCB027C44639 /media/DOWNLOADS ntfs-3g auto,uid=1000,gid=1000,umask=0022,nodev,locale=pl_PL.utf8 0 0
      # Entry for /dev/sda6 :
      UUID=60FF39EB72B72264 /media/VM ntfs noauto,uid=1000,gid=1000,umask=0077,nodev,locale=pl_PL.utf8 0 0
      # Entry for /dev/sda7 :
      UUID=c52411f5-105c-45d1-971f-412f962c350e none swap sw 0 0
      /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0

    Thanks
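
    One approach that should get most of the way there (a sketch, not tested here): keep DOWNLOADS as it is, but auto-mount VM with access restricted to your own uid via umask=0077, and move its mount point out of /media (e.g. /mnt/VM) so the file manager in the Guest session is less likely to list it as a device. The mount point below is an assumption for illustration.

      # /etc/fstab sketch: auto-mount VM, readable only by uid 1000, outside /media
      UUID=60FF39EB72B72264 /mnt/VM ntfs-3g auto,uid=1000,gid=1000,umask=0077,nodev,locale=pl_PL.utf8 0 0

    With umask=0077 the Guest account cannot read or enter the mount even if it finds the path, and auto means it is already mounted when you log in to your everyday account.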

    Read the article

  • Installation on SSD with Windows preinstalled

    - by ebbot
    I bought a laptop with one of these fancy SSD drives, the fancy new UEFI, and so on. At first I figured: Windows out, Ubuntu in, but after doing three DOA returns on three laptops in one day I realized that keeping Windows could come in handy. So dual boot it is. And this is what I've got:

      Disk 1 - 500 GB HD
        300 MB    - Windows only says "Healthy"; I don't know what it's for.
        600 MB    - "Healthy (EFI partition)"
        186.30 GB - NTFS "OS (C:)", "Healthy (Boot, Page File, Crash Dump, Primary Partition)"
        258.45 GB - NTFS "Data (D:)", "Healthy"
        20.00 GB  - "Healthy (Recovery Partition)"

      Disk 2 - 24 GB SSD
        4.00 GB   - "Healthy (OEM Partition)"
        18.36 GB  - "Healthy (Primary Partition)"

    So I'm not sure what the first partition on each drive does (the 300 MB one on the HD and the OEM partition on the SSD), nor do I know what Data (D:) is for. I think the 2nd partition on the SSD is for some Windows speed-up. I'm debating whether I should shrink the OS (C:) drive to around 120 GB or so, clear Data (D:), and use the whole SSD for Ubuntu. That would leave me 24 GB for e.g. / on the SSD and some 320 GB on the HD for /home and swap. Is this a reasonable setup? Do I need to configure fstab for the SSD differently than for a HD?
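
    On the last question: an ext4 root on an SSD normally only needs a couple of extra mount options compared to a spinning disk; a sketch (the UUID is a placeholder, and noatime/discard are the options usually suggested for SSDs):

      # /etc/fstab: SSD root (placeholder UUID)
      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  noatime,discard,errors=remount-ro  0  1
      # /home and swap on the 500 GB HD can keep the installer's defaults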

    Read the article

  • How do I develop database-utilizing application in an agile/test-driven-development way?

    - by user39019
    I want to add databases (traditional client/server RDBMSs like MySQL/PostgreSQL, as opposed to NoSQL or embedded databases) to my toolbox as a developer. I've been using SQLite for simpler projects with only one client, but now I want to do more complicated things (i.e., db-backed web development). I usually like following agile and/or test-driven-development principles. I generally code in Perl or Python. Questions:

    1. How do I test my code such that each run of the test suite starts with a 'pristine' state? Do I run a separate instance of the database server for every test? Do I use a temporary database?

    2. How do I design my tables/schema so that it is flexible with respect to changing requirements? Do I start with an ORM for my language, or do I stick to manually coding SQL? One thing I don't find appealing is having to change more than one thing (say, the CREATE TABLE statement and the associated CRUD statements) for one change, because that's error-prone. On the other hand, I expect ORMs to be a lot slower and harder to debug than raw SQL.

    3. What is the general strategy for migrating data between one version of the program and a newer one? Do I carefully write ALTER TABLE statements between each version, or do I dump the data and import it fresh in the new version?
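
    For the first question, one common pattern is a throwaway server instance per test run: create it in a temporary directory, point the suite at it, and delete it afterwards. A sketch using PostgreSQL's standard tools (directory and port are arbitrary choices):

      initdb -D /tmp/test_pg                       # fresh, empty cluster
      pg_ctl -D /tmp/test_pg -o "-p 5433" start    # run it on a side port
      createdb -p 5433 app_test                    # pristine database for this run
      # ... run the test suite against port 5433 ...
      pg_ctl -D /tmp/test_pg stop
      rm -rf /tmp/test_pg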

    Read the article

  • Unable to mount location ubuntu 12.10

    - by Rajesh
    I'm new to Ubuntu. I installed Ubuntu 12.10, replacing Windows. Now I'm getting an "Unable to mount location" error while opening the drive.

      $ cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=5fa63194-c19e-4117-95c6-679eb6453d3b / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=70f1ec8d-aa45-4de7-a206-747dccd2472b none swap sw 0 0

      $ sudo fdisk -l
      Disk /dev/sda: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0001f10f
      Device Boot      Start        End     Blocks  Id  System
      /dev/sda1   *     2048  970561535  485279744  83  Linux
      /dev/sda2    970563582  976771071    3103745   5  Extended
      /dev/sda5    970563584  976771071    3103744  82  Linux swap / Solaris
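
    Since fdisk shows only the Linux root and swap, the "drive" being opened is probably not a partition this install knows about. A generic first diagnostic step is to trigger the mount again from the file manager and then check:

      dmesg | tail -n 20   # the kernel's reason for the failed mount
      sudo blkid           # filesystems the kernel can actually see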

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcon's I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, security scanner, app permissions, and screened Android app market. The Android permission checker has fine-grain resource control, policy enforcement. Android static analysis also includes a static analysis app checker (bouncer), and a vulnerablity checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (the're not aware, not paranoid). All they want is functionality, extensibility, mobility Android had no "proper" encryption before Android 3.0 No built-in protection against social engineering and web tricks Alternative Android app markets are unsafe. Simply visiting some markets can infect Android Aditya classified Android Malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or kernel Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors What are the threats from Android malware? These incude leak info (contacts), banking fraud, corporate network attacks, malware advertising, malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions. Android malware is frequently "masquerated". That is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there's not many around. What they do is hijack the Android init framework—alteering system programs and daemons, then deletes itself. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed native code execution permission sandbox — all or none alternate market places no robust Android malware detection at network level delayed patch process Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). "Weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to weird location, does SQL injection, corrupts the heap. Bx then talked about using ELF metadata as (an uintended) "weird machine". Some ELF background: A compiler takes source code and generates a ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory) For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: Add, move, jump if not 0 (jnz) Think of symbol table entries as "registers" symbol table value is "contents" immediate values are constants direct values are addresses (e.g., 0xdeadbeef) move instruction: is a relocation table entry add instruction: relocation table "addend" entry jnz instruction: takes multiple relocation table entries The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on stack at "end" and uses IFUNC table entries (containing function pointer address). The ELF weird machine, called "Brainfu*k" (BF) has: 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print. Three registers - 3 registers Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" data against other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still see some benefit.

Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM: a hardware security device to store hashes and hardware-protected keys. "Secure boot" can control at the firmware level what boot images can boot. "Measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers). "Remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines. UEFI Weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: assume user is an ally trust TPM implicitly, and attached to computer hibernate file is unprotected (disk encryption protects against this) protection migrating from hardware to firmware delays in patching and whitelist updates will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support) You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not solution (security not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitive risk assessment is a myth (e.g. AV*EF-SLE). Learn different Business Units, legal/regulatory obligations, learn the business and where the money is made, verify company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on same system. Assume host/client compromised and use app-level security control. Network security VLAN != segregated because there's too many workarounds. Use explicit firwall rules, active and passive network monitoring (snort is free), disallow end user access to production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness The reality is users will always click everything. Build real awareness, not compliance driven checkbox, and have it integrated into the culture. 
Pen test by crowd sourcing—test with logging. COSSP (http://www.cossp.org/) - Comprehensive Open Source Security Project.

What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists are becoming extremely aware of hackers through congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks). Journalists want: leads, protecting ourselves, our sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility), for our purpose, mostly refers to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare. Executable format, embedded Flash, JavaScript, etc. 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help though, except for "Robust", but here are some gems:

* all information should be parsable (paraphrasing)
* if it is not parsable, it cannot be converted to alternate formats
* maximize compatibility in new document formats

Executable webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: accessibility and security problems come from the same source. Expose the "better user experience" myth. Keep your corner of the Internet parsable. Remember the "Halting Problem"—recognize false solutions (checking and verifying tools).

Stop Patching, for Stronger PCI Compliance
Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open). Patching myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: the vendor decides what's critical (also PCI §6.1); but §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes; but PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead, use sane patching and focus on what's really important. From NIST 800-40: Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank). Decide: on action and timeline. Test: pre-test patches (stability, functionality, rollback) for change control. Install: notify, change control, tickets.

McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that removal of the trustmark acts as a flag that you're vulnerable. It is easy to spot a status change by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.

    Read the article

  • JRockit R28/JRockit Mission Control 4.0 is out!

    - by Marcus Hirt
    The next major release of JRockit is finally out! Here are some highlights: Includes the all new JRockit Flight Recorder – supersedes the old JRockit Runtime Analyser. The new flight recorder is inspired by the “black box” in airplanes. It uses a highly efficient recording engine and thread local buffers to capture data about the runtime and the application running in the JVM. It can be configured to always be on, so that whenever anything “interesting” happens, data can be dumped for some time back. Think of it as your own personal profiling time machine. Automatic shortest path calculation in Memleak – no longer any need for running around in circles when trying to find your way back to a thread root from an instance. Memleak can now show class loader related information and split graphs on a per class loader basis. More easily configured JMX agent – default port for both RMI Registry and RMI Server can be configured, and is by default the same, allowing easier configuration of firewalls. Up to 64 GB (was 4GB) compressed references. Per thread allocation profiling in the Management Console. Native Memory Tracking – it is now possible to track native memory allocations with very high resolution. The information can either be accessed using JRCMD, or the dedicated Native Memory Tracking experimental plug-in for the Management Console (alas only available for the upcoming 4.0.1 release). JRockit can now produce heap dumps in HPROF format. Cooperative suspension – JRockit is no longer using system signals for stopping threads, which could lead to hangs if signals were lost or blocked (for example bad NFS shares). Now threads check periodically to see if they are suspended. VPAT/Section 508 compliant JRMC – greatly improved keyboard navigation and screen reader support. See New and Noteworthy for more information. JRockit Mission Control 4.0.0 can be downloaded from here: http://www.oracle.com/technology/software/products/jrockit/index.html <shameless ad> There is even a book to go with JRMC 4.0.0/JRockit R28! http://www.packtpub.com/oracle-jrockit-the-definitive-guide/book/ </shameless ad>
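
    On the HPROF point: if I remember the R28 tooling correctly, a heap dump can also be triggered from the command line via JRCMD; the pid and file name below are placeholders, so treat this as a sketch rather than a reference.

      jrcmd <pid> hprofdump filename=/tmp/myheap.hprof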

    Read the article

  • mysql completely removing

    - by Dmitry Teplyakov
    I broke my MySQL install and now I want to completely reinstall it. I tried:

      $ sudo apt-get install --reinstall mysql-server
      $ sudo apt-get remove --purge mysql-client mysql-server

    But I always see a popup proposing to change the root password; I change it and get an error that it can't be changed.

      $ sudo apt-get remove --purge mysql-client mysql-server
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Package mysql-client is not installed, so not removed
      Package mysql-server is not installed, so not removed
      The following packages were automatically installed and are no longer required:
        libmygpo-qt1 libqtscript4-network libqtscript4-gui libtag-extras1 libqtscript4-sql libqtscript4-xml
        amarok-utils amarok-common libqtscript4-uitools liblastfm0 libloudmouth1-0 libqtscript4-core
      Use 'apt-get autoremove' to remove them.
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      1 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Setting up mysql-server-5.5 (5.5.28-0ubuntu0.12.04.2) ...
      121114 19:04:03 [Note] Plugin 'FEDERATED' is disabled.
      121114 19:04:03 InnoDB: The InnoDB memory heap is disabled
      121114 19:04:03 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      121114 19:04:03 InnoDB: Compressed tables use zlib 1.2.3.4
      121114 19:04:03 InnoDB: Initializing buffer pool, size = 128.0M
      121114 19:04:03 InnoDB: Completed initialization of buffer pool
      InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
      InnoDB: 0 pages (rounded down to MB) than specified in the .cnf file:
      InnoDB: initial 640 pages, max 0 (relevant if non-zero) pages!
      121114 19:04:03 InnoDB: Could not open or create data files.
      121114 19:04:03 InnoDB: If you tried to add new data files, and it failed here,
      121114 19:04:03 InnoDB: you should now edit innodb_data_file_path in my.cnf back
      121114 19:04:03 InnoDB: to what it was, and remove the new ibdata files InnoDB created
      121114 19:04:03 InnoDB: in this failed attempt. InnoDB only wrote those files full of
      121114 19:04:03 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
      121114 19:04:03 InnoDB: remove old data files which contain your precious data!
      121114 19:04:03 [ERROR] Plugin 'InnoDB' init function returned error.
      121114 19:04:03 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      121114 19:04:03 [ERROR] Unknown/unsupported storage engine: InnoDB
      121114 19:04:03 [ERROR] Aborting
      121114 19:04:03 [Note] /usr/sbin/mysqld: Shutdown complete
      start: Job failed to start
      invoke-rc.d: initscript mysql, action "start" failed.
      dpkg: error processing mysql-server-5.5 (--configure):
        subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
        mysql-server-5.5
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    The good news for me is that I don't have any important databases, but...
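
    Since there are no databases worth keeping, the usual way to get back to a clean slate (destructive: it deletes all MySQL data and configuration; package names as on Ubuntu 12.04) is:

      sudo apt-get purge mysql-server mysql-server-5.5 mysql-client mysql-common
      sudo rm -rf /var/lib/mysql /etc/mysql
      sudo apt-get install mysql-server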

    Read the article

  • SQL SERVER – Identify Numbers of Non Clustered Index on Tables for Entire Database

    - by pinaldave
    Here is a script which will give you the number of non-clustered indexes on every table in the database.

      SELECT COUNT(i.TYPE) NoOfIndex,
             [schema_name] = s.name,
             table_name = o.name
      FROM sys.indexes i
      INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
      INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
      WHERE o.TYPE IN ('U')
        AND i.TYPE = 2
      GROUP BY s.name, o.name
      ORDER BY schema_name, table_name

    Here is the small story behind why this script was needed. I recently went to meet my friend at his office, and he introduced me to a colleague as someone who is an expert in SQL Server indexing. I politely said I am still learning about indexing and have a long way to go. My friend's colleague right away said he had an index-related suggestion for me. According to him, he was looking for a script which would count all the non-clustered indexes on all the tables in a database, and he was not able to find one on SQLAuthority.com. I was a bit surprised, as I really do not remember all the details of what I have written so far. I quickly pulled up my phone and tried to look for the script with my custom search engine, and he was correct: I never wrote a script which counts all the non-clustered indexes on the tables in a whole database.

    Excessive indexing is not recommended in general. If you have too many indexes, it will definitely negatively affect your performance. The above query will quickly give you the number of indexes on the tables in your entire database. You can glance at the results and use the numbers as a reference. Please note that the number of indexes is not an indication of bad indexes. There is a lot of wisdom I could write here, but that is not the scope of this blog post. There are many different rules with indexes and many different scenarios. For example, a table which is a heap (no clustered index) is often not recommended for an OLTP workload (here is the blog post to identify them), drop unused indexes with careful observation (here is the script for it), and identify missing indexes and, after careful testing, add them (here is the script for it). Even though I have given a few links here, it is just the tip of the iceberg. If you follow only the above four pieces of advice, your ship may still sink. Those who want to learn the subject in depth can watch the videos here after logging in. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 21 (sys.dm_db_partition_stats)

    - by Tamarick Hill
    The sys.dm_db_partition_stats DMV returns page count and row count information for each table or index within your database. Lets have a quick look at this DMV so we can review some of the results. **NOTE: I am going to create an ‘ObjectName’ column in our result set so that we can more easily identify tables. SELECT object_name(object_id) ObjectName, * FROM sys.dm_db_partition_stats As stated above, the first column in our result set is an Object name based on the object_id column of this result set. The partition_id column refers to the partition_id of the index in question. Each index will have at least 1 unique partition_id and will have more depending on if the object has been partitioned. The index_id column relates back to the sys.indexes table and uniquely identifies an index on a given object. A value of 0 (zero) in this column would indicate the object is a HEAP and a value of 1 (one) would signify the Clustered Index. Next is the partition_number which would signify the number of the partition for a particular object_id. Since none of my tables in my result set have been partitioned, they all display 1 for the partition_number. Next we have the in_row_data_page_count which tells us the number of data pages used to store in-row data for a given index. The in_row_used_page_count is the number of pages used to store and manage the in-row data. If we look at the first row in the result set, we will see we have 700 for this column and 680 for the previous. This means that just to manage the data (not store it) is requiring 20 pages. The next column in_row_reserved_page_count is how many pages have been reserved, regardless if they are being used or not. The next 2 columns are used for storing LOB (Large Object) data which could be text, image, varchar(max), or varbinary(max) columns. The next two columns, row_overflow, represent pages used for data that exceed the 8,060 byte row size limit for the in-row data pages. The next columns used_page_count and reserved_page_count represent the sum of the in_row, lob, and row_overflow columns discussed earlier. Lastly is a row_count column which displays the number of rows that are in a particular index. This DMV is a very powerful resource for identifying page and row count information. By knowing the page counts for indexes within your database, you are able to easily calculate the size of indexes. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms187737.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Mysql 5.5 server not working

    - by rajesh
    I had Ubuntu 14.04 installed on my system. I recently updated Ubuntu, and now MySQL does not start; Workbench says that the MySQL server has been stopped. When I try to start it, it gives me the following error:

      2014-08-12 23:02:04 - Checking server status...
      2014-08-12 23:02:04 - Trying to connect to MySQL...
      2014-08-12 23:02:04 - Can't connect to MySQL server on '127.0.0.1' (111) (2003)
      2014-08-12 23:02:04 - Assuming server is not running
      2014-08-12 23:02:04 - Server start done.
      2014-08-12 23:02:04 - Checking server status...
      2014-08-12 23:02:04 - Trying to connect to MySQL...
      2014-08-12 23:02:04 - Can't connect to MySQL server on '127.0.0.1' (111) (2003)
      2014-08-12 23:02:04 - Assuming server is not running

    Also, when I try to log in using the terminal (mysql -u root -p <password>) I get the following error:

      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I have also tried to reinstall the MySQL server, but I am unable to do so. It gives me the following:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      mysql-server-5.5 is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.

    I have data I have not taken a backup of, as I am unable to log into the server. I am a newbie; please help me resolve this issue without losing my data. Awaiting your earliest response. Below is the error message from cat /var/log/mysql/error.log:

      140813 21:22:50 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
      140813 21:22:50 [Note] Plugin 'FEDERATED' is disabled.
      140813 21:22:50 InnoDB: The InnoDB memory heap is disabled
      140813 21:22:50 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      140813 21:22:50 InnoDB: Compressed tables use zlib 1.2.8
      140813 21:22:50 InnoDB: Using Linux native AIO
      140813 21:22:50 InnoDB: Initializing buffer pool, size = 128.0M
      140813 21:22:50 InnoDB: Completed initialization of buffer pool
      140813 21:22:50 InnoDB: highest supported file format is Barracuda.
      140813 21:22:50 InnoDB: Waiting for the background threads to start
      140813 21:22:51 InnoDB: 5.5.38 started; log sequence number 80726593570
      140813 21:22:51 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
      140813 21:22:51 [Note] - '127.0.0.1' resolves to '127.0.0.1';
      140813 21:22:51 [Note] Server socket created on IP: '127.0.0.1'.
      140813 21:22:51 [ERROR] Fatal error: Can't open and lock privilege tables: Incorrect file format 'user'
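
    The final log line points at the mysql privilege tables being in an older format than the upgraded server expects. One commonly suggested recovery path (a sketch, not guaranteed; copy /var/lib/mysql somewhere safe first) is to start the server without the grant tables and let mysql_upgrade repair them:

      sudo cp -a /var/lib/mysql /var/lib/mysql.bak   # backup of the data directory first
      sudo mysqld_safe --skip-grant-tables &
      mysql_upgrade -u root
      sudo service mysql restart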

    Read the article

  • Can one draw a cube using different method/drawing mode?

    - by den-javamaniac
    Hi. I've just started learning gamedev (in particular Android EGL-based) and have run across code from Pro Android Games 2 that looks as follows:

      /*
       * Copyright (C) 2007 Google Inc.
       *
       * Licensed under the Apache License, Version 2.0 (the "License");
       * you may not use this file except in compliance with the License.
       * You may obtain a copy of the License at
       *
       *     http://www.apache.org/licenses/LICENSE-2.0
       *
       * Unless required by applicable law or agreed to in writing, software
       * distributed under the License is distributed on an "AS IS" BASIS,
       * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
       * See the License for the specific language governing permissions and
       * limitations under the License.
       */
      package opengl.scenes.cubes;

      import java.nio.ByteBuffer;
      import java.nio.ByteOrder;
      import java.nio.IntBuffer;
      import javax.microedition.khronos.opengles.GL10;

      public class Cube {
          public Cube() {
              int one = 0x10000;
              int vertices[] = {
                  -one, -one, -one,
                   one, -one, -one,
                   one,  one, -one,
                  -one,  one, -one,
                  -one, -one,  one,
                   one, -one,  one,
                   one,  one,  one,
                  -one,  one,  one,
              };
              int colors[] = {
                  0,   0,   0,   one,
                  one, 0,   0,   one,
                  one, one, 0,   one,
                  0,   one, 0,   one,
                  0,   0,   one, one,
                  one, 0,   one, one,
                  one, one, one, one,
                  0,   one, one, one,
              };
              byte indices[] = {
                  0, 4, 5,    0, 5, 1,
                  1, 5, 6,    1, 6, 2,
                  2, 6, 7,    2, 7, 3,
                  3, 7, 4,    3, 4, 0,
                  4, 7, 6,    4, 6, 5,
                  3, 0, 1,    3, 1, 2
              };
              // Buffers to be passed to gl*Pointer() functions
              // must be direct, i.e., they must be placed on the
              // native heap where the garbage collector cannot
              // move them.
              //
              // Buffers with multi-byte datatypes (e.g., short, int, float)
              // must have their byte order set to native order
              ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
              vbb.order(ByteOrder.nativeOrder());
              mVertexBuffer = vbb.asIntBuffer();
              mVertexBuffer.put(vertices);
              mVertexBuffer.position(0);
              ByteBuffer cbb = ByteBuffer.allocateDirect(colors.length * 4);
              cbb.order(ByteOrder.nativeOrder());
              mColorBuffer = cbb.asIntBuffer();
              mColorBuffer.put(colors);
              mColorBuffer.position(0);
              mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
              mIndexBuffer.put(indices);
              mIndexBuffer.position(0);
          }

          public void draw(GL10 gl) {
              gl.glFrontFace(GL10.GL_CW);
              gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer);
              gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer);
              gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
          }

          private IntBuffer mVertexBuffer;
          private IntBuffer mColorBuffer;
          private ByteBuffer mIndexBuffer;
      }

    So it suggests drawing a cube using triangles. My question is: can I draw the same cube using GL_POLYGON? If so, isn't that an easier/more understandable way to do things?

    Read the article

  • Aptronyms: fitting the profession to the name

    - by Tony Davis
    Writing a recent piece on the pains of index fragmentation, I found myself wondering why, in SQL Server, you can’t set the equivalent of a fill factor, on a heap table. I scratched my head…who might know? Phil Factor, of course! I approached him with a due sense of optimism only to find that not only did he not know, he also didn’t seem to care much either. I skulked off thinking how this may be the final nail in the coffin of nominative determinism. I’ve always wondered if there was anything in it, though. If your surname is Plumb or Leeks, is there even a tiny, extra percentage chance that you’ll end up fitting bathrooms? Some examples are quite common. I’m sure we’ve all met teachers called English or French, or lawyers called Judge or Laws. I’ve also known a Doctor called Coffin, a Urologist called Waterfall, and a Dentist called Dentith. Two personal favorites are Wolfgang Wolf who ended up managing the German Soccer team, Wolfsburg, and Edmund Akenhead, a Crossword Editor for The Times newspaper. Having forgiven Phil his earlier offhandedness, I asked him for if he knew of any notable examples. He had met the famous Dr. Batty and Dr. Nutter, both Psychiatrists, knew undertakers called Death and Stiff, had read a book by Frederick Page-Turner, and suppressed a giggle at the idea of a feminist called Gurley-Brown. He even managed to better my Urologist example, citing the article on incontinence in the British Journal of Urology (vol.49, pp.173-176, 1977) by A. J. Splatt and D. Weedon. What, however, if you were keen to gently nudge your child down the path to a career in IT? What name would you choose? Subtlety probably doesn’t really work, although in a recent interview, Rodney Landrum did congratulate PowerShell MVP Max Trinidad on being named after a SQL function. Grant “The Memory” Fritchey (OK, I made up that nickname) doesn’t do badly either. Some surnames, seem to offer a natural head start, although I know of no members of the Page-Reid clan in the profession. There are certainly families with the Table surname, although sadly, Little Bobby Tables was merely a legend by xkcd. A member of the well-known Key family would need to name their son Primary, or maybe live abroad, to make their mark. Nominate your examples of people seemingly destined, by name, for their chosen profession (extra points for IT). The best three will receive a prize. Cheers, Tony.

    Read the article

  • Subterranean IL: Volatile

    - by Simon Cooper
    This time, we'll be having a look at the volatile. prefix instruction, and one of the differences between volatile in IL and C#.

    The volatile. prefix

    volatile is a tricky one, as there are varying levels of documentation on it. From what I can see, it has two effects:

    1. It prevents caching of the load or store value; rather than reading or writing to a cached version of the memory location (say, the processor register or cache), it forces the value to be loaded or stored at the 'actual' memory location, so it is then immediately visible to other threads.
    2. It forces a memory barrier at the prefixed instruction. This ensures instructions don't get re-ordered around the volatile instruction. This is slightly more complicated than it first seems, and only seems to matter on certain architectures. For more details, Joe Duffy has a blog post going into the details.

    For this post, I'll be concentrating on the first aspect of volatile.

    Caching field accesses

    To demonstrate this, I created a simple multithreaded IL program. It boils down to the following code:

      .class public Holder {
          .field public static class Holder holder
          .field public bool stop

          .method public static specialname void .cctor() {
              newobj instance void Holder::.ctor()
              stsfld class Holder Holder::holder
              ret
          }
      }

      .method private static void Main() {
          .entrypoint
          // Thread t = new Thread(new ThreadStart(DoWork))
          // t.Start()
          // Thread.Sleep(2000)
          // Console.WriteLine("Stopping thread...")
          ldsfld class Holder Holder::holder
          ldc.i4.1
          stfld bool Holder::stop
          call instance void [mscorlib]System.Threading.Thread::Join()
          ret
      }

      .method private static void DoWork() {
          ldsfld class Holder Holder::holder
          // while (!Holder.holder.stop) {}
      DoWork:
          dup
          ldfld bool Holder::stop
          brfalse DoWork
          pop
          ret
      }

    If you compile and run this code, you'll find that the call to Thread.Join() never returns - the DoWork spinlock is reading a cached version of Holder.stop, which is never being updated with the new value set by the Main method. Adding volatile to the ldfld fixes this:

      dup
      volatile. ldfld bool Holder::stop
      brfalse DoWork

    The volatile ldfld forces the field access to read directly from heap memory, which is then updated by the main thread, rather than using a cached copy.

    volatile in C#

    This highlights one of the differences between IL and C#. In IL, volatile only applies to the prefixed instruction, whereas in C#, volatile is specified on a field to indicate that all accesses to that field should be volatile (interestingly, there's no mention of the 'no caching' aspect of volatile in the C# spec; it only focuses on the memory barrier aspect). Furthermore, this information needs to be stored within the assembly somehow, as such a field might be accessed directly from outside the assembly, but there's no concept of a 'volatile field' in IL! How this information is stored with the field will be the subject of my next post.

    Read the article

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters. https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite, below. Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are anticipated to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as the file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts. However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers allows reflectively accessed data to be optimized from a JNI call into Java bytecodes, which are then amenable to hotspot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process. SOA Suite and WebLogic are both very large users of reflection code. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try and keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way as IBM does it. This results in different behaviour characteristics on IBM vs Oracle JVMs.
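
    As an illustration of where such a flag might go (this sketch is mine, not part of the note above, and the exact variable depends on how your domain's startup scripts are organised), a WebLogic-style setDomainEnv.sh or server start script could append the property to the existing Java options:

        # Illustrative sketch only: disable reflection inflation on IBM JVMs (AIX).
        # Verify which variable your own startup scripts actually honour
        # (JAVA_OPTIONS, EXTRA_JAVA_PROPERTIES, or the managed server's Arguments field).
        JAVA_OPTIONS="${JAVA_OPTIONS} -Dsun.reflect.inflationThreshold=0"
        export JAVA_OPTIONS

    Whichever mechanism you use, the property needs to end up on the Java command line of every affected managed server.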

    Read the article

  • Why enumerator structs are a really bad idea (redux)

    - by Simon Cooper
    My previous blog post went into some detail as to why calling MoveNext on a BCL generic collection enumerator didn't quite do what you thought it would. This post covers the Reset method. To recap, here's the simple wrapper around a linked list enumerator struct from my previous post (minus the readonly on the enumerator variable):

        sealed class EnumeratorWrapper : IEnumerator<int>
        {
            private LinkedList<int>.Enumerator m_Enumerator;

            public EnumeratorWrapper(LinkedList<int> linkedList)
            {
                m_Enumerator = linkedList.GetEnumerator();
            }

            public int Current { get { return m_Enumerator.Current; } }
            object System.Collections.IEnumerator.Current { get { return Current; } }

            public bool MoveNext() { return m_Enumerator.MoveNext(); }
            public void Reset() { ((System.Collections.IEnumerator)m_Enumerator).Reset(); }
            public void Dispose() { m_Enumerator.Dispose(); }
        }

    If you have a look at the Reset method, you'll notice I'm having to cast to IEnumerator to be able to call Reset on m_Enumerator. This is because the implementation of LinkedList<int>.Enumerator.Reset, and indeed of all the other Reset methods on the BCL generic collection enumerators, is an explicit interface implementation. However, IEnumerator is a reference type. LinkedList<int>.Enumerator is a value type. That means, in order to call the Reset method at all, the enumerator has to be boxed. And the IL confirms this:

        .method public hidebysig newslot virtual final instance void Reset() cil managed
        {
            .maxstack 8
            L_0000: nop
            L_0001: ldarg.0
            L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator<int32> EnumeratorWrapper::m_Enumerator
            L_0007: box [System]System.Collections.Generic.LinkedList`1/Enumerator<int32>
            L_000c: callvirt instance void [mscorlib]System.Collections.IEnumerator::Reset()
            L_0011: nop
            L_0012: ret
        }

    On line 0007, we're doing a box operation, which copies the enumerator to a reference object on the heap, and then on line 000c we call Reset on this boxed object. So m_Enumerator in the wrapper class is not modified by the call to Reset. And this is the only way to call the Reset method on this variable (without using reflection). Therefore, the only way these collection enumerator structs can be used safely is to store them as boxed IEnumerator<T> references, and not to use them as value types at all.
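
    Following that conclusion, here is a sketch (my own illustration, not code from the original post) of the safer shape of the wrapper: box the enumerator once, up front, by storing it as an IEnumerator<int>, so that MoveNext, Current and Reset all operate on the same boxed instance.

        using System.Collections.Generic;

        sealed class BoxedEnumeratorWrapper : IEnumerator<int>
        {
            // Assigning the struct enumerator to an interface-typed field boxes it once;
            // every call below then goes to that single heap object.
            private readonly IEnumerator<int> m_Enumerator;

            public BoxedEnumeratorWrapper(LinkedList<int> linkedList)
            {
                m_Enumerator = linkedList.GetEnumerator();
            }

            public int Current { get { return m_Enumerator.Current; } }
            object System.Collections.IEnumerator.Current { get { return Current; } }

            public bool MoveNext() { return m_Enumerator.MoveNext(); }
            public void Reset() { m_Enumerator.Reset(); }      // acts on the stored, boxed enumerator
            public void Dispose() { m_Enumerator.Dispose(); }
        }

    The cost is a single boxing allocation per wrapper, which is usually a fair trade for not having enumerator state silently copied away from under you.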

    Read the article

  • Should we persist with an employee still writing bad code after many years?

    - by user94986
    I've been assigned the task of managing developers for a well-established company. They have a single developer who specialises in all their C++ coding (since forever), but the quality of the work is abysmal. Code reviews and testing have revealed many problems, one of the worst being memory leaks. The developer has never tested his code for leaks, and I discovered that the applications could leak many MBs with only a minute of use. Users were reporting huge slowdowns, and his take was, "it's nothing to do with me - if they quit and restart, it's all good again." I've given him tools to detect and trace the leaks, and sat down with him for many hours to demonstrate how the tools are used, where the problems occur, and what to do to fix them. We're 6 months down the track, and I assigned him to write a new module. I reviewed it before it was integrated into our larger code base, and was dismayed to discover the same bad coding as before. The part that I find incomprehensible is that some of the coding is worse than amateurish. For example, he wanted a class (Foo) that could populate an object of another class (Bar). He decided that Foo would hold a reference to Bar, e.g.: class Foo { public: Foo(Bar& bar) : m_bar(bar) {} private: Bar& m_bar; }; But (for other reasons) he also needed a default constructor for Foo and, rather than question his initial design, he wrote this gem: Foo::Foo() : m_bar(*(new Bar)) {} So every time the default constructor is called, a Bar is leaked. To make matters worse, Foo allocates memory from the heap for 2 other objects, but he didn't write a destructor or copy constructor. So every allocation of Foo actually leaks 3 different objects, and you can imagine what happened when a Foo was copied. And - it only gets better - he repeated the same pattern on three other classes, so it isn't a one-off slip. The whole concept is wrong on so many levels. I would feel more understanding if this came from a total novice. But this guy has been doing this for many years and has had very focussed training and advice over the past few months. I realise he has been working without mentoring or peer reviews most of that time, but I'm beginning to feel he can't change. So my question is, would you persist with someone who is writing such obviously bad code?
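
    For readers wondering what a leak-free version of that particular pattern could look like, here is a brief sketch (my own illustration, not the developer's code, assuming at least C++11 is available; the class names are just the ones from the question): when a default-constructed Foo needs its own Bar, give Foo an owning smart pointer instead of leaking a raw new.

        #include <memory>

        class Bar { /* ... */ };

        class Foo {
        public:
            // Caller-supplied Bar: Foo only refers to it and does not own it.
            explicit Foo(Bar& bar) : m_owned(), m_bar(&bar) {}

            // Default case: Foo owns its own Bar. unique_ptr deletes it automatically,
            // so nothing leaks and no hand-written destructor is required.
            Foo() : m_owned(new Bar), m_bar(m_owned.get()) {}

        private:
            std::unique_ptr<Bar> m_owned;  // set only when Foo owns the Bar
            Bar* m_bar;                    // always points at the Bar in use
        };

    A side benefit is that unique_ptr makes Foo non-copyable by default, so the accidental copies that multiplied the leaks in the original code simply fail to compile until proper ownership semantics are defined.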

    Read the article

  • Cold Start

    - by antony.reynolds
    Well we had snow drifts 3ft deep on Saturday so it must be spring time.  In preparation for Spring we decided to move the lawn tractor.  Of course after sitting in the garage all winter it refused to start.  I then come into the office and need to start my 11g SOA Suite installation.  I thought about this and decided my tractor might be cranky but at least I can script the startup of my SOA Suite 11g installation. So with this in mind I created 6 scripts.  I created them for Linux but they should translate to Windows without too many problems.  This is left as an exercise for the reader; note that you will have to hardcode more than I did in the Linux scripts and create separate script files for the sqlplus and WLST sections. Order to start things: I believe there should be order in all things, especially starting the SOA Suite.  So here is my preferred order. Start Database: this is needed by EM and the rest of SOA Suite, so it is best to start it before the Admin Server and managed servers. Start Node Manager on all machines: this is needed if you want the scripts to work across machines. Start Admin Server: once this is done you can, in theory, manually start the managed servers using the WebLogic console.  But then you have to wait for the console to be available.  Scripting it all is a quicker and easier way of starting. Start Managed Servers & Clusters: best to start them one per physical machine at a time to avoid undue load on the machines.  A non-clustered install will have just soa_server1 and bam_server1 by default.  Clusters will have at least SOA and BAM clusters that can be started as a group or individually.  I have provided scripts for standalone servers, but it is easy to change them to work with clusters. Starting Database: I have provided a very primitive script (available here) to start the database, the listener and the DB console.  The section highlighted in red needs to match your database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#####################" echo "# Starting Database #" echo "#####################" sqlplus / as sysdba <<-EOF startup exit EOF echo "#####################" echo "# Starting Listener #" echo "#####################" lsnrctl start echo "######################" echo "# Starting dbConsole #" echo "######################" emctl start dbconsole read -p "Hit <enter> to continue" Starting SOA Suite: my script for starting the SOA Suite (available here) breaks the task down into five sections. Setting the Environment: first set up the environment variables.  The variables highlighted in red probably need changing for your environment. #!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME Starting the Node Manager: I start the node manager with nohup to stop it exiting when the script terminates, and I redirect the standard output and standard error to a file in a logs directory. cd $DOMAIN_HOME echo "#########################" echo "# Starting Node Manager #" echo "#########################" nohup $WL_HOME/server/bin/startNodeManager.sh >logs/NodeManager.out 2>&1 & Starting the Admin Server: I had problems starting the Admin Server from Node Manager, so I decided to start it using the command line script.  
I again use nohup and redirect output. echo "#########################" echo "# Starting Admin Server #" echo "#########################" nohup ./startWebLogic.sh >logs/AdminServer.out 2>&1 & Starting the Managed Servers I then used WLST (WebLogic Scripting Tool) to start the managed servers.  First I waited for the Admin Server to come up by putting a connect command in a loop.  I could have put the WLST commands into a separate script file but I wanted to reduce the number of files I was using and so used redirected input (here syntax). $ORACLE_HOME/common/bin/wlst.sh <<-EOF import time sleep=time.sleep print "#####################################" print "# Waiting for Admin Server to Start #" print "#####################################" while True:   try:     connect(adminServerName="AdminServer")     break   except:     sleep(10) I then start the SOA server and tell WLST to wait until it is started before returning.  If starting a cluster then the start command would be modified accordingly to start the SOA cluster. print "#######################" print "# Starting SOA Server #" print "#######################" start(name="soa_server1", block="true") I then start the BAM server in the same way as the SOA server. print "#######################" print "# Starting BAM Server #" print "#######################" start(name="bam_server1", block="true") EOF Finally I let people know the servers are up and wait for input in case I am running in a separate window, in which case the result would be lost without the read command. echo "#####################" echo "# SOA Suite Started #" echo "#####################" read -p "Hit <enter> to continue" Stopping the SOA Suite My script for shutting down the SOA Suite (available here)  is basically the reverse of my startup script.  After setting the environment I connect to the Admin Server using WLST and shut down the managed servers and the admin server.  Again the script would need modifying for a cluster. Stopping the Servers If I cannot connect to the Admin Server I try to connect to the node manager, in case the Admin Server is down but the managed servers are up. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME cd $DOMAIN_HOME $MW_HOME/Oracle_SOA/common/bin/wlst.sh <<-EOF try:   print("#############################")   print("# Connecting to AdminServer #")   print("#############################")   connect(username='weblogic',password='welcome1',url='t3://localhost:7001') except:   print "#########################################"   print "#   Unable to connect to Admin Server   #"   print "# Attempting to connect to Node Manager #"   print "#########################################"   nmConnect(domainName=os.getenv("DOMAIN_NAME")) print "#######################" print "# Stopping BAM Server #" print "#######################" shutdown('bam_server1') print "#######################" print "# Stopping SOA Server #" print "#######################" shutdown('soa_server1') print "#########################" print "# Stopping Admin Server #" print "#########################" shutdown('AdminServer') disconnect() nmDisconnect() EOF Stopping the Node Manager I stopped the node manager by searching for the java node manager process using the ps command and then killing that process. echo "#########################" echo "# Stopping Node Manager #" echo "#########################" kill -9 `ps -ef | grep java | grep NodeManager |  awk '{print $2;}'` echo "#####################" echo "# SOA Suite Stopped #" echo "#####################" read -p "Hit <enter> to continue" Stopping the Database Again my script for shutting down the database is the reverse of my start script.  It is available here.  The only change needed might be to the database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "######################" echo "# Stopping dbConsole #" echo "######################" emctl stop dbconsole echo "#####################" echo "# Stopping Listener #" echo "#####################" lsnrctl stop echo "#####################" echo "# Stopping Database #" echo "#####################" sqlplus / as sysdba <<-EOF shutdown immediate exit EOF read -p "Hit <enter> to continue" Cleaning Up Cleaning SOA Suite I often run tests and want to clean up all the log files.  The following script (available here) does this for the WebLogic servers in a given domain on a machine.  After setting the domain I just remove all files under the servers logs directories.  It also cleans up the log files I created with my startup scripts.  These scripts could be enhanced to copy off the log files if you needed them but in my test environments I don’t need them and would prefer to reclaim the disk space. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME echo "##########################" echo "# Cleaning SOA Log Files #" echo "##########################" cd $DOMAIN_HOME rm -Rf logs/* servers/*/logs/* read -p "Hit <enter> to continue" Cleaning Database I also created a script to clean up the dump files of an Oracle database instance and also the EM log files (available here).  This relies on the machine name being correct as the EM log files are stored in a directory that is based on the hostname and the Oracle SID. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#############################" echo "# Cleaning Oracle Log Files #" echo "#############################" rm -Rf $ORACLE_BASE/admin/$ORACLE_SID/*dump/* rm -Rf $ORACLE_HOME/`hostname`_$ORACLE_SID/sysman/log/* read -p "Hit <enter> to continue" Summary Hope you find the above scripts useful.  They certainly stop me hanging around waiting for things to happen on my test machine and make it easy to run a test, change parameters, bounce the SOA Suite and clean the logs between runs so I can see exactly what is happening. Now I need to get that mower started…

    Read the article

  • Error Installing MS Office in Ubuntu 13.04

    - by Birendra
    While I am installing ms office 10 or 13 using wine it says the following: Unhandled exception: 0xc06d007e in 32-bit code (0x7b83ae0b). Register dump: CS:0023 SS:002b DS:002b ES:002b FS:0063 GS:006b EIP:7b83ae0b ESP:0a6cd3f8 EBP:0a6cd45c EFLAGS:00000287( - -- I S - -P-C) EAX:7b826449 EBX:7b8b0000 ECX:0a6cd480 EDX:0a6cd41c ESI:00dd2428 EDI:00000000 Stack dump: 0x0a6cd3f8: 0a6cd4d0 00000004 000a0009 c06d007e 0x0a6cd408: 00000000 00000000 7b83ae0b 00000001 0x0a6cd418: 0a6cd480 7b8589db 7ffd0c00 00000000 0x0a6cd428: 00000000 00000000 00000000 00000000 0x0a6cd438: 00000000 7ffd0c00 00000000 7b8b0000 0x0a6cd448: 0a6cd468 7b858b2e 00dd24c0 00000000 Backtrace: =>0 0x7b83ae0b in kernel32 (+0x2ae0b) (0x0a6cd45c) 1 0x00dc93bb in msi7bec.tmp (+0x493ba) (0x0a6cd4c4) 2 0x00dc78d8 in msi7bec.tmp (+0x478d7) (0x0a6cd704) 3 0x00dc28cd in msi7bec.tmp (+0x428cc) (0x0a6cd940) 4 0x00d9caf8 in msi7bec.tmp (+0x1caf7) (0x0a6ce83c) 5 0x7def9393 CUSTOMPROC_wrapper+0xa() in msi (0x0a6ce848) 6 0x7def9671 CUSTOMPROC_wrapper+0x2e8() in msi (0x0a6ce9a8) 7 0x7def994f CUSTOMPROC_wrapper+0x5c6() in msi (0x0a6ce9f8) 8 0x7bc7f84c call_thread_func_wrapper+0xb() in ntdll (0x0a6cea08) 9 0x7bc7f89b call_thread_func+0x44() in ntdll (0x0a6ceae8) 10 0x7bc7f82a in ntdll (+0x6f829) (0x0a6ceb08) 11 0x7bc871f3 in ntdll (+0x771f2) (0x0a6cf368) 12 0xf75c5d78 start_thread+0xd7() in libpthread.so.0 (0x0a6cf468) 13 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 14 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 15 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 16 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 17 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 18 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 19 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 20 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 21 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 22 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 23 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 24 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 25 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 26 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 27 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 28 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 29 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 30 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 31 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 32 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 33 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 34 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 35 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 36 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 37 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 38 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 39 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 40 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 41 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 42 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 43 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 44 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 45 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 46 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 47 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 48 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 49 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 50 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 51 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 52 0xf74fc3de 
__clone+0x5d() in libc.so.6 (0x00000000) 53 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 54 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 55 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 56 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 57 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 58 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 59 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 60 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 61 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 62 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 63 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 64 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 65 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 66 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 67 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 68 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 69 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 70 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 71 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 72 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 73 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 74 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 75 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 76 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 77 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 78 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 79 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 80 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 81 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 82 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 83 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 84 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 85 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 86 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 87 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 88 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 89 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 90 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 91 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 92 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 93 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 94 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 95 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 96 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 97 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 98 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 99 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 100 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 101 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 102 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 103 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 104 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 105 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 106 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 107 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 108 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 109 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 110 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 111 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 112 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 113 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 114 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 115 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 116 0xf74fc3de __clone+0x5d() in 
libc.so.6 (0x00000000) 117 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 118 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 119 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 120 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 121 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 122 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 123 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 124 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 125 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 126 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 127 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 128 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 129 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 130 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 131 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 132 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 133 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 134 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 135 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 136 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 137 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 138 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 139 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 140 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 141 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 142 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 143 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 144 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 145 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 146 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 147 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 148 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 149 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 150 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 151 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 152 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 153 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 154 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 155 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 156 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 157 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 158 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 159 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 160 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 161 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 162 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 163 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 164 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 165 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 166 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 167 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 168 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 169 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 170 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 171 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 172 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 173 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 174 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 175 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 176 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 177 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 178 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 179 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 180 
0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 181 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 182 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 183 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 184 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 185 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 186 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 187 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 188 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 189 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 190 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 191 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 192 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 193 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 194 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 195 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 196 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 197 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 198 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 199 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 200 0xf74fc3de __clone+0x5d() in libc.so.6 (0x00000000) 0x7b83ae0b: subl $4,%esp Modules: Module Address Debug info Name (149 modules) PE 840000- 86f000 Deferred osetupui PE ba0000- ba7000 Deferred msi7c0d.tmp PE d40000- d51000 Deferred msi7bb6.tmp PE d80000- ddd000 Export msi7bec.tmp PE de0000- df8000 Deferred msi83ed.tmp PE e00000- e0a000 Deferred msi83f8.tmp PE f40000- 1072000 Deferred pidgenx PE 1440000- 145a000 Deferred msi958a.tmp PE 9e80000- 9edb000 Deferred msi889c.tmp PE 9ee0000- 9f0a000 Deferred msi9130.tmp PE 10000000-10593000 Deferred osetup PE 2e000000-2e119000 Deferred setup PE 41110000-41155000 Deferred msi7bd6.tmp PE 504a0000-504c7000 Deferred msi9112.tmp PE 504d0000-504f0000 Deferred msi8b04.tmp ELF 7b800000-7ba44000 Dwarf kernel32<elf> \-PE 7b810000-7ba44000 \ kernel32 ELF 7bab6000-7bb00000 Deferred libdbus-1.so.3 ELF 7bc00000-7bce4000 Dwarf ntdll<elf> \-PE 7bc10000-7bce4000 \ ntdll ELF 7be0f000-7be32000 Deferred localspl<elf> \-PE 7be10000-7be32000 \ localspl ELF 7be32000-7bf00000 Deferred libkrb5.so.3 ELF 7bf00000-7bf04000 Deferred <wine-loader> ELF 7bf09000-7bf25000 Deferred spoolss<elf> \-PE 7bf10000-7bf25000 \ spoolss ELF 7bf25000-7bf3c000 Deferred libresolv.so.2 ELF 7bf3c000-7bf64000 Deferred libk5crypto.so.3 ELF 7bf64000-7bfa1000 Deferred libgssapi_krb5.so.2 ELF 7bfa1000-7c000000 Deferred libcups.so.2 ELF 7c208000-7c2aa000 Deferred msvcrt<elf> \-PE 7c220000-7c2aa000 \ msvcrt ELF 7c2aa000-7c400000 Deferred libxml2.so.2 ELF 7c40c000-7c415000 Deferred librt.so.1 ELF 7c415000-7c427000 Deferred libavahi-client.so.3 ELF 7c427000-7c468000 Deferred winspool<elf> \-PE 7c430000-7c468000 \ winspool ELF 7c468000-7c485000 Deferred libgcc_s.so.1 ELF 7c485000-7c4c2000 Deferred libxslt.so.1 ELF 7c4c2000-7c4e9000 Deferred liblzma.so.5 ELF 7c4e9000-7c59e000 Deferred msxml3<elf> \-PE 7c4f0000-7c59e000 \ msxml3 ELF 7c59e000-7c5cd000 Deferred msxml6<elf> \-PE 7c5a0000-7c5cd000 \ msxml6 ELF 7d0e1000-7d0ea000 Deferred libkrb5support.so.0 ELF 7d0ea000-7d0f8000 Deferred libavahi-common.so.3 ELF 7d5b5000-7d5b9000 Deferred libkeyutils.so.1 ELF 7d5b9000-7d5be000 Deferred libcom_err.so.2 ELF 7d5d6000-7d63e000 Deferred riched20<elf> \-PE 7d5e0000-7d63e000 \ riched20 ELF 7d63e000-7d672000 Deferred hhctrl<elf> \-PE 7d640000-7d672000 \ hhctrl ELF 7d672000-7d696000 Deferred hlink<elf> \-PE 7d680000-7d696000 \ hlink ELF 7d696000-7d6b6000 Deferred oleacc<elf> \-PE 7d6a0000-7d6b6000 \ oleacc ELF 
7d6b6000-7d6fa000 Deferred rsaenh<elf> \-PE 7d6c0000-7d6fa000 \ rsaenh ELF 7d6fa000-7d715000 Deferred imagehlp<elf> \-PE 7d700000-7d715000 \ imagehlp ELF 7d72d000-7d764000 Deferred uxtheme<elf> \-PE 7d730000-7d764000 \ uxtheme ELF 7d764000-7d76b000 Deferred libxfixes.so.3 ELF 7d76b000-7d776000 Deferred libxcursor.so.1 ELF 7d7f6000-7d81e000 Deferred libexpat.so.1 ELF 7d81e000-7d857000 Deferred libfontconfig.so.1 ELF 7d857000-7d867000 Deferred libxi.so.6 ELF 7d867000-7d872000 Deferred libxrandr.so.2 ELF 7d872000-7d87c000 Deferred libxrender.so.1 ELF 7d87c000-7d882000 Deferred libxxf86vm.so.1 ELF 7d882000-7d8a6000 Deferred imm32<elf> \-PE 7d890000-7d8a6000 \ imm32 ELF 7d8a6000-7d8ad000 Deferred libxdmcp.so.6 ELF 7d8ad000-7d8cf000 Deferred libxcb.so.1 ELF 7d8cf000-7d8d5000 Deferred libuuid.so.1 ELF 7d8d5000-7d8ef000 Deferred libice.so.6 ELF 7d8ef000-7da26000 Deferred libx11.so.6 ELF 7da26000-7da38000 Deferred libxext.so.6 ELF 7da38000-7da41000 Deferred libsm.so.6 ELF 7da41000-7daf2000 Deferred winex11<elf> \-PE 7da50000-7daf2000 \ winex11 ELF 7daf2000-7db8d000 Deferred libfreetype.so.6 ELF 7dba5000-7dbb9000 Deferred libp11-kit.so.0 ELF 7dbb9000-7dbcb000 Deferred libtasn1.so.3 ELF 7dbcb000-7dc4f000 Deferred libgcrypt.so.11 ELF 7dc4f000-7dd14000 Deferred libgnutls.so.26 ELF 7dd14000-7dd38000 Deferred cabinet<elf> \-PE 7dd20000-7dd38000 \ cabinet ELF 7dd38000-7dd61000 Deferred mpr<elf> \-PE 7dd40000-7dd61000 \ mpr ELF 7dd61000-7dd7a000 Deferred libz.so.1 ELF 7dd7b000-7dd7f000 Deferred libxcomposite.so.1 ELF 7dd7f000-7dd92000 Deferred gnome-keyring-pkcs11.so ELF 7dd92000-7de0c000 Deferred wininet<elf> \-PE 7dda0000-7de0c000 \ wininet ELF 7de0c000-7deb9000 Deferred urlmon<elf> \-PE 7de20000-7deb9000 \ urlmon ELF 7deb9000-7dfdb000 Dwarf msi<elf> \-PE 7dec0000-7dfdb000 \ msi ELF 7dfdb000-7e04b000 Deferred dbghelp<elf> \-PE 7dfe0000-7e04b000 \ dbghelp ELF 7e04b000-7e121000 Deferred crypt32<elf> \-PE 7e050000-7e121000 \ crypt32 ELF 7e121000-7e15b000 Deferred wintrust<elf> \-PE 7e130000-7e15b000 \ wintrust ELF 7e15b000-7e27a000 Deferred comctl32<elf> \-PE 7e160000-7e27a000 \ comctl32 ELF 7e27a000-7e2f0000 Deferred shlwapi<elf> \-PE 7e290000-7e2f0000 \ shlwapi ELF 7e2f0000-7e52e000 Deferred shell32<elf> \-PE 7e300000-7e52e000 \ shell32 ELF 7e52e000-7e673000 Deferred oleaut32<elf> \-PE 7e540000-7e673000 \ oleaut32 ELF 7e673000-7e754000 Deferred gdi32<elf> \-PE 7e680000-7e754000 \ gdi32 ELF 7e754000-7e8c4000 Deferred user32<elf> \-PE 7e770000-7e8c4000 \ user32 ELF 7e8c4000-7ea26000 Deferred ole32<elf> \-PE 7e8e0000-7ea26000 \ ole32 ELF 7ea26000-7eab0000 Deferred rpcrt4<elf> \-PE 7ea30000-7eab0000 \ rpcrt4 ELF 7eab0000-7eae4000 Deferred ws2_32<elf> \-PE 7eac0000-7eae4000 \ ws2_32 ELF 7eae4000-7eb56000 Deferred advapi32<elf> \-PE 7eaf0000-7eb56000 \ advapi32 ELF 7eb56000-7eb7b000 Deferred iphlpapi<elf> \-PE 7eb60000-7eb7b000 \ iphlpapi ELF 7eb7b000-7ebaa000 Deferred netapi32<elf> \-PE 7eb80000-7ebaa000 \ netapi32 ELF 7ebaa000-7ebdf000 Deferred secur32<elf> \-PE 7ebb0000-7ebdf000 \ secur32 ELF 7ebdf000-7ebfa000 Deferred version<elf> \-PE 7ebe0000-7ebfa000 \ version ELF 7ebfa000-7ec07000 Deferred libnss_files.so.2 ELF 7ec07000-7ec13000 Deferred libnss_nis.so.2 ELF 7ec13000-7ec2c000 Deferred libnsl.so.1 ELF 7ec2c000-7ec35000 Deferred libnss_compat.so.2 ELF 7efa5000-7efe8000 Deferred libm.so.6 ELF 7efe8000-7efec000 Deferred libxinerama.so.1 ELF 7efec000-7f000000 Deferred psapi<elf> \-PE 7eff0000-7f000000 \ psapi ELF f7401000-f7405000 Deferred libxau.so.6 ELF f7406000-f740b000 Deferred libdl.so.2 ELF 
f740b000-f75be000 Dwarf libc.so.6 ELF f75bf000-f75da000 Dwarf libpthread.so.0 ELF f75da000-f75df000 Deferred libgpg-error.so.0 ELF f75f2000-f7736000 Dwarf libwine.so.1 ELF f7738000-f775a000 Deferred ld-linux.so.2 ELF f775a000-f775b000 Deferred [vdso].so Threads: process tid prio (all id:s are in hex) 0000000e services.exe 0000005b 0 0000005c 0 00000059 0 0000002e 0 0000001f 0 00000015 0 00000010 0 0000000f 0 00000012 winedevice.exe 0000001d 0 0000001a 0 00000014 0 00000013 0 0000001b plugplay.exe 00000021 0 0000001e 0 0000001c 0 00000022 explorer.exe 00000023 0 0000002a (D) C:\users\birendra\Desktop\OFFICE 2010\setup.exe 0000005d 0 <== 0000002f 0 0000002b 0 00000042 OSE.EXE 00000045 0 00000047 0 0000002d 0 00000036 0 00000040 0 00000017 0 00000018 0 00000034 0 System information: Wine build: wine-1.4.1 Platform: i386 (WOW64) Host system: Linux Host version: 3.8.0-19-generic Can anybody give me a suggestion on how to fix this problem so that I can install it?

    Read the article
