Search Results

Search found 2570 results on 103 pages for 'alek sys'.

Page 28/103 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications, but many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations, and I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128 KB is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues:

        http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
        http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
        http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html

    For example, these are suggested changes for an HP Smart Array RAID controller:

        echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
        blockdev --setra 65536 /dev/cciss/c0d0
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
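    Settings like these do not survive a reboot, so a small script run at boot is one way to apply them. A minimal Python sketch, assuming root privileges and the cciss device name used above (the values are simply the post's suggestions, not recommendations):

        # minimal sketch: re-apply the suggested cciss tunables at boot (run as root)
        import os

        DEVICE = 'cciss!c0d0'                       # name as exposed under /sys/block
        QUEUE = '/sys/block/%s/queue' % DEVICE

        def write_sysfs(path, value):
            # sysfs writes fail without root; report instead of crashing
            try:
                with open(path, 'w') as f:
                    f.write(str(value))
            except IOError, e:
                print 'could not set %s: %s' % (path, e)

        write_sysfs(os.path.join(QUEUE, 'scheduler'), 'noop')
        write_sysfs(os.path.join(QUEUE, 'nr_requests'), 512)
        write_sysfs(os.path.join(QUEUE, 'read_ahead_kb'), 2048)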

    Read the article

  • Problem with deploying django application on mod_wsgi

    - by Shehzad009
    Hello, I seem to have a problem deploying Django with mod_wsgi. In the past I've used mod_python, but I want to make the change. I have been using Graham Dumpleton's notes here http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango1, but it still does not work. I get an Internal Server Error.

    django.wsgi file:

        import os
        import sys
        sys.path.append('/var/www/html')
        sys.path.append('/var/www/html/c2duo_crm')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'c2duo_crm.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    Apache httpd file:

        WSGIScriptAlias / /var/www/html/c2duo_crm/apache/django.wsgi
        <Directory /var/www/html/c2duo_crm/apache>
            Order allow,deny
            Allow from all
        </Directory>

    In my apache error log, it says I have this error. This is not all of it, but I've got the most important part:

        [Errno 13] Permission denied: '/.python-eggs'
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1]
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] The Python egg cache directory is currently set to:
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1]
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1]   /.python-eggs
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1]
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] Perhaps your account does not have write access to this directory? You can
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] change the cache directory by setting the PYTHON_EGG_CACHE environment
        [Thu Mar 03 14:59:25 2011] [error] [client 127.0.0.1] variable to point to an accessible directory.
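    The traceback points at the egg cache rather than the WSGI wiring: mod_wsgi runs as the Apache user, whose home directory resolves to /, so setuptools cannot create /.python-eggs. A minimal sketch of one common workaround is to set PYTHON_EGG_CACHE near the top of django.wsgi, before anything that unpacks eggs is imported (the cache path below is an assumption; it just has to exist and be writable by the Apache user):

        import os
        # assumed path: any directory the Apache user can write to will do
        os.environ['PYTHON_EGG_CACHE'] = '/var/www/html/c2duo_crm/egg-cache'

    mod_wsgi's daemon mode offers the same thing declaratively through the python-eggs option of WSGIDaemonProcess.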

    Read the article

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to "SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database", which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question it was a single machine running both the IIS and SQL instances. The application which is trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does). The ConnectionString being used in the web.config for the application is:

        data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi;

    And the Application Pool is set to NetworkService for Identity. So - in the web app, I get the following error:

        Cannot open database "MyDatabase" requested by the login. The login failed.
        Login failed for user 'MyDomain\WebServerMachineName$'

    In the SQL Server logs I see:

        Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address]

    Running this bit of SQL against the database in question:

        USE [MyDatabase]
        GO
        SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role]
        FROM sys.database_principals SDP
        INNER JOIN sys.database_role_members SDRM ON SDP.principal_id=SDRM.member_principal_id
        INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id

    Gets me this result:

        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DDLADMIN
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAREADER
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAWRITER

    Which appears to me to indicate I've got the permissions right. Anyone have any idea why it's not working, or how I can narrow the issue down some more?

    Read the article

  • Permission issue for apache

    - by Aamir Adnan
    Environment Details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server: apache2. I have mounted a 10GB EBS volume to an instance at /mnt/ebs1/. After mounting the volume and formatting it, I placed all my project files in /mnt/ebs1/project. The wsgi file is in /mnt/ebs1/project/apache/django.wsgi. The content of the wsgi file is:

        import os, sys
        sys.path.insert(0, '/mnt/ebs1/project')
        sys.path.insert(1, '/mnt/ebs1')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    My httpd.conf file looks like this:

        LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
        WSGIPythonHome /usr/bin/python2.6
        WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi

        <Directory /mnt/ebs1/project>
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /mnt/ebs1/project/apache>
            Order allow,deny
            Allow from all
        </Directory>

        Alias /static/ /mnt/ebs1/project/static/

        <Directory /mnt/ebs1/project/static>
            Order deny,allow
            Allow from all
        </Directory>

    The above configuration gives me "Forbidden: You don't have permission to access / on this server." I tried to find the user running Apache with ps aux: it is www-data, with group www-data. I have tried to change the ownership of /mnt/ebs1 and its subdirectories using chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
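    Ownership alone does not guarantee Apache can reach the script: every directory on the path down to django.wsgi also needs the execute (search) bit for www-data, and a freshly formatted mount point often lacks it. A small diagnostic sketch, using the paths from the question; run it as the Apache user, for example with sudo -u www-data python check_perms.py:

        # minimal sketch: verify every path component leading to the WSGI script
        # is searchable/readable by the user running this script
        import os

        target = '/mnt/ebs1/project/apache/django.wsgi'
        parts = []
        path = target
        while path not in ('/', ''):
            parts.append(path)
            path = os.path.dirname(path)

        for p in reversed(parts):
            mode = os.X_OK if os.path.isdir(p) else os.R_OK
            print p, ('OK' if os.access(p, mode) else 'NOT ACCESSIBLE')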

    Read the article

  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that is exporting a large JFS filesystem (22TB) over NFS (nfs-kernel-server). When attempting to write to the NFS share, the performance is very poor. The 22TB disk is sitting on a NAS mounted using iSCSI. Writes burst for a moment near expected line speed and then sit idle for several seconds, with very little traffic measured in the low kB/sec; the I/O wait peaks on writes. When reading from the NFS mount, the system operates at expected speeds (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS). The issue persists between stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue. That share is not in regular use and thus not consuming resources.

    NFS Server:

        cat /etc/exports
        /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)

        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq

        cat /etc/default/nfs-kernel-server
        RPCNFSDCOUNT=8
        RPCNFSDPRIORITY=0
        RPCMOUNTDOPTS=--manage-gids
        NEED_SVCGSSD=
        RPCSVCGSSDOPTS=

    NFS Client:

        cat /etc/fstab
        10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2

        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq

        cat /proc/mounts
        10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0

    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
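    One way to narrow this down is to take NFS out of the picture and time the same sequential write first on the server's local JFS mount and then on the client's NFS mount. A rough sketch of such a test (the 1 MB block size and 512 MB file size are arbitrary assumptions):

        # minimal sketch: time a sequential write so local-JFS vs NFS numbers can be compared
        import os, sys, time

        path = sys.argv[1]              # e.g. /data2/testfile on the server, /root/incoming/testfile on the client
        block = 'x' * (1024 * 1024)     # 1 MB per write
        blocks = 512                    # 512 MB total

        start = time.time()
        f = open(path, 'wb')
        for _ in xrange(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())            # include the flush to stable storage in the timing
        f.close()
        elapsed = time.time() - start
        print '%d MB in %.1f s = %.1f MB/s' % (blocks, elapsed, blocks / elapsed)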

    Read the article

  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again. [ 59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2 [ 59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd [ 59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2 [ 60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I can seem to have it shut up (without killing my actually useful USB ports) is to disable one of the USB host controllers: # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake it up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored: $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20 $ cat /etc/modprobe.d/blacklist blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20 How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether? -edit- The module was renamed recently, now the following works from userland: echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind Still, I'm looking for a way to stop the kernel from binding that device in the first place.
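    Until the kernel can be told not to bind the device in the first place, the unbind can at least be made persistent by running it early in the boot sequence (for example from /etc/rc.local). A minimal sketch, assuming the PCI address from the question and trying the newer ehci-pci driver name before the older ehci_hcd:

        # minimal sketch: unbind 0000:00:1a.0 from its EHCI driver at boot (run as root)
        DEVICE = '0000:00:1a.0'

        for driver in ('ehci-pci', 'ehci_hcd'):
            unbind = '/sys/bus/pci/drivers/%s/unbind' % driver
            try:
                with open(unbind, 'w') as f:
                    f.write(DEVICE)
                break                   # stop after the first driver that accepts the unbind
            except IOError:
                continue                # driver not present or device not bound to it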

    Read the article

  • Make eix available version match emerge

    - by Ryaner
    We have our Gentoo hosts using a binhost with EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly" in the make.conf file, so that the hosts only pull down binary packages. All works well from that side. I use eix to check on software versions for upgrades, but have hit a problem where eix will see an available version ahead of what is available on the binhost. Using glibc as an example:

        ietpl [VE] / # emerge -s glibc
        Searching...
        [ Results for search key : glibc ]
        [ Applications found : 1 ]
        *  sys-libs/glibc
              Latest version available: 2.14.1-r3
              Latest version installed: 2.14.1-r3
              Homepage:
              Description: GNU libc6 (also called glibc2) C library
              License: LGPL-2

    Then eix reports a higher version available:

        ietpl [VE] / # export LASTVERSION='{last}<version>{}'
        ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc
        sys-libs glibc [2.14.1-r3] [2.15-r2]

    What I'm after is for eix to report the latest version available as 2.14.1-r3, like emerge. I've a feeling this is possible, since without any formatting eix returns:

        Available versions: (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s

    correctly tagging the latest unmasked binary package with {tbz2}. I would have thought that the --binary flag ("Match packages with *.tbz2 files") would do it, but that returns no matches.

    Read the article

  • Day 6 - Game Menuing Woes and Future Screen Sneak Peeks

    - by dapostolov
    So, after my last post on Day 5 I dabbled with my game class design. I took the approach where each game object is tightly coupled with a graphic. The good news is I got the menu working, but not without some hard knocks and game growing pains. I'll explain later, but for now...here is a class diagram of my first stab at my class structure and some code... Ok, there are a few mistakes, however, I'm going to leave it as is for now... As you can see I created an initial abstract base class called GameSprite. This class, when inherited, will provide a simple virtual default draw method:

        public virtual void DrawSprite(SpriteBatch spriteBatch)
        {
            spriteBatch.Draw(Sprite, Position, Color.White);
        }

    The benefits of coding it this way allow me to inherit the class and utilise the method in the screen draw method...So regardless of what the graphic object type is, it will now have the ability to render a static image on the screen. Example:

        public class MyStaticTreasureChest : GameSprite {}

    If you remember the window draw method from Day 3's post, we could use the above code as follows...

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
            foreach(var gameSprite in ListOfGameObjects)
            {
                gameSprite.DrawSprite(spriteBatch);
            }
            spriteBatch.End();
            base.Draw(gameTime);
        }

    I have to admit the GameSprite object is pretty plain, as is its DrawSprite method... But ... we now have the ability to render 3 static menu items on the screen ... BORING! I want those menu items to do something exciting, which of course involves animation... So, let's have a peek at AnimatedGameSprite in the above game diagram. The idea with the AnimatedGameSprite is that it has an image to animate...such as ... characters, fireballs, and... menus! So after inheriting from the GameSprite class, I added a few more options such as UpdateSprite...

        public virtual void UpdateSprite(float elapsed)
        {
            _totalElapsed += elapsed;
            if (_totalElapsed > _timePerFrame)
            {
                _frame++;
                _frame = _frame % _framecount;
                _totalElapsed -= _timePerFrame;
            }
        }

    And an overridden DrawSprite...

        public override void DrawSprite(SpriteBatch spriteBatch)
        {
            int FrameWidth = Sprite.Width / _framecount;
            Rectangle sourcerect = new Rectangle(FrameWidth * _frame, 0, FrameWidth, Sprite.Height);
            spriteBatch.Draw(Sprite, Position, sourcerect, Color.White, _rotation, _origin, _scale, SpriteEffects.None, _depth);
        }

    With these two methods...I can animate an image, and all I had to do was add a few more lines to the screen's Update method (from Day 3), like so:

        float elapsed = (float) gameTime.ElapsedGameTime.TotalSeconds;
        foreach (var item in ListOfAnimatedGameObjects)
        {
            item.UpdateSprite(elapsed);
        }

    And voila! My images begin to animate in one spot, on the screen... Hmm, but how do I interact with the menu items using a mouse...well the mouse cursor was easy enough:

        this.IsMouseVisible = true;

    But, to have it "interact" with an image was a bit more tricky...I had to perform collision detection!
        mouseStateCurrent = Mouse.GetState();
        var uiEnabledSprites = (from s in menuItems
                                where s.IsEnabled
                                select s).ToList();
        foreach (var item in uiEnabledSprites)
        {
            var r = new Rectangle((int)item.Position.X, (int)item.Position.Y, item.Sprite.Width, item.Sprite.Height);
            item.MenuState = MenuState.Normal;
            if (r.Intersects(new Rectangle(mouseStateCurrent.X, mouseStateCurrent.Y, 0, 0)))
            {
                item.MenuState = MenuState.Hover;
                if (mouseStatePrevious.LeftButton == ButtonState.Pressed
                    && mouseStateCurrent.LeftButton == ButtonState.Released)
                {
                    item.MenuState = MenuState.Pressed;
                }
            }
        }
        mouseStatePrevious = mouseStateCurrent;

    So, basically, what the code above does is iterate through all my interactive objects, detect a rectangle collision with the mouse, and play each object's state animation (or static image).

    Lessons Learned, Time Burned... So, I think I did well to start, but after I hammered out my prototype...well...things got sloppy and I began to realise some design flaws... At the time:

    - I couldn't seem to figure out how to open another window, such as the character creation screen
    - Input was not event based and it was bugging me
    - My menu design relied heavily on mouse input and I couldn't use the keyboard
    - Mouse input is tightly bound with graphic rendering / positioning, so its logic will have to be in each scene
    - Menu animations would stop mid frame, then continue when the action occurred again. This is bad, because...what if I had a sword sliding on the screen? Then it would slide a quarter of the way, stop due to another action, then render again mid-slide... it just looked sloppy.

    Menu, Solved!? To solve the above problems I did a little research and found some great code in the XNA forums. The one worth mentioning was the GameStateManagementSample. With this sample, you can create a basic "text based" menu system which allows you to swap screens, popup screens, play the game, and quit....basic game state management... In my next post I'm going to delve a bit more into this code and adapt it with my code from this prototype. Text based menus just won't cut it for me, for now...however, I'm still going to stick with my animated menu item idea. A sneak peek using the Game State Management Sample...with no changes made...

    Cool Things to Mention: At work ... I tend to break out in random conversations every-so-often and I get talking about some of my challenges with this game (or some stupid observation about something... stupid). During one conversation I was discussing how I should animate my images; I explained that I knew I had to use the Update method provided, but I didn't know how (at the time) to render an image at an appropriate "pace", how many frames to use, etc. I also got thinking that if a machine rendered my images faster / slower, that was surely going to f-up my animations. To which a friend, Sheldon, answered: surely the Draw method is like a camera taking a snapshot of a scene in time. Then it clicked...I understood the big picture of the game engine... After some research I discovered that the Draw method attempts to keep a framerate of 60 fps.
    From what I understand, the game engine will even leave out a few calls to the Draw method if it begins to slow down. This is why we want to put our sprite updates in the Update method. Then, using a game timer (provided by the engine), we want to render the scene based on real time passed, not framerate. So even if the engine renders at 20 fps, the animations will still animate at the same real-time speed! Which brings up another point: why 60 fps? I'm speculating that Microsoft capped it because LCDs don't refresh faster than 60 fps. On another note, if the game engine knows it's falling behind in rendering...then surely we can harness this to speed up our games. Maybe I can find some flag which tells me if the game is lagging, what the current framerate is, etc...(instead of coding it like I did last time). Sheldon suggested maybe I can render like WoW does, in prioritised layers...I think he's onto something; however, I don't think I'll have that many graphics to worry about such a problem of graphic latency. We'll see.

    People to Mention: Well, as you are aware I hadn't posted in a couple of days, and I was surprised to see a few emails and messenger queries about my game progress (and some concern as to why I stopped). I want to thank everyone for their kind words of support and put everyone at ease by stating that I do intend on completing this project. Granted I only have a few hours each night, but I'll do it. Thank you to Garth for mailing in my next screen! That was a nice surprise! The sneak peek you've been waiting for... Garth has also volunteered to render me some wizard images. He was a bit shocked when I asked for them in 2D animated strips. He said I was going backward (and that I have really bad Game Development Lingo). But I advised Garth that I will use 3D images later...for now...2D images. Garth also had some great game design ideas to add on. I advised him that I will save his ideas and include them in the future design document (for the 3D version?). Lastly, my best friend Alek is going to join me in developing this game. This was a project we started eons ago but never completed because of our careers. Now priorities change and we have some spare time on our hands. Let's see what trouble Alek and I can get into! Tonight I'll be uploading my prototypes and base game to source control for both of us to work off of. D.

    Read the article

  • How to debug node.js applications

    - by Fabian Jakobs
    How do I debug a node.js server application? Right now I'm mostly using alert-style debugging with print statements like this:

        sys.puts(sys.inspect(someVariable));

    There must be a better way to debug. I know that Google Chrome has a command line debugger. Is this debugger available for node.js as well?

    Read the article

  • Python OSError not reporting errors

    - by breathe
    I've got this snippet that I'm using to convert image files to TIFF. I want to be informed when a file fails to convert. ImageMagick exits 0 when successfully run, so I figured the following snippet would report the issue. However, no errors are being reported at all.

        def image(filePath, dirPath, fileUUID, shortFile):
            try:
                os.system("convert " + filePath + " +compress " + dirPath + "/" + shortFile + ".tif")
            except OSError, e:
                print sys.stderr, "image conversion failed: %s" % (e.errno, e.strerror)
                sys.exit(-1)
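    os.system only raises OSError when the shell itself cannot be launched; a failing convert simply produces a non-zero return value, so the except block never runs (and the print above would go to stdout with the stderr object prepended, not to stderr). A hedged rewrite using subprocess that checks the exit status instead:

        # minimal sketch: check convert's exit status rather than relying on OSError
        import subprocess, sys

        def image(filePath, dirPath, fileUUID, shortFile):
            outPath = "%s/%s.tif" % (dirPath, shortFile)
            ret = subprocess.call(["convert", filePath, "+compress", outPath])
            if ret != 0:
                print >> sys.stderr, "image conversion failed with status %d" % ret
                sys.exit(-1)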

    Read the article

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory. But I thought for a file of 500MB I should still be able to do it in memory since I have 1GB available. However when I try to read in the file, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile=open("links.csv", "r")
        edges=[]
        count=0
        #count the total number of lines in the file
        for line in infile:
            count=count+1
        total=count
        print "Total number of lines: ",total

        infile.seek(0)
        count=0
        for line in infile:
            edge=tuple(map(int,line.strip().split(",")))
            edges.append(edge)
            count=count+1
            # for every million lines print memory consumption
            if count%1000000==0:
                print "Position: ", edge
                print "Read ",float(count)/float(total)*100,"%."
                mem=sys.getsizeof(edges)
                for edge in edges:
                    mem=mem+sys.getsizeof(edge)
                    for node in edge:
                        mem=mem+sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)      Read  3.26693612356 %.   Memory (Bytes):  64348736
        Position:  (38857, 103574)   Read  6.53387224712 %.   Memory (Bytes):  128816320
        Position:  (83609, 63498)    Read  9.80080837067 %.   Memory (Bytes):  192553000
        Position:  (139692, 1078610) Read  13.0677444942 %.   Memory (Bytes):  257873392
        Position:  (205067, 153705)  Read  16.3346806178 %.   Memory (Bytes):  320107588
        Position:  (283371, 253064)  Read  19.6016167413 %.   Memory (Bytes):  385448716
        Position:  (354601, 377328)  Read  22.8685528649 %.   Memory (Bytes):  448629828
        Position:  (441109, 3024112) Read  26.1354889885 %.   Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
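    Each small tuple plus its two int objects costs dozens of bytes of interpreter overhead, which is why a quarter of the file already occupies 500MB. One hedged alternative, assuming numpy is available: keep the two columns in a compact int32 array and sort by index, which keeps the parsed data itself at roughly 240MB:

        # minimal sketch: parse into a preallocated int32 array and sort via argsort
        import numpy

        rows = 30609720                              # line count from the output above
        data = numpy.empty((rows, 2), dtype=numpy.int32)
        with open("links.csv") as infile:
            for i, line in enumerate(infile):
                a, b = line.split(",")
                data[i, 0] = int(a)
                data[i, 1] = int(b)

        order = numpy.argsort(data[:, 1])            # indices that sort rows by the second column
        for i in order:
            pass                                     # use data[i, 0], data[i, 1] in sorted order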

    Read the article

  • Problem with bootstrap loader and kernel

    - by dboarman-FissureStudios
    We are working on a project to learn how to write a kernel and learn the ins and outs. We have a bootstrap loader written and it appears to work. However we are having a problem with the kernel loading. I'll start with the first part, bootloader.asm:

        [BITS 16]
        [ORG 0x0000]
        ;
        ; all the stuff in between
        ;
        ; the bottom of the bootstrap loader
        datasector  dw 0x0000
        cluster     dw 0x0000
        ImageName   db "KERNEL SYS"
        msgLoading  db 0x0D, 0x0A, "Loading Kernel Shell", 0x0D, 0x0A, 0x00
        msgCRLF     db 0x0D, 0x0A, 0x00
        msgProgress db ".", 0x00
        msgFailure  db 0x0D, 0x0A, "ERROR : Press key to reboot", 0x00

        TIMES 510-($-$$) DB 0
        DW 0xAA55
        ;*************************************************************************

    The bootloader.asm is too long for the editor without causing it to chug and choke. In addition, the bootloader and kernel do work within bochs, as we do get the message "Welcome to our OS". Anyway, the following is what we have for a kernel at this point, kernel.asm:

        [BITS 16]
        [ORG 0x0000]

        [SEGMENT .text]                ; code segment
            mov ax, 0x0100             ; location where kernel is loaded
            mov ds, ax
            mov es, ax
            cli
            mov ss, ax                 ; stack segment
            mov sp, 0xFFFF             ; stack pointer at 64k limit
            sti
            mov si, strWelcomeMsg      ; load message
            call _disp_str
            mov ah, 0x00
            int 0x16                   ; interrupt: await keypress
            int 0x19                   ; interrupt: reboot

        _disp_str:
            lodsb                      ; load next character
            or al, al                  ; test for NUL character
            jz .DONE
            mov ah, 0x0E               ; BIOS teletype
            mov bh, 0x00               ; display page 0
            mov bl, 0x07               ; text attribute
            int 0x10                   ; interrupt: invoke BIOS
            jmp _disp_str
        .DONE:
            ret

        [SEGMENT .data]                ; initialized data segment
        strWelcomeMsg db "Welcome to our OS", 0x00

        [SEGMENT .bss]                 ; uninitialized data segment

    Using nasm 2.06rc2 I compile as such:

        nasm bootloader.asm -o bootloader.bin -f bin
        nasm kernel.asm -o kernel.sys -f bin

    We write bootloader.bin to the floppy as such:

        dd if=bootloader.bin bs=512 count=1 of=/dev/fd0

    We write kernel.sys to the floppy as such:

        cp kernel.sys /dev/fd0

    As I stated, this works in bochs. But booting from the floppy we get output like so:

        Loading Kernel Shell
        ...........
        ERROR : Press key to reboot

    Other specifics: OpenSUSE 11.2, GNOME desktop, AMD x64. Any other information I may have missed, feel free to ask; I tried to get everything in here that would be needed. If I need to, I can find a way to get the entire bootloader.asm posted somewhere. We are not really interested in using GRUB either, for several reasons. This could change, but we want to see this boot successfully before we really consider GRUB.

    Read the article

  • Why is mod_wsgi not able to write data? IOError: failed to write data

    - by BryanWheelock
    What could be causing this error?

        $ sudo tail -n 100 /var/log/apache2/error.log
        [Wed Dec 29 15:20:03 2010] [error] [client 220.181.108.181] mod_wsgi (pid=20343): Exception occurred processing WSGI script '/home/username/public_html/idm.wsgi'.
        [Wed Dec 29 15:20:03 2010] [error] [client 220.181.108.181] IOError: failed to write data

    Here is the WSGI script:

        $ cat public_html/idm.wsgi
        import os
        import sys
        sys.path.append('/home/username/public_html/IDM_app/')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    Why would Django not be able to write data? I'm running Django 1.2.4.

    Read the article

  • gcc precompiled headers weird behaviour with -c option

    - by pachanga
    Folks, I'm using gcc-4.4.1 on Linux and before trying precompiled headers in a really large project I decided to test them on a simple program. They "kinda work", but I'm not happy with the results and I'm sure there is something wrong with my setup. First of all, I wrote a simple program (main.cpp) to test if they work at all:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

        int main()
        {
            return 0;
        }

    Then I created the precompiled header file pre.h (in the same directory) as follows:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

    ...and compiled it:

        $ g++ -I. pre.h        (pre.h.gch was created)

    After that I measured compile time with and without precompiled headers:

        with pch
        $ time g++ -I. -include pre.h main.cpp
        real 0m0.128s   user 0m0.088s   sys 0m0.048s

        without pch
        $ time g++ -I. main.cpp
        real 0m0.838s   user 0m0.784s   sys 0m0.056s

    So far so good! Almost 7 times faster, that's impressive! Now let's try something more realistic. All my sources are built with the -c option and for some reason I can't make pch play nicely with it. You can reproduce this with the steps below... I created the test module foo.cpp as follows:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

        int whatever()
        {
            return 0;
        }

    Here are the timings of my attempts to build the module foo.cpp with and without pch:

        with pch
        $ time g++ -I. -include pre.h -c foo.cpp
        real 0m0.357s   user 0m0.348s   sys 0m0.012s

        without pch
        $ time g++ -I. -c foo.cpp
        real 0m0.330s   user 0m0.292s   sys 0m0.044s

    That's quite strange, it looks like there is no speedup at all (I ran the timings several times). It turned out precompiled headers were not used at all in this case; I checked with the -H option (the output of "g++ -I. -include pre.h -c foo.cpp -H" didn't list pre.h.gch at all). What am I doing wrong?

    Read the article

  • Usb Driver on 64bit Windows

    - by SurDin
    I have a pretty generic 64-bit driver based on bulkusb.sys in the WDK. It's been working for years with an embedded program, but now it needs to work on Vista 64. From all the documentation I've tried to look through there doesn't seem to be anything affecting it except compiling it for the 64-bit environment, and yet when I compile it with the AMD64 build environment I get a "driver not intended for this platform" error message when it tries to open the .sys. What could be the solution for this?

    Read the article

  • SQL Server Indexing

    - by durilai
    I am trying to understand what is going on with CREATE INDEX internally. When I create a NONCLUSTERED index, it shows up as an INSERT in the execution plan, as well as when I get the query text:

        DECLARE @sqltext VARBINARY(128)

        SELECT @sqltext = sql_handle
        FROM sys.sysprocesses s
        WHERE spid = 73  --73 is the process creating the index

        SELECT TEXT
        FROM sys.dm_exec_sql_text(@sqltext)
        GO

    This shows:

        insert [dbo].[tbl] select * from [dbo].[tbl] option (maxdop 1)

    This is consistent in the execution plan. Any info is appreciated.

    Read the article

  • Two way binding settings problem.

    - by Jamie
    Hi, I am having a problem using two-way binding with a ListPicker. I am able to set the value using C#, but not via the SelectedItem="..." in XAML. The binding is returning the correct value (and it is a value in the ListPicker), as I have tested it by assigning the text to a TextBlock. When the page loads, the binding used on the ListPicker causes a System.ArgumentOutOfRangeException. The code I am using to set it is:

        // Update a setting value. If the setting does not exist, add the setting.
        public bool AddOrUpdateValue(string key, Object value)
        {
            bool valueChanged = false;
            try
            {
                // If new value is different, set the new value
                if (settingsStorage[key] != value)
                {
                    settingsStorage[key] = value;
                    valueChanged = true;
                }
            }
            catch (KeyNotFoundException)
            {
                settingsStorage.Add(key, value);
                valueChanged = true;
            }
            catch (ArgumentException)
            {
                settingsStorage.Add(key, value);
                valueChanged = true;
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception occured whilst using IsolatedStorageSettings: " + e.ToString());
            }
            return valueChanged;
        }

        // Get the current value of the setting, if not found, set the setting to default value.
        public valueType GetValueOrDefault<valueType>(string key, valueType defaultValue)
        {
            valueType value;
            try
            {
                value = (valueType)settingsStorage[key];
            }
            catch (KeyNotFoundException)
            {
                value = defaultValue;
            }
            catch (ArgumentException)
            {
                value = defaultValue;
            }
            return value;
        }

        public string WeekBeginsSetting
        {
            get
            {
                return GetValueOrDefault<string>(WeekBeginsSettingKeyName, WeekBeginsSettingDefault);
            }
            set
            {
                AddOrUpdateValue(WeekBeginsSettingKeyName, value);
                Save();
            }
        }

    And in the XAML:

        <toolkit:ListPicker x:Name="WeekStartDay" Header="Week begins on"
                            SelectedItem="{Binding Source={StaticResource AppSettings}, Path=WeekBeginsSetting, Mode=TwoWay}">
            <sys:String>monday</sys:String>
            <sys:String>sunday</sys:String>
        </toolkit:ListPicker>

    The StaticResource AppSettings is a resource from a separate .cs file:

        <phone:PhoneApplicationPage.Resources>
            <local:ApplicationSettings x:Key="AppSettings"></local:ApplicationSettings>
        </phone:PhoneApplicationPage.Resources>

    Thanks in advance

    Read the article

  • How to get parameter values for dm_exec_sql_text

    - by Ted Elliott
    I'm running the following statement to see what queries are executing in SQL Server:

        select *
        from sys.dm_exec_requests r
        cross apply sys.dm_exec_sql_text(r.sql_handle)
        where r.database_id = DB_ID('<dbname>')

    The SQL text that comes back is parameterized:

        (@Parm0 int) select * from foo where foo_id = @Parm0

    Is there any way to get the values of the parameters that the statement is using? Say, by joining to another table perhaps?

    Read the article

  • how to import the parent module on gae-python

    - by zjm1126
    main:

        .
        +-a
        ¦ +-__init__.py
        ¦ +-aa.py
        +-b
        ¦ +-__init__.py
        ¦ +-bb.py
        +-cc.py

    If I am in aa.py, how do I import cc.py? This is my code, but it gives an error:

        from main import cc

    What should I do? Thanks.

    Updated: in a normal Python file (not on GAE), I can use this code:

        import os,sys
        dirname=os.path.dirname
        path=os.path.join(dirname(dirname(__file__)))
        sys.path.insert(0,path)
        import cc
        print cc.c

    but on GAE it shows the error:

        ImportError: No module named cc
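    On App Engine the application root (the directory that holds app.yaml, here main/) is what gets placed on sys.path, so cc.py is a top-level module and "from main import cc" has nothing to resolve against. A minimal sketch of aa.py that works whether or not the root is already on the path:

        # inside a/aa.py: make sure the project root (the directory containing
        # cc.py) is on sys.path, then import cc as a top-level module
        import os, sys

        root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        if root not in sys.path:
            sys.path.insert(0, root)

        import cc
        print cc.c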

    Read the article

  • how to import a 'zip' file to my .py ..

    - by zjm1126
    When I use http://github.com/joshthecoder/tweepy-examples, I find:

        import tweepy

    in appengine\oauth_example\handlers.py, but I can't find a tweepy directory or any tweepy .py files - only a tweepy.zip file. I don't think this is right, because I have never imported a zip file. Then I find this in app.py:

        import sys
        sys.path.insert(0, 'tweepy.zip')

    Why? How do you import a zip file? Thanks.
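    That sys.path line is the whole trick: Python's zipimport machinery treats a zip archive on sys.path like a directory, so packages stored inside it import normally. A minimal sketch of the same mechanism:

        # minimal sketch: import a package that lives inside a zip archive
        import sys

        sys.path.insert(0, 'tweepy.zip')   # the archive contains a tweepy/ package
        import tweepy                      # resolved from inside the zip
        print tweepy.__file__              # e.g. something like tweepy.zip/tweepy/__init__.py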

    Read the article

  • How to flush the input stream in python?

    - by jinxed_coder
    I'm writing a simple alarm utility in Python:

        #!/usr/bin/python
        import time
        import subprocess
        import sys

        alarm1 = int(raw_input("How many minutes (alarm1)? "))
        while (1):
            time.sleep(60*alarm1)
            print "Alarm1"
            sys.stdout.flush();
            doit = raw_input("Continue (Y/N)?[Y]: ")
            print "Input",doit
            if doit == 'N' or doit=='n':
                print "Exiting....."
                break

    I want to flush or discard all the keystrokes that were entered while the script was sleeping and only accept the keystrokes entered after the raw_input() is executed.
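    On POSIX systems the pending keystrokes sit in the terminal's input queue, and termios can drop them just before prompting. A minimal sketch (assumes stdin is a real terminal):

        # minimal sketch: discard anything typed while the script was sleeping,
        # so raw_input only sees keystrokes made after the prompt appears
        import sys, termios

        termios.tcflush(sys.stdin.fileno(), termios.TCIFLUSH)
        doit = raw_input("Continue (Y/N)?[Y]: ")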

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >