Search Results

Search found 9128 results on 366 pages for 'big theta'.

  • Native Endians and Auto Conversion

    - by KnickerKicker
    So the following converts big-endian values to little-endian ones:

        uint32_t ntoh32(uint32_t v) {
            return (v << 24) | ((v & 0x0000ff00) << 8) | ((v & 0x00ff0000) >> 8) | (v >> 24);
        }

    Works like a charm. I read 4 bytes from a big-endian file into char v[4] and pass it into the above function as ntoh32(*reinterpret_cast<uint32_t *>(v)). That doesn't work, because my compiler (VS 2005) automatically converts the big-endian char[4] into a little-endian uint32_t when I do the cast. AFAIK this automatic conversion will not be portable, so I use:

        uint32_t ntoh_4b(char v[]) {
            uint32_t a = 0;
            a |= (unsigned char)v[0]; a <<= 8;
            a |= (unsigned char)v[1]; a <<= 8;
            a |= (unsigned char)v[2]; a <<= 8;
            a |= (unsigned char)v[3];
            return a;
        }

    Yes, the (unsigned char) cast is necessary. Yes, it is dog slow. There must be a better way. Anyone?
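
    A minimal sketch of the same conversion in Python, for illustration only (the question itself is about C++ under VS 2005); reading 4 big-endian bytes as an unsigned 32-bit integer is exactly what struct.unpack with the ">" format prefix does:

        # Python sketch for illustration; the original question is C++.
        import struct

        def ntoh32(raw: bytes) -> int:
            """Interpret 4 big-endian bytes as an unsigned 32-bit integer."""
            return struct.unpack(">I", raw)[0]

        # The big-endian byte sequence 00 00 01 00 is 256.
        assert ntoh32(b"\x00\x00\x01\x00") == 256
        # Equivalent one-liner on Python 3:
        assert int.from_bytes(b"\x00\x00\x01\x00", "big") == 256

    In C++, ntohl (from <winsock2.h> on Windows or <arpa/inet.h> on POSIX) has exactly these network-to-host semantics for 32-bit values.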

    Read the article

  • How to map a test onto a list of numbers

    - by Arthur Ulfeldt
    I have a function with a bug:

        user> (-> 42 int-to-bytes bytes-to-int)
        42
        user> (-> 128 int-to-bytes bytes-to-int)
        -128
        user>

    Looks like I need to handle overflow when converting back. Better write a test to make sure this never happens again. This project is using clojure.contrib.test-is, so I write:

        (deftest int-to-bytes-to-int
          (let [lots-of-big-numbers (big-test-numbers)]
            (map #(is (= (-> % int-to-bytes bytes-to-int) %)) lots-of-big-numbers)))

    This should test that converting to a seq of bytes and back again produces the original result, on a list of 10000 random numbers. Looks OK in theory? Except none of the tests ever run:

        Testing com.cryptovide.miscTest
        Ran 23 tests containing 34 assertions.
        0 failures, 0 errors.

    Why don't the tests run? What can I do to make them run?
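
    A likely culprit is laziness: Clojure's map returns a lazy sequence, so the (is ...) assertions inside it are never forced when the deftest runs; forcing the sequence (for example with doseq, dorun, or doall) makes them execute. A minimal Python analogy, for illustration only (the check helper below is hypothetical):

        # Python analogy of the lazy-sequence pitfall; the original question is Clojure.
        def check(n: int) -> None:
            round_trip = int.from_bytes(n.to_bytes(4, "big", signed=True), "big", signed=True)
            assert round_trip == n

        numbers = [42, 128, 70000]

        # Lazy: the generator is never consumed, so no assertion ever runs.
        # This mirrors wrapping `is` assertions in map inside a deftest.
        unused = (check(n) for n in numbers)

        # Forcing the iteration actually runs the checks, analogous to
        # doseq/dorun (or doall around map) in Clojure.
        for n in numbers:
            check(n)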

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers days of work, using an optimized and multi-threaded algorithm... I really do need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one big 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file, a lot of times. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. BTW, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
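
    A sketch of the memory-mapped approach in Python, for illustration only (in Java the equivalent is FileChannel.map returning a MappedByteBuffer); mapping the file does not load all 800 MB into RAM, the OS pages in only the pages that are actually touched, so scattered 32-bit reads stay cheap. The file name below is hypothetical:

        # Memory-mapped random reads; Python used for illustration only.
        import mmap
        import struct

        PATH = "precomputed.table"  # hypothetical file name

        with open(PATH, "rb") as f:
            with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as table:

                def read_u32(offset: int) -> int:
                    # One little-endian 32-bit value at an arbitrary byte offset;
                    # only the page containing `offset` needs to be resident.
                    return struct.unpack_from("<I", table, offset)[0]

                value = read_u32(123456)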

    Read the article

  • I'm annoyed with ASP.NET MVC action links. Is there something better in MVC 3?

    - by Jonathon Kresner
    After almost 3 years with MVC I'm scratching my head. Is it just me, or does the way we specify links in ASP.NET MVC suck?

        @Html.ActionLink("Log Off", "LogOff", "Account")

    In the previews for MVC 1 we had the funky generic action links which gave us IntelliSense and compile checking, which I LOVED. I know they removed them because of performance issues and because you could not actually guarantee that the route would resolve all the time... However, the default way of doing it just doesn't make me feel safe enough in a big application. I've also used T4MVC with MVC 2; to be honest, I didn't really like it. It's not part of the MVC framework and it's frustrating to develop with, especially with source control in big teams and continuous integration builds. I guess I could also import MVC Futures and keep using the generic types (it's probably what I'll do). I'm just about to start a very big project and was wondering what other people are thinking. Is anyone else annoyed with the options, or has anyone got a new solution? It seems like action links are the most basic and most frequently used feature; shouldn't there be a good out-of-the-box solution? We're just about to hit revision 3 of this framework.

    Read the article

  • MS SQL - High performance data inserting with stored procedures

    - by Marks
    Hi. I'm searching for a very high-performance way to insert data into an MS SQL database. The data is a (relatively big) construct of objects with relations. For security reasons I want to use stored procedures instead of direct table access. Let's say I have a structure like this:

        Document
          MetaData
            User
            Device
          Content
            ContentItem[0]
              SubItem[0]
              SubItem[1]
              SubItem[2]
            ContentItem[1]
              ...
            ContentItem[2]
              ...

    Right now I'm thinking of creating one big query, doing something like this (just pseudo-code):

        EXEC @DeviceID = CreateDevice ...;
        EXEC @UserID = CreateUser ...;
        EXEC @DocID = CreateDocument @DeviceID, @UserID, ...;
        EXEC @ItemID = CreateItem @DocID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        ...

    But is this the best solution for performance? If not, what would be better? Split it into more queries? Give all the data to one big stored procedure to reduce the size of the query? Any other performance clue? I also thought of giving multiple items to one stored procedure, but I don't think it's possible to pass a non-static number of items to a stored procedure. Since INSERT INTO A VALUES (B,C),(C,D),(E,F) is faster than 3 single inserts, I thought I could get some performance there. Thanks for any hints, Marks
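
    On SQL Server 2008 and later, the usual way to hand a variable number of rows to a single stored procedure is a table-valued parameter, so all SubItems can go in one call. Purely as an illustration of the batching idea (not of the stored-procedure approach itself), here is a hedged Python/pyodbc sketch; the DSN, table and column names are hypothetical:

        # Batching child-row inserts from Python with pyodbc (illustration only;
        # the question itself is about T-SQL stored procedures).
        import pyodbc

        conn = pyodbc.connect("DSN=docs_db")  # hypothetical DSN
        cur = conn.cursor()

        sub_items = [(1, "a"), (1, "b"), (1, "c")]  # (ItemID, Payload) rows

        # One parameterized statement, many rows per round trip; the same idea
        # as INSERT INTO A VALUES (B,C),(C,D),(E,F) beating three single inserts.
        cur.fast_executemany = True
        cur.executemany(
            "INSERT INTO SubItem (ItemID, Payload) VALUES (?, ?)",  # hypothetical table
            sub_items,
        )
        conn.commit()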

    Read the article

  • How best to present a security vulnerability to a web development team in your own company?

    - by BigCoEmployee
    Imagine the following scenario: you work at Big Co., and your coworkers down the hall are on the web development team for Big Co.'s public blog system, which a lot of Big Co. employees and some members of the public use. The blog system allows any HTML and JavaScript; you've been told that this was a deliberate choice (not an accident), but you aren't sure they realize the implications. So you want to convince them that this is a bad idea. You write some demonstration code and plant an XSS script in your own blog, and then write some blog posts. Soon after, the head blog admin (down the hall) visits your blog post and the XSS sends his cookies to you. You copy them into your browser and you are now logged in as him. Okay, now you're logged in as him... and you start realizing that it maybe wasn't such a good idea to go ahead and 'hack' the blog system. But you are a good guy! You don't touch his account after logging into it, and you definitely don't plan on publicizing this weakness; you just want to show them that the public is able to do this, so that they can fix it before someone malicious realizes the same thing. What is the best course of action from here?

    Read the article

  • Can git avoid storing the history of specific folders when working with git-svn?

    - by Timofey Basanov
    In short: is there a way to disable storing the full history of specific folders in a git-svn repo? We have a pretty large SVN repo with a big checkout. I would like to migrate it to Git for my local development, because Git speeds up the update and status commands by orders of magnitude. When I simply do git svn clone, it creates a very big repo, big enough to be bigger than my whole HDD. The problem lies in binary directories whose history is too large. The latest binaries are required for a proper local build, but their history is not needed at all for my development process; I will never change them myself. I would like to store only the latest versions of specific folders, or maybe a history, but of no more than a week. I could only find a filter for git svn fetch which excludes specific folders entirely, and that is not exactly what I need. It's OK with me to have a cron task which deletes history from specific folders, but I do not know how to make one. Also, cron does not solve the problem of the first git svn clone. P.S. The SVN repository structure cannot be changed by any means.

    Read the article

  • How do I split ONE array into two separate arrays based on magnitude and a threshold?

    - by youhaveaBigego
    I have an array which has BIG numbers and small numbers in it. I got it after running a log through Wireshark; it is the total number of bytes of TCP traffic. But Wireshark does not discriminate (it would actually try, and hence it tells you the traffic stats of ALL types of traffic). This is how the array looks:

        @Array=qw(10912980 10924534 10913356 10910304 10920426 10900658 10911266 10912088 10928972 10914718 10920770 10897774 10934258 10882186 10874126 8531 8217 3876 8147 8019 68157 3432 3350 3338 3280 3280 7845 7869 3072 3002 2828 8397 1328 1280 1240 1194 1193 1192 1194 6440 1148 1218 4236 1161 1100 1102 1148 1172 6305 1010 5437 3534 4623 4669 3617 4234 959 1121 1121 1075 3122 3076 1020 3030 628 2938 2938 1611 1611 1541 1541 1541 1541 1541 1541 1541 1541 1541 1541 1541 1541 583 370 178)

    When you look at this array carefully, one thing is obvious to the human eye: there are really BIG numbers and small numbers (basically, there is the 1% class and the low-income class, and no middle class). I want to split the array into two different arrays. That would require me to set a threshold. Array 1 should be ONLY the BIG numbers (10924534-10874126), and array 2 should be the smaller numbers (68157-178). By the way, the array is not sorted. The user will NOT input the threshold, so it should be determined smartly.
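
    One common heuristic for picking that threshold automatically is to sort the values and split at the largest gap between consecutive values. A minimal sketch in Python, for illustration only (the original data lives in a Perl array):

        # Split a list into "big" and "small" at the widest gap between
        # consecutive sorted values (Python, for illustration).
        def split_by_largest_gap(values):
            ordered = sorted(values)
            # Index i where the gap between ordered[i] and ordered[i + 1] is widest.
            i = max(range(len(ordered) - 1), key=lambda k: ordered[k + 1] - ordered[k])
            threshold = ordered[i + 1]
            small = [v for v in values if v < threshold]
            big = [v for v in values if v >= threshold]
            return big, small

        big, small = split_by_largest_gap([10912980, 10924534, 8531, 8217, 68157, 178])
        # big   -> [10912980, 10924534]
        # small -> [8531, 8217, 68157, 178]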

    Read the article

  • OS choice between: Debian, gNewSense, and OpenSolaris

    - by penyuan
    I am planning to migrate from Mac OS X and Windows to either a Unix or Linux distribution, i.e. I am a Linux/Unix beginner. Right now the following caught my interest: Debian: well established, with a huge repository of 20000+ apps. gNewSense: a "totally free" version of Ubuntu, so it should be more beginner friendly? OpenSolaris: also open source, and built on a "strong" Unix base. I do mainly basic tasks such as web browsing, office work, maintaining a big photo collection, and a little bit of programming. Questions: How "free" is each of these distributions compared to the others, and is this whole freedom thing a big deal? Will a binary labeled as for Ubuntu work on gNewSense? What are simple IDEs for Debian and gNewSense?

    Read the article

  • Is my PC Good enough [closed]

    - by Moinak Nath
    I'm getting a new laptop this Christmas and I was wondering if it's good enough for what I do. I'll be playing games like Need for Speed: Most Wanted (2012) and other NFS games, plus Silent Hunter and flight sims. I also browse the internet, download stuff, watch movies, occasionally type documents with Word, edit videos, and transfer files. To be more specific: is the HDD big enough? Is the RAM big enough? Is the graphics card good? Is the CPU speed enough? And is Windows 8 good for all these things? I also video chat. These are the specs:

        2.2 GHz Intel Pentium B960 Dual Core
        4 GB RAM
        320 GB HDD
        Intel HD Graphics
        720p Webcam
        4 USB Ports (2 USB 3.0, 2 USB 2.0)
        HDMI Port

    It is a Lenovo IdeaPad. This is the one I'm looking at: http://www.bestbuy.com/site/Lenovo+-+IdeaPad+15.6%26%2334%3B+Laptop+-+4GB+Memory+-+320GB+Hard+Drive+-+Black/6851264.p?id=1218809260330&skuId=6851264#tab=specifications

    Read the article

  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data have been sent. For example, I've got a view which looks like this:

        def debug_big_file(request):
            return HttpResponse("x" * 500000)

    But when I access that URL through nginx, I only get 65283 bytes:

        $ curl https://example.com/debug/big-file | wc
        …
        curl: (18) transfer closed with outstanding read data remaining
        0       1   65283

    Note that everything works as expected when accessing gunicorn directly:

        $ curl http://localhost:1234/debug/big-file | wc
        …
        0       1  500000

    The relevant nginx config:

        location / {
            proxy_pass http://localhost:1234/;
            proxy_redirect off;
            proxy_headers_hash_bucket_size 96;
        }

    And nginx version 1.7.0. Some other facts:

    The number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283).
    110k bytes are sent correctly (ie, "x" * 110000 returns all 110,000 bytes), but 120k bytes are not.
    tcpdump suggests that nginx is sending a RST packet to gunicorn:

    Read the article

  • Should I install Windows Management Framework 3.0?

    - by Massimo
    I'm posting this as a BIG CAVEAT to everyone. I know it's not a standard Q&A, but I think this is something every Windows admin should know. There is a very real risk of running into big trouble. Microsoft has recently released Windows Management Framework 3.0 for Windows Server 2008 and Windows Server 2008 R2 systems, which includes some nice things native to Windows Server 2012 (like PowerShell 3.0) and lots of improvements to WMI, WinRM and other management technologies. Windows Update is advertising it as an optional update. Should I install it on my servers?

    Read the article

  • Flexible classroom environments (OS, Office)

    - by HannesFostie
    I work in the IT department of a training center. We still offer XP and Office 2003 trainings, but we also offer Vista, Win7 and Office 2007. Currently we use VMs on VMware Server, but this is obviously not a superb choice. We're thinking of implementing something like VDI (brainstorm phase, we hardly have any details), but I decided to check here whether people have some clever alternatives. Requirements:

    * Flexible when it comes to deployment
    * Centralized management would be a big plus
    * Allow for different software, whether compatible or not (all of Office except Outlook can be installed simultaneously; for Outlook you need to choose between 2003 and 2007)
    * Allow for different OSes

    We have a big enough budget to implement a proper SAN environment to accommodate the virtualization of the solution, whatever kind it may be. A support contract will probably be necessary as well, because we need to be able to offer quick solutions to problems, and with only 2 sysadmins that is simply impossible to guarantee.

    Read the article

  • After installing Win7 on my laptop I'm having trouble extending the desktop to an external monitor

    - by devoured elysium
    I have an HP TX2000 laptop and an HP w2408 screen. Yesterday I installed Win7, and I'm having trouble getting the 24-inch screen to work as a secondary screen. It seems like my laptop cannot detect both screens (its own and the 24-inch one). I think I already have all the drivers installed (I ran Win7's tool to detect and automatically update drivers and it said everything was up to date!), so what might be the problem? If I connect the 24-inch screen to my laptop, it will happily show a copy of the laptop display on that big screen too, but I'd like to have it extend the desktop to the big screen instead.

    Read the article

  • tweak windows 7 virtual memory and cache / caching settings

    - by bortao
    I'm on Windows 7 64-bit with 4 GB of memory. Whenever I copy or deal with a big amount of data, Windows swaps everything out of memory to the virtual-memory swap file to make room for the data cache. The problem is: I don't really need caching of the data I'm copying; it's being copied only once, so caching it won't help me. On the other hand, swapping out the programs gives me a big lag whenever I want to use those open programs again. What I want: restrict the data cache to a certain amount, let's say 1 GB, or reserve a certain amount of memory, let's say 2 GB, exclusively for running programs' memory. My swap file is on a separate partition, but I still have problems with swapping time.

    Read the article

  • size of extent on LVM2

    - by piotrek
    In LVM1 there was a limit of 65k extents, so the size of an extent had to be chosen carefully, trading wasted space on partitions (too big an extent) against the maximum possible size of a logical volume (too small an extent). In LVM2 (according to http://docstore.mik.ua/manuals/hp-ux/en/5992-4589/apa.html) the limit is ~16 million extents, so the default size of 4 MB gives ~60 TB of LV size. So is there any point in making the extent larger than 4-16 MB on a desktop? Is there any performance degradation or other cost to having a big number of extents?
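
    For reference, the trade-off is just arithmetic: maximum LV size is extent size times the maximum number of extents. A quick sketch using the figures quoted above:

        # Back-of-the-envelope arithmetic only (figures from the question above).
        MIB = 1024 * 1024

        def max_lv_size_tib(extent_mib: int, max_extents: int) -> float:
            return extent_mib * MIB * max_extents / (1024 ** 4)

        print(max_lv_size_tib(4, 65_536))       # ~0.25 TiB: LVM1-style limit, 4 MiB extents
        print(max_lv_size_tib(32, 65_536))      # ~2 TiB:    LVM1-style limit, 32 MiB extents
        print(max_lv_size_tib(4, 16_000_000))   # ~61 TiB:   ~16M extents of 4 MiB each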

    Read the article

  • Recommended boot partition size for Windows 7

    - by dwj
    I started using One Big Partition for everything and separating data out with folders when I got my current computer years ago. I'm preparing to upgrade my system from Windows XP to Windows 7 and I thought I might go back to putting my data on a separate partition. Most likely I'll just use the default OS install. My current Program Files tree has ~16 GB of stuff. Thinking ahead though, I've had XP installed for years. Who knows what apps I'm going to install down the line? This, of course, begs the question: How big do I make my Windows 7 install partition?

    Read the article

  • Looking for software to harden Windows machines

    - by MosheH
    I'm the network administrator of a small/medium network. I'm looking for software (free or not) which can harden Windows computers (XP and Win7), for the purpose of hardening standalone desktop computers (not in a domain network). Note: the computers are completely isolated (standalone), so I can't use Active Directory group policy. Moreover, there are too many restrictions that I need to apply, so it is not practical to set them up manually (one by one). Basically what I'm looking for is software that can restrict and disable access for specific user accounts on the system. For example: user John can only open one application and nothing else; he doesn't see any icons on the desktop or start menu, except for the one or two applications I want to allow; he can't right-click on the desktop; the taskbar icons are not shown; there are no folder options, etc. User Mary can open a specific application and copy data to one folder on the D drive. User Dan has access to all drives but cannot install software, and so on... So far I've found only the following desktop restriction software, but each one seems to miss one or more features:

    1. Faronics WINSelect - seems to answer most of our needs except one feature which is very important to us but seems to be missing, which is restriction per profile. WINSelect only allows setting up restrictions which are applied system-wide. If I have multiple user accounts on the system and want to apply different restrictions for each user, I can't.
    2. Deskman (no restriction per user) - same thing, no restriction per profile.
    3. Desktop Security Rx - not relevant, no Win7 support.

    The only software I've found which offers restriction per profile is "1st Security Agent", but its GUI is very complicated and not very intuitive. It's worth mentioning that I'm not looking for "internet kiosk software", although those share some features with what I need. All I need is software (like http://www.faronics.com/standard/winselect/) that offers a way to restrict the Windows user interface. So if anybody knows of hardening software which allows setting up user restrictions on Windows systems, it will be a big, big, big help for me! Thanks to you all

    Read the article

  • How to clone and deploy multiple machines?

    - by Mimi
    Hi. What I want to do is basically make bootable clones or disc images or whatever (let's call it "stuff" for now) and store many of them on a single big HDD. Then I want to use that big HDD and the "stuff" on it to deploy new machines without the hassle of reinstalling Windows and whatnot (i.e. I do not want to restore or simply clone the same machine; I literally want to use clones to save time during machine setup). Obviously I'd deploy the "stuff" onto an HDD going into a machine whose hardware I know, to avoid driver issues and whatnot. I also have enough Windows keys to deal with the multiple activations. I tried using Acronis True Image but it doesn't seem to do what I want (unless I don't know how to use it properly). Any advice is welcome, thanks :)

    Read the article

  • What's the best solution for file sharing in my case? DAS or NAS?

    - by jakub
    I want to have in my network a small, cheap and energy-efficient server which will be fully customizable (GNU/Linux, OpenBSD). What is more, I want to have big, redundant storage in my network and access to it via the server. I already have a small terminal without a hard drive (no SATA/PATA, one drive on USB) which works fine. I don't want to buy a big server or use a regular computer for this; it's not cheap. I thought about a small case (ITX?) and a cheap computer in it with SATA ports, but I cannot find anything interesting :( I also thought about having a NAS in the network and the server independently, booting the server from the NAS, but I'm not sure which technologies would be good for that, and I don't know about the performance. Direct connection to the NAS through the network from a workstation is another plus of that approach. What do you think about DAS? Would it be good for that?

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (it already mounts ext4 filesystems with 4096-byte block sizes). According to my reading on ext4, it should allow such a big block size. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or 180 days,
        whichever comes first. Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb 5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb 5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb 5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug 4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail  or so
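
    For what it's worth, the mkfs warning "65536-byte blocks too big for system (max 4096)" matches the general Linux rule that a filesystem's block size cannot exceed the CPU page size, which is 4096 bytes on i386/x86-64. A quick check on the machine doing the mount (Python, for illustration only):

        # Print the page size that caps the mountable filesystem block size.
        import mmap
        import os

        print("mmap.PAGESIZE:", mmap.PAGESIZE)
        print("sysconf page size:", os.sysconf("SC_PAGE_SIZE"))
        # Both print 4096 on i386/x86-64, so an ext4 fs made with -b 65536
        # cannot be mounted there, consistent with the mount error above.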

    Read the article

  • How to host a scalable social networking app

    - by christopher-mccann
    I am in the middle of developing a social networking application for a very select user niche which could scale to a few million users. So far I have always hosted applications on the RackSpace Cloud, and I have no issues with them at all: it has always been a really good service and I've never had any downtime. My question, though, is: does anyone think that cloud computing is not the way to host scalable web apps? Or can anyone with experience of this recommend a better solution? I have always shunned trying to run big servers from my own facilities, as it seems silly to go to the expense of bringing in big alternative power supplies and all the other necessary precautions when other companies already do this. I looked at managed hosting services, but that proved to be a bit too expensive for us at the start, and the scalability wasn't good enough: it would take a day or two to get a new server provisioned. Therefore I ended up on a cloud platform. If anyone has any recommendations or advice it would be greatly appreciated.

    Read the article
