Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • How To Change Attachment Size in WorkItems in TFS 2010

    - by Ravi
    Recently, I came across an issue where I had to change the size limit for WorkItem attachments in TFS 2010. I searched all around the internet only to find very little information about it, and honestly what there was wasn't clear. So after breaking my head for some time, I managed to do it. Here are my conclusions:
    1. You DON'T have to change it programmatically. You can do it directly from the IIS web services.
    2. You CAN change it programmatically too, by writing an entry into the TFS registry with a small piece of code.
    Let me show you how it is done from IIS. This changes the attachment size limit for work items in TFS 2010 for each collection individually:
    1. You must be a TFS admin to do this (log in with the setup account).
    2. Browse to http://localhost:8080/tfs/<YOUR-COLLECTION-NAME>/WorkItemTracking/v1.0/ConfigurationSettingsService.asmx
    3. You'll see three web methods: GetMaxAttachmentSize, GetWorkItemTrackingVersion and SetMaxAttachmentSize.
    4. To find the current maximum attachment size for a collection, click the first method and then the 'Invoke' button; you'll see the current value for this particular collection (the value is in bytes).
    5. Now click 'SetMaxAttachmentSize' and fill in the value of your choice.
    6. Reset IIS (not strictly required, honestly, but I did it just to be sure).
    7. Now try attaching a file larger than the size you've set: it fails, as intended. Below is the error you'd see in that scenario.
    Let me know if you see any issues & I'll be happy to help..!
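    As a footnote, since the .asmx test page's 'Invoke' button just performs an HTTP POST of form-encoded values, the same change can be scripted. Here is a minimal sketch in Python; the parameter name "maxSize" and the NTLM credentials are assumptions (check the test page for the exact parameter name the service expects, and use a TFS admin account):

        import requests
        from requests_ntlm import HttpNtlmAuth  # TFS normally sits behind Windows auth

        SERVICE = ("http://localhost:8080/tfs/YOUR-COLLECTION-NAME"
                   "/WorkItemTracking/v1.0/ConfigurationSettingsService.asmx")
        AUTH = HttpNtlmAuth("DOMAIN\\tfssetup", "password")  # hypothetical admin account

        # Read the current limit (value is in bytes), then raise it to 8 MB.
        print(requests.post(SERVICE + "/GetMaxAttachmentSize", auth=AUTH).text)
        resp = requests.post(SERVICE + "/SetMaxAttachmentSize",
                             data={"maxSize": 8 * 1024 * 1024}, auth=AUTH)
        resp.raise_for_status()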

    Read the article

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7 and I have already set up the following, but when I run Firefox it seems not to cache any of my images, even with "remember history" set.

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <staticContent>
              <clientCache cacheControlCustom="public"
                           cacheControlMode="UseMaxAge"
                           cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>

    However, Firebug still shows Firefox not caching images and CSS. Response headers:

        Cache-Control    public,max-age=604800
        Content-Type     text/css
        Content-Encoding gzip
        Last-Modified    Mon, 27 Jun 2011 03:53:22 GMT
        Accept-Ranges    bytes
        Etag             "507968c27d34cc1:0"
        Vary             Accept-Encoding
        Server           Microsoft-IIS/7.5
        X-Powered-By     ASP.NET
        Date             Mon, 27 Jun 2011 13:06:41 GMT
        Content-Length   5067

    Request headers:

        Host             www.xx.com
        User-Agent       Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
        Accept           text/css,*/*;q=0.1
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip, deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       115
        Connection       keep-alive
        Referer          http://www.xx.com/
        Cookie           __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397
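    One quick way to check whether the server side is behaving is to repeat the request with the validator Firefox would send; a well-configured server answers 304 Not Modified. A rough sketch (the URL is a stand-in for an affected asset):

        import requests

        URL = "http://www.xx.com/style.css"  # substitute an affected stylesheet/image

        first = requests.get(URL)
        print(first.status_code, first.headers.get("Cache-Control"), first.headers.get("ETag"))

        # Replay with the validator; a 304 here means caching/revalidation works
        # server-side and the problem is in the browser profile or settings.
        etag = first.headers.get("ETag")
        second = requests.get(URL, headers={"If-None-Match": etag} if etag else {})
        print(second.status_code)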

    Read the article

  • xorg-edgers PPA with llvmpipe breaks AMD APU system

    - by linux_RRT
    I've read about this happening before to another user with the same system. I was running Ubuntu 12.04 LTS with kernel 3.2.0.29 and decided to give xorg-edgers a try. I purged fglrx* and xorg* before beginning, then upgraded with sudo apt-get upgrade -d. The system downloaded and installed 109MB worth of data, including llvmpipe, which I am very unfamiliar with, and kernel 3.5.0.11. The system was then rebooted to finalize the upgrade. It now boots to a black screen and tells me "The system is running in low-graphics mode". Did I miss a step in the install? Or do the newest open-source drivers just not work with my hardware? I realize this hardware (an APU) is fairly new. I dropped to the command line via the fallback menu and attempted to start lightdm as root, but the system hangs in 'Configuring kernel parameters' at: Starting initializes zram swaping ...and then it just sits there. The other thing that concerns me is the output at the top of the screen that says: could not write bytes: Broken pipe. Does llvmpipe work for this type of system? To be clear, the system is: MSI x370-206us laptop, Radeon HD 6320, AMD E-450 APU, 1.67GHz dual-core. Any help would be much appreciated. Like I said, I'm pretty sure I followed the right order of operations for the install procedure, but I was curious whether anyone with similar hardware had experienced anything like this.

    Read the article

  • There's IP but can't reach gateway

    - by icky
    I have just installed Ubuntu 12.04 on my new laptop and brought it back home, but the wireless network does not work. Strangely, it has the correct IP but can't connect to the gateway. ifconfig gives IP 192.168.64.36, with broadcast 192.168.79.255 and mask 255.255.240.0; these are all correct. The gateway is at 192.168.64.1.

        $ cat /etc/resolv.conf
        nameserver 192.168.64.1
        nameserver 127.0.0.1

    which I think is also right. But when I ping 192.168.64.1, all packets are lost. Please help me with this; I really do not know what happened to my network settings. Huckle, thank you for your reply:

        $ ifconfig
        wlan0  Link encap:Ethernet  HWaddr 88:f9:af:2a:ca:1b
               inet addr:192.168.64.36  Bcast:192.168.79.255  Mask:255.255.240.0
               inet6 addr: fe80::8a9f:faff:fea2/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:27 errors:0 dropped:0 overruns:0 frame:0
               TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:3950  TX bytes:60288

        $ iwconfig
        wlan0  IEEE 802.11bgn  ESSID:"Chiono"
               Mode:Managed  Frequency:2.417 GHz  Access Point: 82:54:99:94:6D:43
               Bit Rate=13.5 Mb/s  Tx-Power=13 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Encryption key:off
               Power Management:on
               Link Quality=70/70  Signal level=-32 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:9  Invalid misc:10  Missed beacon:0

        $ route
        Kernel IP routing table
        Destination   Gateway       Genmask         Flags Metric Ref Use Iface
        default       192.168.64.1  0.0.0.0         UG    0      0   0   wlan0
        link-local    *             255.255.0.0     U     1000   0   0   wlan0
        192.168.64.0  *             255.255.240.0   U     2      0   0   wlan0

    Thank you very much

    Read the article

  • Subdomain still times out after set up a month ago

    - by user8137
    I'm a newbie at this, and it has probably been asked already, but the answers I found online were close to the subject yet too vague, so I've probably really messed this up. I would really appreciate specific step-by-step instructions. This is what I'd like to do: have the subdomain www.high-res.domain.com accessed by external customers with specific permissions (like FTP). We use Network Solutions to house domain.com. We recently added a new IP address to point to www.high-res.domain.com, and I gave the IP address to the company that hosts our website. Pinging www.high-res.domain.com resolves to the correct IP address, but the ping itself still times out. It's been a few weeks now and it still does:

        C:\>ping XXX.XXX.X.XXX
        Pinging XXX.XXX.X.XXX with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for XXX.XXX.X.XXX:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)

    Tracert times out as well. I even went to DNS tools and a few other sites for checking this, and they show the same thing. I recently went into DNS management on our server (Win2k3 SP1) and created an A record under DomainDnsZones, which shows up as a CNAME when you look at it. Under the domain there are two entries, one for the subdomain and the other for the website host, each with a separate IP address. Is this correct? The website people are too busy on another project to research it further and my friends haven't gotten back to me. Please help. Thanks KK
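    Since the name already resolves, it helps to separate DNS from reachability: if a lookup returns the right address but ping and tracert die, the DNS record is fine and the host (or a firewall in front of it) is dropping the traffic. A quick check, with a hypothetical name standing in for the real subdomain:

        import socket

        HOST = "high-res.domain.com"  # stand-in for the real subdomain
        try:
            print(socket.gethostbyname(HOST))  # compare against the IP you assigned
        except socket.gaierror as exc:
            print("DNS lookup failed:", exc)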

    Read the article

  • Connected to wireless, but no internet access

    - by boogaloo
    After installing Ubuntu 12.04 a week ago, wireless internet had been working fine. It stopped working yesterday, however, and I'm at a loss for what to do even after scouring replies to similar posted problems. I have tried using Google's public DNS and turning off proxy settings in Firefox. I have used nm-tool and lshw to make sure my wireless device and driver are connected. If anyone can help me resolve this issue I would be extremely grateful!

    @kregerjd:

        $ ping -c3 www.google.com
        ping: unknown host www.google.com

    @Alaa:

        $ route -n
        Kernel IP routing table
        Destination   Gateway      Genmask        Flags Metric Ref Use Iface
        0.0.0.0       192.168.1.1  0.0.0.0        UG    0      0   0   wlan0
        169.254.0.0   0.0.0.0      255.255.0.0    U     1000   0   0   wlan0
        192.168.1.0   0.0.0.0      255.255.255.0  U     2      0   0   wlan0

        $ ping -c4 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        From 192.168.1.104 icmp_seq=1 Destination Host Unreachable
        From 192.168.1.104 icmp_seq=2 Destination Host Unreachable
        --- 192.168.1.1 ping statistics ---
        4 packets transmitted, 0 received, +2 errors, 100% packet loss, time 2998ms
        pipe 4

    Read the article

  • UDF Partition reported full when it is not

    - by Capt.Nemo
    I was using these instructions to set up an external hard disk with UDF. I have been able to set up a multi-partition system following them, but I seem to have hit a wall where the partition is reported as full while writing to the disk, even though every other tool available to me reports it as free (relevant lshw output and a screenshot of the disk omitted here). Both the output of df and the file manager (caja) report the disk as free:

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sda9   9.0G  7.6G  910M   90%   /
        udev        974M  12K   974M   1%    /dev
        /dev/sda1   50G   47G   295M   100%  /media/Data
        /dev/sda6   49G   41G   5.9G   88%   /home
        /dev/sda2   155G  127G  29G    82%   /media/Entertainment
        /dev/sda8   14G   13G   516M   96%   /media/Stuff
        /dev/sdb2   120G  1.9G  112G   2%    /media/3c887659-5676-4946-875b-b797be508ce7
        /dev/sdb3   11G   2.6G  7.7G   25%   /media/108b0a1d-fd1a-4f38-b1c6-4ad1a20e34a3
        /dev/sdb1   802G  34G   768G   5%    /media/disk

    I seem to have hit a wall near the 35GB mark. Despite being shown as 35GB/860GB used everywhere, the following happens on a write attempt:

        [2017][/media/Dory]$ echo D >> echo
        bash: echo: write error: No space left on device

    Writing byte by byte, the maximum I can take it to is 34719248K. The weirdest part is that when the disk is mounted on Windows, Windows can write to it easily, and those writes read back fine in Ubuntu. However, the used-bytes figure remains at 34719248K in Ubuntu (it does go higher on Windows).
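    One way to take df and caja out of the equation is to ask the kernel directly what it believes is free on the mount; if this reports hundreds of gigabytes free while writes still fail with ENOSPC, the limit is being imposed inside the filesystem driver (for example by UDF metadata), not by the block counts. A small sketch:

        import os

        st = os.statvfs("/media/disk")  # the UDF mount point from above
        free = st.f_bavail * st.f_frsize
        total = st.f_blocks * st.f_frsize
        print(f"{free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")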

    Read the article

  • Enterprise Instrumentation: The 'sessionName' parameter of value 'TraceSession' is not valid

    - by Michael Freidgeim
    We are still using Enterprise Instrumentation (created back in the .NET 1.1 days). In a new Server 2008 environment with IIS 7 we get the following errors:

        The 'sessionName' parameter of value 'TraceSession' is not valid. A trace session of
        this name does not exist in the TraceSessions configuration file for Windows Trace
        Session Manager service. Ensure that a session of this name exists in the
        TraceSessions configuration file and that the Windows Trace Session Manager service
        is started.
           at Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink..ctor(IDictionary parameters, EventSource eventSource)
           --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeConstructor(IRuntimeMethodInfo method, Object[] args, SignatureStruct& signature, RuntimeType declaringType)
           at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
           at Microsoft.EnterpriseInstrumentation.EventSinks.EventSink.CreateNewEventSinks(DataRow[] eventSinkRows, EventSource eventSource)

    I've seen the same errors on development Win7 machines when using IIS; it doesn't seem to be a problem under Cassini. I've checked that the Windows Trace Session Manager service has started and that the file C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\TraceSessions.config has the corresponding entry:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <defaultParameters minBuffers="4" maxFileSize="10" maxBuffers="25"
                             bufferSize="20" logFileMode="sequential" flushTimer="3" />
          <sessionList>
            <session name="TraceSession" enabled="false"
                     fileName="C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\Logs\TraceLog.log" />
          </sessionList>
        </configuration>

    The errors still continued, but I was able to disable the parameter in the eventSink configuration:

        <eventSink name="traceSink" description="Outputs events to the Windows Event Trace."
                   type="Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink">
          <!-- MNF disabled parameter to avoid error "The 'sessionName' parameter
               of value 'TraceSession' is not valid"
            <parameter name="sessionName" value="TraceSession" />
          -->
        </eventSink>

    Related old post: http://bytes.com/topic/net/answers/104761-enterprise-instrumentation-windows-trace-session-manager
    One day I wish to replace all Enterprise Instrumentation calls with NLog.

    Read the article

  • 64kb limit on the size of MSMQ Multicast Messages

    - by John Breakwell
    When Windows 2003 came out, Microsoft introduced the ability to broadcast messages to any machines that were listening for them. All you had to do was send out a message on a particular port and IP address, and any client that had set up a multicast queue with a matching port and IP address would get a copy. Since its introduction, a couple of security vulnerabilities have had to be removed:
    Microsoft Security Bulletin MS06-052: Vulnerability in Pragmatic General Multicast (PGM) Could Allow Remote Code Execution (919007)
    Microsoft Security Bulletin MS08-036: Vulnerabilities in Pragmatic General Multicast (PGM) could allow denial of service (950762)
    The second of these, MS08-036, was resolved through an undocumented change in functionality: a limit of 64KB was put on the maximum size of a message that could be broadcast using the multicast method. Obviously this has caused a few problems for any existing MSMQ multicast applications that expected to be able to send larger messages. A hotfix has been developed to resolve this problem: 961605 FIX: Multicast messages larger than 64 kilobytes (KB) are not delivered as expected by using Message Queuing 3.0 after security update MS08-036 is installed. A registry change is required:
    1. Open the registry with Regedit
    2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RMCAST\Parameters\
    3. Create a DWORD called MaxpacketSize
    4. Set the value to the desired number of bytes. You can set it to a value between zero and 4MB; if you specify anything above 4MB, it will default to 64KB.
    A reboot is needed after adding this value. A scripted version of the same change is sketched below.
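    For machines you can't click through by hand, the same registry change can be applied with a few lines of Python (run elevated; a reboot is still required afterwards). The 1MB figure below is just an example value:

        import winreg

        KEY_PATH = r"SYSTEM\CurrentControlSet\Services\RMCAST\Parameters"
        NEW_SIZE = 1 * 1024 * 1024  # bytes; anything above 4 MB falls back to 64 KB

        # Create (or open) the Parameters key and write the MaxpacketSize DWORD.
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "MaxpacketSize", 0, winreg.REG_DWORD, NEW_SIZE)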

    Read the article

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    From the sample image below, I have a border in yellow for display purposes only. The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking of trying a 2x2, but that would not help with interpreting low/high versus high/low in the drawing stream; at least this way I get two black, one white from the top, or one white, two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode it (zlib) and come up with 6 bytes, as follows:

        00 20 00 40 00 80

    So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format and properly recognizes the bit depth of 1 and a color palette of 2: palette[0] is RGBA all zeros; palette[1] has RGBA of 255, 255, 255, 0. I'll eventually get into the other depth formats later; I just wanted to start with what I expect to be the easiest. Part II: any guidance on handling the other depth formats would help, especially anything special to be considered regarding the alpha channel (which I am already looking for in the palette) that might trip me up.
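    Per the PNG spec, each decompressed scanline starts with one filter-type byte, and at bit depth 1 the pixels are packed into the following bytes most-significant-bit first. The 00 bytes above are filter type "None", so each row is simply the top three bits of its data byte. A minimal sketch of that breakdown:

        # The six decompressed IDAT bytes: three scanlines of
        # (filter byte, packed pixel byte) for a 3x3 image at bit depth 1.
        raw = bytes.fromhex("002000400080")

        WIDTH, HEIGHT, BIT_DEPTH = 3, 3, 1
        stride = 1 + (WIDTH * BIT_DEPTH + 7) // 8  # filter byte + packed row data

        for y in range(HEIGHT):
            row = raw[y * stride:(y + 1) * stride]
            filter_type, data = row[0], row[1:]
            assert filter_type == 0  # filter "None": no unfiltering needed
            # bit depth 1: pixels are packed MSB-first into each data byte
            pixels = [(data[x // 8] >> (7 - x % 8)) & 1 for x in range(WIDTH)]
            print(pixels)  # palette indices -> [0,0,1] / [0,1,0] / [1,0,0]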

    Read the article

  • PHP Calculating Text to Content Ratio

    - by James
    I am using the following code to calculate the text-to-code ratio. I think it is crazy that no one can agree on how to properly calculate the result. I am looking for any suggestions or ideas to improve this code that may make it more accurate.

        <?php
        // Returns the size of the content in bytes (UTF-8 aware)
        function findKb($content){
            $count = 0;
            // Note: "chr(13)" here is a literal 7-character string, not a carriage
            // return; "\r" already covers CR, so that entry never matches anything.
            $order = array("\r\n", "\n", "\r", "chr(13)", "\t", "\0", "\x0B");
            // Each control sequence is replaced by the 2-character string "12",
            // so every one of them is counted as 2 bytes below.
            $content = str_replace($order, "12", $content);
            for ($index = 0; $index < strlen($content); $index++){
                $byte = ord($content[$index]);
                if ($byte <= 127) {
                    $count++;
                } else if ($byte >= 194 && $byte <= 223) {
                    $count = $count + 2;
                } else if ($byte >= 224 && $byte <= 239) {
                    $count = $count + 3;
                } else if ($byte >= 240 && $byte <= 244) {
                    $count = $count + 4;
                }
            }
            return $count;
        }

        // Collect size of entire code
        $filesize = findKb($content);
        // Remove anything within script tags
        $code = preg_replace("@<script[^>]*>.+</script[^>]*>@i", "", $content);
        // Remove anything within style tags
        $code = preg_replace("@<style[^>]*>.+</style[^>]*>@i", "", $code);
        // Remove all tags
        $code = strip_tags($code);
        // Collapse extra whitespace in the content
        $code = preg_replace('/\s+/', ' ', $code);
        // Find the size of the remaining text
        $codesize = findKb($code);
        // Calculate percentage
        $percent = $codesize / $filesize;
        $percentage = $percent * 100;
        echo $percentage;
        ?>

    I don't know the exact calculations that are used, so this function is just my guess. Does anyone know what the proper calculations are, or whether my functions are close enough for a good judgement?

    Read the article

  • Computer becomes unreachable on lan after some time

    - by Ashfame
    I work on my laptop and ssh into my desktop. I use a lot of key-based authentication with many servers for work, but recently I couldn't log in because ssh would pick up and try all the keys, and it stops trying before ultimately falling back to password-based login. So right now I am using this command:

        ssh -X -o PubkeyAuthentication=no [email protected]  # desktop

    The issue is that after some time the desktop just becomes unreachable from the laptop. I can't reach it by IP, and today I tried pinging it and found a weird thing: instead of 192.168.1.4, it seems to ping 192.168.1.3, which I was sure was the root cause, as it just can't reach 192.168.1.4 when it's actually trying for 192.168.1.3. Ping command output:

        ashfame@ashfame-xps:~$ ping 192.168.1.4
        PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
        From 192.168.1.3 icmp_seq=1 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=2 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=3 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=4 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=5 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=6 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=7 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=8 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=9 Destination Host Unreachable
        ^C
        --- 192.168.1.4 ping statistics ---
        10 packets transmitted, 0 received, +9 errors, 100% packet loss, time 9047ms
        pipe 3

    Also, the ping messages come out in batches and not one by one (izx's answer explains the weirdness I thought there was in the ping command). I did check the desktop: its local IP is still the same, so something is going on on my laptop. Any ideas?
    P.S. Laptop runs Ubuntu 12.04 and desktop runs Ubuntu 11.10. The laptop is connected to the router through wifi, and the desktop is connected to the router through LAN.
    Update: even after setting up static IP leases in the router settings, I ran into this issue again.

    Read the article

  • Is my gedit-latex-plugin working properly?

    - by arroy_0209
    I have installed gedit-latex-plugin (0.2 rc3) to be used with gedit (2.30.3) in Ubuntu 10.04. If I use the command gedit file.tex& in a terminal, the file is opened and everything seems to work fine, but lots of debug messages appear in the terminal, some of which are:

        2012-03-31 22:14:27,263 DEBUG resources - Initializing resource locating
        2012-03-31 22:14:27,361 DEBUG Preferences - not found
        2012-03-31 22:14:27,373 DEBUG JobManager - Created JobManager instance 147209196
        2012-03-31 22:14:27,379 DEBUG GeditLaTeXPlugin - activate
        2012-03-31 22:14:27,379 DEBUG WindowContext - init
        2012-03-31 22:14:27,444 DEBUG GeditWindowDecorator - _init_tab_decorators: initialized 0 decorators
        2012-03-31 22:14:27,511 DEBUG GeditWindowDecorator - active_tab_changed
        2012-03-31 22:14:27,511 DEBUG GeditWindowDecorator - ---------- ADJUST: None
        2012-03-31 22:14:27,513 DEBUG GeditWindowDecorator - No window-scope views for this extension
        2012-03-31 22:14:27,513 DEBUG GeditWindowDecorator - _set_selected_bottom_view: 0
        2012-03-31 22:14:27,514 DEBUG GeditWindowDecorator - _set_selected_side_view: 0
        2012-03-31 22:14:27,539 DEBUG GeditWindowDecorator - tab_added
        2012-03-31 22:14:27,952 DEBUG GeditTabDecorator - loaded
        2012-03-31 22:14:27,964 DEBUG GeditTabDecorator - _adjust_editor: URI has changed
        2012-03-31 22:14:27,965 DEBUG LaTeXCompletionHandler - init
        2012-03-31 22:14:27,966 DEBUG LanguageModelFactory - Pickled object found: /home/abcd/.gnome2/gedit/plugins/GeditLaTeXPlugin/latex.pkl
        2012-03-31 22:14:28,075 DEBUG CompletionDistributor - init
        2012-03-31 22:14:28,078 DEBUG WindowContext - Created view LaTeXOutlineView
        2012-03-31 22:14:28,078 DEBUG WindowContext - Created view IssueView
        2012-03-31 22:14:28,079 DEBUG LaTeXEditor - init(file:///home/abcd/dir1/file1.tex)
        2012-03-31 22:14:28,079 DEBUG LaTeXEditor - Parsing document...
        2012-03-31 22:14:28,080 DEBUG IssueView - init
        2012-03-31 22:14:28,082 DEBUG IssueView - init finished
        2012-03-31 22:14:28,092 INFO LaTeXEditor - LaTeXParser.parse: 0.010000
        2012-03-31 22:14:28,092 DEBUG LaTeXEditor - Parsed 1599 bytes of content
        2012-03-31 22:14:28,093 DEBUG LaTeXOutlineView - set_outline
        2012-03-31 22:14:28,093 DEBUG LaTeXOutlineView - init
        2012-03-31 22:14:28,097 DEBUG LaTeXValidator - validate
        2012-03-31 22:14:28,098 DEBUG LanguageModel - set_newcommands:
        2012-03-31 22:14:28,102 DEBUG LaTeXEditor - Parsing finished
        2012-03-31 22:14:28,105 DEBUG GeditWindowDecorator - ---------- ADJUST: .tex
        2012-03-31 22:14:28,119 DEBUG GeditWindowDecorator - _set_selected_bottom_view: 0
        2012-03-31 22:14:28,120 DEBUG GeditWindowDecorator - _set_selected_side_view: 0

    I am not sure whether the gedit-latex-plugin is working properly or is facing some problem. Why are there so many debug messages? Can anybody please suggest what I should do?

    Read the article

  • Are specific types still necessary?

    - by MKO
    One thing that occurred to me the other day: are specific types still necessary, or are they a legacy that is holding us back? What I mean is: do we really need short, int, long, bigint, etc.? I understand the reasoning: variables/objects are kept in memory, memory needs to be allocated, and therefore we need to know how big a variable can be. But really, shouldn't a modern programming language be able to handle "adaptive types"? That is, if something is only ever assigned values in the short-int range, it uses fewer bytes, and if it is suddenly assigned a very big number, memory is allocated accordingly for that particular instance. Float, real and double are a bit trickier, since the type depends on what precision you need. Strings, however, should be able to take up less memory in many instances: in .NET, where mostly ASCII is used, strings always take up double the memory because of Unicode encoding. One argument for specific types might be that they are part of the specification; for example, if a variable should not be able to be bigger than a certain value, we set it to a short int. But why not have type constraints instead? It would be much more flexible and powerful to be able to set permissible ranges and values on variables (and properties). I realize the immense problem in revamping the type architecture, since it is so tightly integrated with the underlying hardware, and things like serialization might become tricky indeed. But from a programming perspective it should be great, no?
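    For what it's worth, "adaptive" integers already exist in some languages; Python's int is one example, growing its backing storage with the magnitude of the value while the declared type never changes. A small illustration (the exact sizes are CPython implementation details):

        import sys

        for value in (0, 1, 2**30, 2**60, 2**600):
            # bit_length: how many bits the value needs; getsizeof: bytes allocated
            print(value.bit_length(), sys.getsizeof(value))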

    Read the article

  • "Misaligned partition" - Should I do repartition (how?)

    - by RndmUbuntuAmateur
    I tried to install Ubuntu 12.04 from a USB stick alongside the existing 64-bit Win7 OS, and now I'm not sure the install was completely successful: the Disk Utility tool claims that the extended partition (which contains the Ubuntu partition and swap) is "misaligned" and recommends repartitioning. What should I do, and if I should repartition, how do I do it (especially if I would like not to lose the data on the Win7 partition)? Background info: a fairly new Thinkpad laptop (UEFI BIOS, if that matters). Before the install there were already a "SYSTEM_DRV" partition, the main Windows partition and a Lenovo recovery partition (all NTFS). Now the table looks like this: SYSTEM_DRV (sda1), Windows (sda2), Extended (sda4) (which contains Linux (sda5; ext4) and Swap (sda6)) and Recovery (sda3). The Disk Utility tool gives the following message when I select the extended partition: "The partition is misaligned by 1024 bytes. This may result in very poor performance. Repartitioning is suggested." There were a couple of problems during the install, which I describe below in case they happen to be relevant. The installer claimed that it recognized the existing OSes fine, so I checked the corresponding option during the install. Next, when it asked me how to allocate the disk space, the first weird thing happened: the installer gave me a graphical slider to allocate disk space between the pre-existing Win7 OS and the new Ubuntu, but it did not tell me which partition would be for Ubuntu and which for Windows. Well, I decided to go with the setting the installer proposed (not sure if this is relevant, but I'd better mention it anyway; the partition tools I had used before were more self-explanatory). After the install (which reported no errors), GRUB/Ubuntu refused to boot. Luckily this problem was resolved quite straightforwardly with a live Ubuntu USB and Boot-Repair ("Recommended repair" worked just fine). After all this hassle I decided to check the partition table just to be sure, and the Disk Utility gives the warning message I described.

    Read the article

  • Remember me or not?

    - by taeja87
    I was told to post this on Webmasters instead of Stack Overflow. Is it safe to have a "remember me" feature? Would it be reasonably safe (knowing it won't be 100% safe) to allow users to close their browser and come back still logged in? I am not exactly sure which way I should go after reading different things about safety. I learned about session fixation and implemented protections against it. In my experience, on some sites, if "remember me" is checked then only your username/email is remembered and you are required to re-enter your password; other sites let you come and go as much as you want without being logged out after the browser has closed. If it is safe, what is the current best way of implementing "remember me"/"stay logged in"?
    http://stackoverflow.com/questions/3531377/best-practise-for-remember-me-feature
    http://stackoverflow.com/questions/5087969/what-is-the-code-for-stay-logged-in-or-remember-me-while-user-login-in-php
    http://bytes.com/topic/php/answers/881197-stay-logged-remember-me-php-sessions-cookies
    http://security.stackexchange.com/questions/41/good-session-practices
    Also: the site I am working on uses email & password login.
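    The pattern those threads generally converge on is a long random token in a persistent cookie, with only a hash of the token stored server-side, so a leaked database doesn't yield usable cookies. A rough sketch of the token handling (storage and cookie wiring left out):

        import hashlib
        import secrets

        def issue_remember_token():
            """Return (cookie_value, db_value) for a new remember-me token."""
            token = secrets.token_urlsafe(32)                    # goes in the cookie
            stored = hashlib.sha256(token.encode()).hexdigest()  # goes in the DB
            return token, stored

        def check_remember_token(cookie_token, stored_hash):
            candidate = hashlib.sha256(cookie_token.encode()).hexdigest()
            return secrets.compare_digest(candidate, stored_hash)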

    Read the article

  • Bug: unable to handle kernel NULL pointer dereference at

    - by maria
    I recently installed a new system on my disk, Ubuntu 12.04. Installation proceeded without problems; I started installing additional software and copying data from other discs. I have already had this bug report twice. It was quite long, and I have no idea how to access the log file (which is probably saved somewhere); since I had to switch the computer off using the button, nothing else was possible. Here is just a small part of it (what I noted down on paper):

        could not write bytes: Broken pipe
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher
        saned disabled: edit ....

    and then:

        BUG: unable to handle kernel NULL pointer dereference at 0000009c

    I've run the memory test in GRUB and everything is fine. The first time it occurred I was using rsync, the second time I was trying to install texlive. Should I install the whole system once again? Or could it be a hardware problem? Or something else? If there are any hardware details which may be relevant, please ask; I have no idea what is happening and don't know what kind of information could be useful. Thanks
    P.S. dmesg output:

    Read the article

  • http request terminating early

    - by spiderplant0
    I noticed that on some of my sites, images were occasionally not getting downloaded fully. After a bit of investigation it appears that it is not restricted to images: .css, .js, etc. are also occasionally terminating early. The faults appear to be random. When I use the debugging proxy Fiddler2, it reports that fewer bytes were received than were requested; Firebug reports "Image corrupt or truncated". Obviously this is mainly a concern between me and my hosting company, but despite many emails they have not been able to get to the bottom of it. Transferring to another hosting company is an option, but really a last resort. Has anyone seen this kind of thing before, or can anyone suggest what might be causing it, or an Apache setting or something that I can ask them to check? Will Apache log this kind of error? They haven't been able to provide me with any logs, but if I know exactly where such things are logged, maybe I can prompt them into action.
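    One way to gather evidence before going back to the host is to compare the Content-Length header against the bytes actually received over many requests; truncations then show up as hard numbers rather than occasional broken images. A rough sketch (the URL is a stand-in for an affected asset):

        import requests

        URL = "http://www.example.com/images/logo.png"  # substitute a real asset

        for attempt in range(50):
            r = requests.get(URL, stream=True,
                             headers={"Accept-Encoding": "identity"})  # no gzip
            expected = int(r.headers.get("Content-Length", 0))
            body = r.raw.read()  # raw socket bytes, no decoding
            if expected and len(body) != expected:
                print(f"attempt {attempt}: got {len(body)} of {expected} bytes")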

    Read the article

  • Storing large array of tiles, but allowing easy access to data

    - by Cyral
    I've been thinking about this for a while. I have a 2D tile-based platformer in XNA with a large array of tile data, and I've been running into memory problems with large maps (I will add chunks soon!). Currently, each tile contains an Item along with other properties like how it's rotated, whether it's in the foreground or background, etc. An Item is static and has properties like the name, tooltip, type of item, how much light it emits, how it collides with the player, etc. Examples:

        public class Item
        {
            public static List<Item> Items;
            public Collision blockCollisionType;
            public string nameOfItem;
            public bool someOtherVariable, etc, etc;

            public static Item Air;
            public static Item Stone;
            public static Item Dirt;

            static Item()
            {
                Items = new List<Item>()
                {
                    (Stone = new Item()
                    {
                        nameOfItem = "Stone",
                        blockCollisionType = Collision.Solid,
                    }),
                    (Air = new Item()
                    {
                        nameOfItem = "Air",
                        blockCollisionType = Collision.Passable,
                    }),
                };
            }
        }

    That would be an Item. The array of tiles contains a Tile for each point:

        public class Tile
        {
            public Item item; // What type it is
            public bool onBackground;
            public int someOtherVariables, etc, etc;
        }

    Now, most people would probably use an enum or some form of ID to identify blocks, but my system is really nice for finding out about an item. I can simply do tiles[x,y].item.nameOfItem to get the name, for example. Then I realized my Item property of the tile is over 1000 bytes! Wow! What I'm looking for is a way to use an ID (int or byte, depending on how many items there are) instead of an Item, but still have a method for retrieving data about the type of item a tile contains.
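    One common shape for this, sketched here in Python rather than XNA/C# (the idea carries over directly): keep one shared registry of Item definitions and store only a small integer index per tile, so the per-tile cost drops to a couple of bytes while lookups stay a single indirection:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Item:
            name: str
            solid: bool

        # One shared registry; a tile stores only an index into it.
        ITEMS = [Item("Air", False), Item("Stone", True), Item("Dirt", True)]

        WIDTH, HEIGHT = 1024, 512
        tiles = [[0] * HEIGHT for _ in range(WIDTH)]  # all air to start
        tiles[10][20] = 1  # stone

        def item_at(x, y):
            return ITEMS[tiles[x][y]]

        print(item_at(10, 20).name)  # still one expression to reach the data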

    Read the article

  • How to quickly search through a very large list of strings / records on a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X, and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size. To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly. I wonder if there is an efficient way of doing this using appropriate index structures and/or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue to load the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas?
    EDIT: I have run a first experiment. I split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I ended up with about 50000 files in about 40000 directories. I then ran Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes; this is totally acceptable for me because it is a static data set that I want to query. The next step is to integrate Lucene into the main program and use the hits returned by Lucene to load the relevant records into main memory.
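    Before building an index, it can be worth measuring the brute-force baseline the question implies: scan in memory and stop at the first N hits. 2 million x 500 bytes is about 1 GB, and a generator with early exit is often fast enough to tell whether Lucene is needed at all. A sketch, assuming the field-X strings fit in memory:

        from itertools import islice

        def first_matches(values, pattern, n=100):
            """Return at most n strings containing pattern (case-insensitive)."""
            p = pattern.lower()
            return list(islice((v for v in values if p in v.lower()), n))

        values = ["heavy bytes", "light bits", "more heavy bytes"]  # stand-in data
        print(first_matches(values, "heavy"))  # -> ['heavy bytes', 'more heavy bytes']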

    Read the article

  • Static vs. dynamic memory allocation - lots of constant objects, only small part of them used at runtime

    - by k29
    Here are two options:

    Option 1:

        enum QuizCategory {
            CATEGORY_1(new MyCollection<Question>()
                    .add(Question.QUESTION_A)
                    .add(Question.QUESTION_B)
                    .add...),
            CATEGORY_2(new MyCollection<Question>()
                    .add(Question.QUESTION_B)
                    .add(Question.QUESTION_C)
                    .add...),
            ...;

            public MyCollection<Question> collection;

            private QuizCategory(MyCollection<Question> collection) {
                this.collection = collection;
            }

            public Question getRandom() {
                return collection.getRandomQuestion();
            }
        }

    Option 2:

        enum QuizCategory2 {
            CATEGORY_1 {
                @Override
                protected MyCollection<Question> populateWithQuestions() {
                    return new MyCollection<Question>()
                            .add(Question.QUESTION_A)
                            .add(Question.QUESTION_B)
                            .add...;
                }
            },
            CATEGORY_2 {
                @Override
                protected MyCollection<Question> populateWithQuestions() {
                    return new MyCollection<Question>()
                            .add(Question.QUESTION_B)
                            .add(Question.QUESTION_C)
                            .add...;
                }
            };

            public Question getRandom() {
                MyCollection<Question> collection = populateWithQuestions();
                return collection.getRandomQuestion();
            }

            protected abstract MyCollection<Question> populateWithQuestions();
        }

    There will be around 1000 categories, each containing 10-300 questions (100 on average). At runtime, typically only 10 categories and 30 questions will be used. Each question is itself an enum constant (with its fields and methods). I'm trying to decide between those two options in a mobile application context. I haven't done any measurements, since I have yet to write the questions, and would like to gather more information before committing to one option or the other. As far as I understand: (a) Option 1 will perform better, since there will be no need to populate the collection and then garbage-collect the questions; (b) Option 1 will require extra memory: 1000 categories x 100 questions x 4 bytes for each reference = 400 KB, which is not significant. So I'm leaning towards Option 1, but just wondered: am I correct in my assumptions, and am I not missing something important? Perhaps someone has faced a similar dilemma? Or perhaps it doesn't actually matter that much?

    Read the article

  • Growing your VirtualBox Virtual Disk

    - by Fat Bloke
    Don't you just hate it when this happens? Fortunately, if you're running inside VirtualBox, you can resize your virtual disk and magically give your guest a bigger disk very easily. There are 2 steps to doing this...

    1. Resize the virtual disk

    Use the VBoxManage command line tool to extend the size of the virtual disk, specifying the path to the disk and the size in MB:

        VBoxManage modifyhd <uuid>|<filename>
                   [--type normal|writethrough|immutable|shareable|readonly|multiattach]
                   [--autoreset on|off]
                   [--compact]
                   [--resize <megabytes>|--resizebyte <bytes>]

    If you booted up your guest at this point, the extra space would be seen as an unformatted area on the disk. So we now need to tell the guest about the extra space available.

    2. Extend the guest's partition to use the extra space

    How you do this step depends on your guest OS type and the tools you have available. Linux guests often include the excellent gparted partition editor, whereas Windows 7 and 8 provide the Computer Management tool, which can resize partitions. Unfortunately, my Windows XP VM has no such tool. But I do have a couple of other options: most Linux installable .isos include the aforementioned gparted tool, so I could simply attach, say, an Ubuntu .iso as a virtual CD/DVD in my Windows XP VM and boot off that, then use gparted to extend the Windows XP partition before finally rebooting. But I took another route and attached my resized virtual disk to a Windows Server 2012 VM I had lying around. Then I used the Computer Management tool in Windows Server 2012 to extend the partition of the Windows XP disk, before shutting down, unplugging the disk and reattaching it to my Windows XP VM. (Note that if your VMs use different disk controllers, Windows will check the disks on booting.) When I finally boot up my Windows XP guest, I see the available disk space and all is well. At least until the next time. - FB
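    For example, assuming a disk file named "WinXP.vdi" (the name is hypothetical), the first step might look like: VBoxManage modifyhd "WinXP.vdi" --resize 20480, where 20480 MB gives the guest a 20 GB disk, per the --resize <megabytes> option in the synopsis above.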

    Read the article

  • What does the Sys_PageIn() function do in Quake?

    - by Philip
    I've noticed that in the initialization process of the original Quake the following function is called:

        volatile int sys_checksum;

        // **lots of code**

        void Sys_PageIn (void *ptr, int size)
        {
            byte    *x;
            int     j, m, n;

        // touch all memory to make sure its there. The 16-page skip is to
        // keep Win 95 from thinking we're trying to page ourselves in (we are
        // doing that, of course, but there's no reason we shouldn't)
            x = (byte *)ptr;

            for (n=0 ; n<4 ; n++)
            {
                for (m=0 ; m<(size - 16 * 0x1000) ; m += 4)
                {
                    sys_checksum += *(int *)&x[m];
                    sys_checksum += *(int *)&x[m + 16 * 0x10000];
                }
            }
        }

    I think I'm just not familiar enough with paging to understand this function. The void *ptr passed to the function is a recently malloc()'d piece of memory that is size bytes big. This is the whole function; j is an unreferenced variable. My best guess is that the volatile int sys_checksum forces the system to physically read all of the space that was just malloc()'d, perhaps to ensure that these spaces exist in virtual memory. Is this right? And why would someone do this? Is it for some antiquated Win95 reason?

    Read the article

  • Oracle NoSQL Database: An Overview

    - by zhangqm
    Oracle recently announced Oracle NoSQL Database. It comes in two editions, a Community Edition and an Enterprise Edition; the current release is NoSQL Database 11g R2 (11gR2.1.2.123). More information and downloads are available at:
    Oracle NoSQL Database OTN portal (includes download facility)
    Oracle NoSQL Database OTN documentation
    Oracle NoSQL Database license information
    About Oracle NoSQL Database: it is a distributed key-value store designed to manage terabytes of data with high availability. In Oracle NoSQL Database, each record is a key-value pair: the key is composed of string components, while the value is an uninterpreted array of bytes. Records are spread across storage nodes by hashing the primary key, which distributes both the data and the load across the cluster.
    Applications access the store through a Java API via the Oracle NoSQL Database Driver, which routes each key-value operation to the appropriate node. The API supports the usual Create, Read, Update and Delete (CRUD) operations with configurable durability and consistency policies. Administration is available both through a web console and a command line tool.
    Oracle Berkeley DB Java Edition: Oracle NoSQL Database uses Oracle Berkeley DB Java Edition as its underlying storage engine.
    Product characteristics highlighted include sub-millisecond latency and integration with other Oracle products, such as the Oracle Big Data Appliance.
    Oracle NoSQL Database is positioned for cloud-scale deployments (TBs to PBs of data) and can be combined with ETL machinery such as MapReduce/Hadoop in an acquire-organize-analyze pipeline.
    Typical use cases for Oracle NoSQL Database include:
    • Large schema-less data repositories
    • Web applications (click-through capture)
    • Sensor/statistics/network capture
    • Messaging (MMS, SMS, routing)
    Oracle NoSQL Database (in its Community Edition) also ships as part of the Oracle Big Data Appliance.

    Read the article
