Search Results

Search found 9147 results on 366 pages for 'big smile'.

Page 14/366 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Big database log (ldf) size in SQL Server 2008

    - by t.kehl
    I have a database running under Microsoft SQL Server 2008, and I have noticed that its log file (the ldf file) has grown very large. The database file (mdf) has a size of 630MB, but the log file has a size of 12GB. I now wonder what the reason for this can be. Is there a tool that lets me look into the log and see what is responsible for this growth? And what can I do to keep the log from growing this large?
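
    For what it's worth, a minimal T-SQL sketch of the usual diagnosis (the database name and logical log file name below are placeholders; the most common cause is a database in the FULL recovery model with no transaction log backups):

        -- See each database's recovery model and why its log can't be truncated.
        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM sys.databases;

        -- Option A: stay in FULL recovery and schedule regular log backups,
        -- which allow log space to be reused.
        BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase.trn';

        -- Option B: if point-in-time recovery is not required, switch to SIMPLE.
        ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

        -- Once the log has been truncated, reclaim the disk space (target in MB).
        DBCC SHRINKFILE (MyDatabase_log, 1024);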

    Read the article

  • big speed difference on a network link with and without VPN tunnel

    - by xirtyllo
    Scenario: we have a network link between two offices. The link is provided by a third-party company (the building between our two offices) as a VLAN across their network, but to us it is completely transparent, as if we had a single Ethernet cable running from one location to the other (no configuration is required on our side). We have one router at each end of the link, with 3 VPN tunnels between the two.

    The test: when I test the speed of the link with the routers in place, with one laptop connected directly to the router at each end, I consistently get ~30-35Mbps. But if I take the routers out and connect the laptops directly to the Ethernet cable at each end, I consistently get ~85-88Mbps. That is quite a big performance hit, and I tend to think the VPN tunnels are responsible for the slowdown. Is it normal for this configuration (two routers with three VPN tunnels between them) to cost so much bandwidth?

    More info: the encryption algorithm used for the VPN tunnels is AES128. The routers are a Zyxel USG200 and a Zyxel USG1000, and their CPU, memory, and storage use are well within normal limits. The nominal bandwidth of the link is 100Mbps. Unfortunately (or maybe fortunately) I cannot directly test different router configurations, as I'm not the person in charge of them.

    Read the article

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. The problem: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from an archive, but I don't currently have an external index of its files. So, is there an alternative for Linux that lets me build uncompressed archive files, preserving the file attributes AND offering a fast listing? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Any trick to solve this problem is welcome, but keeping each archive as a single file is a requirement, so no rsync or similar. Thanks in advance!

    EDIT: I'm not compressing the archives, and even so I find tar too slow. To be precise about "slow": listing an archive's contents should take time linear in the number of files inside it, but with a very small constant (e.g. if a list of all the files is stored at the head of the archive, it could be very fast); extracting a target file or directory should (filesystem permitting) take time linear in the target's size (e.g. if I'm extracting a 2MB PDF file from a 40GB directory, I'd really like it to take less than a few minutes, if not seconds). Of course, this is just my idea, not a requirement. I guess such performance would be achievable if the archive contained an index of all the files with their offsets, and the index were well organized (e.g. a tree structure).
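
    One possible trick, sketched in Python under the assumption that the archives stay as uncompressed tar files: build an external index of member offsets once, then seek straight to a member instead of rescanning the whole archive (offset_data is an internal but long-stable attribute of CPython's tarfile module):

        import json
        import tarfile

        def build_index(tar_path, index_path):
            # One streaming pass over the archive: record each member's
            # data offset and size in a sidecar JSON file.
            index = {}
            with tarfile.open(tar_path) as tf:
                for member in tf:
                    index[member.name] = (member.offset_data, member.size)
            with open(index_path, "w") as f:
                json.dump(index, f)

        def extract_one(tar_path, index_path, name, out_path):
            # Seek directly to the member: cost is proportional to its size.
            with open(index_path) as f:
                offset, size = json.load(f)[name]
            with open(tar_path, "rb") as src, open(out_path, "wb") as dst:
                src.seek(offset)
                remaining = size
                while remaining > 0:
                    chunk = src.read(min(remaining, 1 << 20))
                    if not chunk:
                        break
                    dst.write(chunk)
                    remaining -= len(chunk)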

    Read the article

  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache logging setup because it was producing very BIG files. I now use cronolog to split my logs into, for example, log/httpd/2012/11/access_2012.11.30.log, using the pattern %Y/%m/access_%Y.%m.%d.log. I now want to split my old 42GB file into the same structure, but I really don't know how to do that efficiently. I tried some simple commands with cat, egrep, and awk, but I don't know how to handle all of it in a more powerful script. Here is what the log looks like:

        x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET...
        x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET...
        [...]
        x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET...

    So for each line I need to extract the year (%Y, e.g. 2012), month (%m, e.g. 11) and day (%d), and append the entire line to %Y/%m/access_%Y.%m.%d.log. Can someone give me clues to get that working? Thanks a lot for your interest.
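
    A rough Python sketch of one way to do it, assuming the standard access log format (timestamp in the fourth whitespace-separated field) and a roughly chronological file, so only one output file needs to be open at a time:

        #!/usr/bin/env python3
        # Usage: python3 split_log.py big_access.log
        # Run from the target directory (e.g. log/httpd/) so the
        # %Y/%m/access_%Y.%m.%d.log layout is created in place.
        import os
        import sys

        MONTHS = {m: "%02d" % (i + 1) for i, m in enumerate(
            "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split())}

        def split_log(path):
            out, out_name = None, None
            with open(path, errors="replace") as src:
                for line in src:
                    stamp = line.split()[3]          # "[08/Apr/2011:14:43:09"
                    day, mon, rest = stamp[1:].split("/")
                    year, month = rest[:4], MONTHS[mon]
                    name = "%s/%s/access_%s.%s.%s.log" % (
                        year, month, year, month, day)
                    if name != out_name:             # new day: switch files
                        if out:
                            out.close()
                        os.makedirs(os.path.dirname(name), exist_ok=True)
                        out = open(name, "a")
                        out_name = name
                    out.write(line)
            if out:
                out.close()

        if __name__ == "__main__":
            split_log(sys.argv[1])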

    Read the article

  • big cpu load on vmware server / linux

    - by dezfafara
    Hi, I am currently using VMware Server 2.x hosting 4 virtual machines on a Linux system. Today I saw an enormous load average on my physical server. This is the 'top' output on the host, showing my 4 virtual guests:

        top - 11:02:02 up 194 days, 23:09, 5 users, load average: 18.78, 12.05, 13.55
        Tasks: 113 total, 4 running, 109 sleeping, 0 stopped, 0 zombie
        Cpu0 : 71.6%us, 19.0%sy, 0.0%ni,  8.8%id, 0.0%wa, 0.3%hi, 0.3%si, 0.0%st
        Cpu1 : 74.3%us, 10.4%sy, 0.0%ni, 15.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 72.5%us, 17.6%sy, 0.0%ni,  9.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 79.5%us,  4.6%sy, 0.0%ni, 16.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  8178884k total, 8129980k used, 48904k free, 134904k buffers
        Swap: 10490436k total, 148k used, 10490288k free, 6129728k cached

        PID   USER  PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+      COMMAND
        7312  root   6 -10  1149m  921m  559m  R    97  11.5  107947:09  vmware-vmx
        6995  root   6 -10   779m  687m  317m  R    92   8.6  107374:31  vmware-vmx
        6693  root   6 -10   880m  659m  409m  S    85   8.3   76947:33  vmware-vmx
        12937 root   6 -10   960m  719m  523m  S    75   9.0   67219:49  vmware-vmx

    The four vmware-vmx lines show the CPU usage of my 4 virtual guests. The guests run Linux, and the corresponding processes normally sit at 5-15% CPU. I don't understand why I have had this big problem for the past few days. This is the 'top' output on a virtual guest whose host-side process is at ~95% CPU:

        top - 11:23:15 up 194 days, 23:13, 4 users, load average: 0.25, 0.47, 0.59
        Tasks: 92 total, 2 running, 90 sleeping, 0 stopped, 0 zombie
        Cpu(s): 1.4%us, 7.7%sy, 0.0%ni, 90.5%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  382296k total, 369732k used, 12564k free, 145156k buffers
        Swap: 979924k total, 13956k used, 965968k free, 86988k cached

        PID   USER  PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+     COMMAND
        3691  root  20   0  23948  1148  960  S  13.0   0.3  15339:23  vmware-guestd
        3840  root  20   0  19880   584  512  S   7.7   0.2   1729:17  hald-addon-stor

    From inside, this virtual guest looks fine... If anyone has any ideas, thanks.

    Read the article

  • Window too big to fit the screen!

    - by syockit
    I'm using Windows 7 on an 8.9" monitor with a 1280x768 screen resolution. Using the might of arithmetic, I can determine that my DPI (actually PPI) should be 167. Win7 is really helpful in that it doesn't have to restart to apply new DPI settings, unlike its predecessors (though I'd rather it applied them straight away). The problem with small monitors in Windows is that when you come across a window too big to fit the screen, you can't move the title bar above the top edge. In the X window managers I used in the past, you could alt-drag a window anywhere you wanted, but in Windows, even if you press Alt-Space and select Move, the window is automatically pushed back until the title bar is visible. I'm looking for a solution that either:

    - allows me to move a window freely without regard to title bar visibility, or
    - attaches a scrollbar to an existing window, or
    - (EDIT) creates virtual desktops that allow me to span a window across 2 desktops, or
    - (EDIT 2) allows me to set a larger virtual resolution, then pan and scan.

    EDIT 3: I found some programs that might do some of the above: 1) AltDrag allows me to drag and resize using Alt and the left/right mouse buttons. Neat! Best solution so far. 2) GiMeSpace Desktop Extender is supposed to allow me to scroll the desktop. It didn't work. The other, newer version, GiMeSpace Ultimate Taskbar, worked, but it destroys my Superbar, replacing it with its map.

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (e.g. 30MB) are silently discarded. On the server side, it looks as if the attached file was received with a size of 0 bytes. On the client side, everything looks like a successful upload (it takes the expected long time to upload, and the browser gives no error message). On the server, nothing is logged to the error log, and an entry is logged to the access log as if everything were OK (a POST request and a 200 OK response). These uploads are posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following for the relevant file:

        [file5] => Array
        (
            [name] => MOV023.3gp
            [type] => video/3gpp
            [tmp_name] => /tmp/phpgOdvYQ
            [error] => 0
            [size] => 0
        )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file were empty). My PHP script runs fine and receives all the rest of the data except these files; move_uploaded_file succeeds on them and actually copies them as 0-byte files. I've already raised the PHP directives upload_max_filesize to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because time spent transferring data doesn't count against it; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary, since it covers the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, applied before PHP, that could be causing these files to be discarded even before PHP runs? Some limit on size or upload time? I've read about a default 300-second timeout, but that should apply to the time the connection is idle, not the time spent actually transferring data, right? Needless to say, uploads under exactly identical conditions (same file format, client, and everything) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
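
    A hedged checklist of server-side limits that act before PHP ever sees the request body (the values below are only examples, and whether each module is even loaded is an assumption):

        # Apache core: hard cap on the request body, 0 = unlimited.
        # Exceeding it normally produces a 413, but it is worth ruling out.
        LimitRequestBody 0

        # mod_security, if loaded, caps the request body it will buffer
        # (SecRequestBodyLimit); oversized uploads can then reach PHP
        # empty, a known source of [size] => 0 together with [error] => 0.
        SecRequestBodyLimit 52428800

        # Behind mod_ssl, SSLRenegBufferSize limits how much POST body
        # Apache buffers during an SSL renegotiation (default 128K),
        # another known trouble spot for large uploads.
        SSLRenegBufferSize 10486000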

    Read the article

  • Converting float values from big endian to little endian

    - by Bobby
    Is it possible to convert floats from big to little endian? I have a value from a PowerPC (big-endian) platform that I am sending via TCP to a Windows process (little-endian). This value is a float, but when I memcpy the value into a Win32 float type and then call _byteswap_ulong on that value, I always get 0.0000. What am I doing wrong?
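
    For reference, a minimal C sketch of a safe 32-bit float byte swap; the usual pitfall is converting the float's value to an integer (which truncates) instead of reinterpreting its bytes, which is what the memcpy round trip below does:

        #include <stdint.h>
        #include <string.h>

        /* Swap a float's byte order by reinterpreting its bits as a
         * 32-bit integer, swapping, and reinterpreting back. */
        static float swap_float(float in)
        {
            uint32_t bits;
            float out;
            memcpy(&bits, &in, sizeof bits);   /* bit copy, no value conversion */
            bits = ((bits & 0x000000FFu) << 24) |
                   ((bits & 0x0000FF00u) <<  8) |
                   ((bits & 0x00FF0000u) >>  8) |
                   ((bits & 0xFF000000u) >> 24);
            memcpy(&out, &bits, sizeof out);
            return out;
        }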

    Read the article

  • Big and Little endian question

    - by Bobby
    I have the following code:

        // Incrementer
        datastores.cmtDatastores.u32Region[0] += 1;

        // Decrementer
        datastores.cmtDatastores.u32Region[1] =
            (datastores.cmtDatastores.u32Region[1] == 0) ? 10
            : datastores.cmtDatastores.u32Region[1] - 1;

        // Toggler
        datastores.cmtDatastores.u32Region[2] =
            (datastores.cmtDatastores.u32Region[2] == 0x0000) ? 0xFFFF : 0x0000;

    The u32Region array is an unsigned int array that is part of a struct. Later in the code I convert this array to big-endian format:

        unsigned long *swapL = (unsigned long*)&datastores.cmtDatastores.u32Region[50];
        for (int i = 0; i < 50; i++) {
            swapL[i] = _byteswap_ulong(swapL[i]);
        }

    This entire snippet is part of a loop that repeats indefinitely. It is a contrived program that increments one element, decrements another, and toggles a third. The array is then sent via TCP to another machine that unpacks the data. The first iteration works fine. After that, since the data is now in big-endian format, the "increment", "decrement", and "toggle" operations produce incorrect values. Obviously, if datastores.cmtDatastores.u32Region[0] += 1; results in 1 on the first iteration, it should be 2 on the second, but it's not: I am adding the number 1 (little-endian) to the number in datastores.cmtDatastores.u32Region[0] (big-endian). I guess I have to swap back to little-endian at the start of every loop, but it seems there should be an easier way to do this. Any thoughts? Thanks, Bobby
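
    One common pattern, sketched below: keep u32Region in host (little-endian) order for all arithmetic, and swap into a separate buffer only at the moment of sending, so the working copy is never left byte-swapped (the buffer size and the commented send call are illustrative):

        unsigned long sendBuf[50];

        /* Working data stays in host order; only the wire copy is swapped. */
        for (int i = 0; i < 50; i++)
            sendBuf[i] = _byteswap_ulong(datastores.cmtDatastores.u32Region[i]);

        /* send(sock, (const char*)sendBuf, sizeof sendBuf, 0); */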

    Read the article

  • Big problem with Fluent NHibernate, C# and MySQL: need to search in a BLOB

    - by VinnyG
    I've made a big mistake, and now I have to find a solution. This was my first project working with Fluent NHibernate, and I mapped an object this way:

        public PosteCandidateMap()
        {
            Id(x => x.Id);
            Map(x => x.Candidate);
            Map(x => x.Status);
            Map(x => x.Poste);
            Map(x => x.MatchPossibility);
            Map(x => x.ModificationDate);
        }

    So the whole Poste object is stored in the database, when all I needed was the PosteId. Now I have to find all Candidates for one Poste, so my repository does:

        return GetAll().Where(x => x.Poste.Id == id).ToList();

    But this is very slow, since it loads all the items; we now have more than 1500 rows in the table (at first the project was not supposed to be that big, and the paycheck isn't big either). Now I'm trying to do this with criteria or LINQ, but it isn't working, since my Poste is stored in a BLOB. Is there any way I can change this easily? Thanks a lot for the help!
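
    A sketch of the usual mapping, assuming Poste (and Candidate) are mapped entities of their own: References() makes the column a plain foreign key, so the filter can run in SQL instead of in memory (existing blob data would still need a one-off migration, and session.Query assumes the NHibernate LINQ provider):

        public PosteCandidateMap()
        {
            Id(x => x.Id);
            References(x => x.Candidate);   // many-to-one: stores CandidateId
            Map(x => x.Status);
            References(x => x.Poste);       // many-to-one: stores PosteId
            Map(x => x.MatchPossibility);
            Map(x => x.ModificationDate);
        }

        // Repository: now translates to a WHERE clause instead of loading everything.
        return session.Query<PosteCandidate>()
                      .Where(x => x.Poste.Id == id)
                      .ToList();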

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL-termination role. This architecture offloads all of the SSL processing to the F5, and the F5 forwards interactive requests/responses to the SharePoint front-end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it:

    1. Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting.
    2. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint.

    We've got part 1 working fine. The main problem with part 2 is that in the response rewrite, because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:"; some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript, and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is:

        http:\u002f\u002fservername.company.com\u002f

    and should be changed to:

        https:\u002f\u002fservername.company.com\u002f

    Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter translates the "\u002f" string to "/" before it does anything else. We've tried various combinations of the Tcl escaping methods we know about (mainly double quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas, or any pointers to where we can effectively self-educate about this? Thanks very much in advance.
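
    A minimal Tcl sketch of one approach (an assumption on our part, not a verified fix): braces suppress backslash substitution in Tcl, so a braced string map pattern keeps \u002f as six literal characters instead of collapsing it to a slash. The event wiring is illustrative and assumes the response body has been gathered with HTTP::collect:

        when HTTP_RESPONSE_DATA {
            # Braced words undergo no backslash substitution, so these
            # patterns match the literal characters \u002f in the HTML.
            set body [string map {http:\u002f\u002f https:\u002f\u002f} [HTTP::payload]]
            HTTP::payload replace 0 [HTTP::payload length] $body
        }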

    Read the article

  • web2py or grok (zope) on a big portal

    - by Robert
    Hi, I am planning a big project (1,000,000 users, approximately 500 requests per second at peak times). For performance I'm going to use a non-relational DBMS (each request could cost a lot of instructions in a relational DBMS like MySQL), so I can't use the DAL. My question is: how does web2py cope with big traffic, and does it handle requests concurrently? I'm considering web2py or Grok (Zope). How does the ZODB (Zope Object Database) work with a lot of data? Is there some comparison with object-relational PostgreSQL? Please advise.

    Read the article

  • What's the next big thing after LINQ?

    - by Leniel Macaferi
    I started using LINQ (Language Integrated Query) when it was still in beta, more specifically the Microsoft .NET LINQ Preview (May 2006). Almost 4 years have passed, and here we are using LINQ in a lot of projects for the most diverse tasks. I even wrote my final college project based on LINQ. You can see how much I like it. LINQ and, more recently, PLINQ (Parallel LINQ) give our jobs a great boost when it comes to more programming power and fewer lines of code, leading us to more expressive and readable code. I keep wondering what the next big language improvement for C# after LINQ could be. I know there are some promising language features coming, such as Code Contracts, but nothing with the impact that LINQ had. What do you think could be the next big thing?

    Read the article

  • how big should your controllers be in asp.net-mvc

    - by ooo
    I see the new feature of areas in ASP.NET MVC 2. It got me thinking: why would I need this? I did some reading on the use cases, and it came down to a specific question for me: how big, and how broad in scope, should my controllers be? Should I try to have many little controllers? One big controller? How do people determine the sweet spot for the number of controllers? I think mine are maybe too large (which is what had me questioning areas in the first place, as maybe my controller name should really be an area containing a number of smaller controllers).

    Read the article

  • New to Rails: a question about routing big URLs

    - by Gautam
    Hi, I have just started learning Ruby on Rails, and I have a question about routing. The default route in Rails is :controller/:action/:id. It works really well for something like example.com/publisher/author/book_name. Could you tell me how to work with something much bigger, like this site: http://www.telegraph.co.uk/sport/football/leagues/premierleague/chelsea/? Could you help me understand the various controllers, actions, and ids for the above URL, and how to write the controllers and models to achieve this? Could you suggest some good tutorials on dealing with big URLs like these? Looking forward to your help. Thanks in advance, Gautam
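
    A rough sketch in Rails 2-era routing (the controller, model, and segment names are made up for illustration): deep, readable paths like the Telegraph's come from explicit custom routes with named segments, not from the :controller/:action/:id default:

        # config/routes.rb
        ActionController::Routing::Routes.draw do |map|
          # Matches /sport/football/leagues/premierleague/chelsea
          map.team ':section/:subsection/leagues/:league/:team',
                   :controller => 'teams', :action => 'show'
        end

        # app/controllers/teams_controller.rb
        class TeamsController < ApplicationController
          def show
            # params[:league] => "premierleague", params[:team] => "chelsea"
            @team = Team.find_by_slug(params[:team])   # assumes a slug column
          end
        end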

    Read the article

  • Perl "Day too big" - root cause

    - by azp74
    I have been helping someone debug some code where the error message was "Day too big". I know that this springs from localtime and the Y2038 bug (most Google results are people dealing with cookies expiring well into the future). We appear to have 'fixed' the problem by using time to get the current date. However, given that none of our original dates should have hit the 2038 issue, I'm sceptical that we've actually fixed the problem... Are there other situations, that anyone knows of, where one would hit "Day too big"?
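
    For the record, a minimal Perl sketch of the classic trigger (an assumption about the code path, since the error text comes from Time::Local's range check): on a build with a 32-bit time_t, any date past January 2038 makes timegm/timelocal croak:

        use strict;
        use warnings;
        use Time::Local;

        # Croaks "Day too big - ..." on a perl with a 32-bit time_t,
        # e.g. when a cookie expiry is parsed into a far-future date.
        my $t = timegm(0, 0, 0, 1, 0, 2040);   # 1 Jan 2040 00:00:00 UTC
        print "$t\n";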

    Read the article

  • Hi, problem maintaining big JavaScript code

    - by Totty
    I have more than 1000 lines in a big jQuery plugin; it is really one big class that includes some other classes, but they have to stay in the same file. The actual problem is that I have a gallery with a lot of features; it is dynamic, with smart AJAX data loading, so it requires a lot of classes to use it properly and to cache the data. I include a piece of the code below; if you have another way to simplify it, please share.

        (function($){
            var TottysGallery = function(element, options, data){
                var Core = new function(){...};
                var Core2 = new function(){...};
                var Core3 = new function(){...};
                var Core = function(){...};
            };
        };
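
    One possible restructuring, sketched under the assumption that the inner classes only need a reference back to the plugin instance: hang them off the constructor as named sub-modules, so each stays small and testable while everything remains in one file:

        (function ($) {
            var TottysGallery = function (element, options, data) {
                this.element = element;
                this.loader  = new TottysGallery.Loader(this);  // AJAX + caching
                this.view    = new TottysGallery.View(this);    // rendering
            };

            // Each former "Core" class becomes its own small, named unit.
            TottysGallery.Loader = function (gallery) { /* ... */ };
            TottysGallery.View   = function (gallery) { /* ... */ };

            $.fn.tottysGallery = function (options, data) {
                return this.each(function () {
                    new TottysGallery(this, options, data);
                });
            };
        }(jQuery));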

    Read the article

  • How to manage a big LINQ DataContext?

    - by Rev
    Hi. The major problem in .NET programs is how to manage memory for the best performance. Microsoft addressed this with the garbage collector in .NET, so we mostly don't need to manage memory ourselves. But when you develop a big project (a business app), you create a great many tables, so if you use LINQ to SQL, you end up building a DataContext that includes a hundred or more tables. That becomes a problem: an object created from that DataContext takes a big amount of memory. We also can't divide the DataContext into several DataContexts, because of the relationships between tables. So, how should one manage the DataContext and its memory?
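
    A minimal C# sketch of the common recommendation (assuming LINQ to SQL; the context and table names are placeholders): treat the DataContext as a cheap, short-lived unit of work rather than one long-lived shared instance:

        // One DataContext per logical operation, disposed when done.
        using (var db = new MyBigDataContext(connectionString))
        {
            var order = db.Orders.Single(o => o.Id == orderId);
            order.Status = "Shipped";
            db.SubmitChanges();
        }   // tracked objects and the connection are released here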

    Read the article

  • MD5 hash performance with big files when verifying copies in a shared folder

    - by alhambraeidos
    Hi all. My Windows Forms .NET app on Windows XP copies PDF files to a shared network folder on a Windows Server 2003 machine. The admin user on Win2003 has detected some corrupt PDF files in that shared folder. I want to check whether a file was copied correctly to the shared folder. Andre Krijen tells me the best way is to compute an MD5 hash of the original file and, once the file is copied, verify the MD5 hash of the copied file against the original one. I have big PDF files. Does computing an MD5 hash of a big file cause any performance problems? What if I only check the lengths of the files (original and copied), without generating an MD5 hash? Thanks in advance.
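
    For reference, a small C# sketch: ComputeHash reads the stream in buffered chunks, so memory use stays flat no matter how big the PDF is, and the cost is one sequential read per file (a plain length comparison catches truncation but not corruption):

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static string Md5OfFile(string path)
        {
            using (var md5 = MD5.Create())
            using (var stream = File.OpenRead(path))
            {
                // Hashes the file in chunks, not all at once.
                byte[] hash = md5.ComputeHash(stream);
                return BitConverter.ToString(hash).Replace("-", "");
            }
        }

        // Usage: compare Md5OfFile(originalPath) with Md5OfFile(copyPath).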

    Read the article

  • One big call vs. multiple smaller TSQL calls

    - by BrokeMyLegBiking
    I have an ADO.NET/T-SQL performance question. We have two options in our application: 1) one big database call with multiple result sets, stepping through each result set in code to populate my objects; this results in one round trip to the database. 2) Multiple small database calls. There is much more code reuse with option 2, which is an advantage of that option. But I would like some input on the performance cost. Are two small round trips twice as slow as one big round trip to the database, or is it just a small, say 10%, performance loss? We are using C# 3.5 and SQL Server 2008 with stored procedures and ADO.NET.
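
    For context, a brief C# sketch of the one-round-trip variant (the procedure name is a placeholder): SqlDataReader.NextResult steps through the multiple result sets a single stored procedure returns:

        using System.Data;
        using System.Data.SqlClient;

        // Inside a data-access method:
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetPageData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* populate the first object set */ }

                reader.NextResult();        // advance to the next SELECT
                while (reader.Read()) { /* populate the second object set */ }
            }
        }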

    Read the article

  • What to do with a big image that's slowing website loading down significantly

    - by Dave
    Hi, I'm working on a website that has already been designed by someone else. The designer used one big image (900x700, 100KB) that contains a big logo right across the top, plus the background for two columns. This image loads every time a page is loaded, as it forms the basis of the site. What should I do with it to improve loading time? I'm considering splitting it up into two or more images, especially separating out the logo at the top. Does splitting up an image like that decrease loading time in any significant way? Thanks.

    Edit: also, all the images are .jpg; would changing them to .gif or .png help anything?

    Read the article
