Search Results

Search found 1536 results on 62 pages for 'dr monkey'.


  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    Ok, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, either by IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note, the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thoughts were that it was an errant redirect rule, or some other sort of server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sort of rules that out). We're really not sure what is causing all of these errors, or even if they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down to a TL;DR: * Site resolving in browser, both IP and domain name. No problems here. * Site not being crawled by Google (gets a 302 it doesn't seem to follow) - it is not due to robots.txt or meta tags. * Ping is not working for the IP address. This is very odd, because again, the IP address seems to work fine in the browser. * Our thoughts are either a redirect rule issue, a domain pointing issue, or possibly some errant code - or some combination of the three.
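    A quick diagnostic sketch (example.com stands in for the client's real domain; these commands only illustrate one way to see what a crawler receives):

        # Show the status line and headers returned for the first request
        curl -I http://example.com/

        # Follow the redirect chain and print each hop's status and Location header
        curl -sIL http://example.com/ | grep -E '^(HTTP|Location)'

        # Ping failing while HTTP works often just means ICMP is filtered by a firewall;
        # probing the TCP port directly avoids that ambiguity
        nc -zv example.com 80

    If the 302 only appears for certain user agents, repeating the first command with curl -A "Googlebot" would help narrow down an errant rewrite rule.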

    Read the article

  • How to apply or chain multiple matching templates in XSLT?

    - by Ignatius
    I am working on a stylesheet employing many templates with match attributes: <xsl:template match="//one" priority="0.7"> <xsl:param name="input" select="."/> <xsl:value-of select="util:uppercase($input)"/> <xsl:next-match /> </xsl:template> <xsl:template match="/stuff/one"> <xsl:param name="input" select="."/> <xsl:value-of select="util:add-period($input)"/> </xsl:template> <xsl:function name="util:uppercase"> <xsl:param name="input"/> <xsl:value-of select="upper-case($input)"/> </xsl:function> <xsl:function name="util:add-period"> <xsl:param name="input"/> <xsl:value-of select="concat($input,'.')"/> </xsl:function> What I would like to do is be able to 'chain' the two functions above, so that an input of 'string' would be rendered in the output as 'STRING.' (with the period.) I would like to do this in such a way that doesn't require knowledge of other templates in any other template. So, for instance, I would like to be able to add a "util:add-colon" method without having to open up the hood and monkey with the existing templates. I was playing around with the <xsl:next-match/> instruction to accomplish this. Adding it to the first template above does of course invoke both util:uppercase and util:add-period, but the output is an aggregation of each template output (i.e. 'STRINGstring.') It seems like there should be an elegant way to chain any number of templates together using something like <xsl:next-match/>, but have the output of each template feed the input of the next one in the chain. Am I overlooking something obvious?
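    One possible way to get that chaining behaviour (a sketch, not code from the original stylesheet) is to let each template transform the value and pass the result to the next lower-priority match via xsl:next-match with xsl:with-param, so that only the last template in the chain writes output:

        <xsl:template match="/stuff/one" priority="1.0">
          <xsl:param name="input" select="."/>
          <!-- transform, then delegate to the next matching template -->
          <xsl:next-match>
            <xsl:with-param name="input" select="util:add-period($input)"/>
          </xsl:next-match>
        </xsl:template>

        <xsl:template match="//one" priority="0.7">
          <xsl:param name="input" select="."/>
          <!-- the last link in the chain is the only one that emits output -->
          <xsl:value-of select="util:uppercase($input)"/>
        </xsl:template>

    With this arrangement 'string' comes out as 'STRING.', and a new step such as util:add-colon can be added as another template with its own priority without touching the existing ones.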

    Read the article

  • What's the easiest route to trying out Mono 2.6?

    - by E J
    We have several web applications built on Microsoft technologies (ASP.NET + MVC framework, built using VS2008, MS SQL Server). I have recently been playing with Ubuntu (9.10), installed using Wubi, and wanted to see if I can get our apps running on a FOSS software stack. I have got the hang of the very basics of PostgreSQL, and I have read that there is some support for LINQ to SQL in Mono (as of 2.6), as well as ASP.NET/MVC. However, I am unsure how to go about getting Mono 2.6 up and running. Here is what I have discovered so far: Ubuntu is not meant for the 'cutting edge'; it is designed to be stable, hence it sometimes takes a release cycle or two for new software to make it to the repositories. Mono is already installed by default, but it is likely to stay at version 2.4 for at least the 10.04 release. You can install parallel environments of Mono, if you know what you're doing. I have had a go at setting up parallel environments, but haven't had any luck yet. (And TBH I am not certain that it will do what I think it's going to do.) (tl;dr start here) Is there a distribution of Linux similar enough to Ubuntu that I wouldn't have to start the learning curve all over again, but that will let me install Mono 2.6, PostgreSQL (and possibly MonoDevelop 2.4)? Or should I persist with Ubuntu?
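    For reference, the usual way to try a newer Mono without disturbing the packaged 2.4 is to build it into its own prefix; this is only a sketch of that parallel-environment approach (the tarball URL and version are illustrative):

        # Build Mono 2.6 into /opt/mono-2.6, leaving the system Mono alone
        sudo apt-get install build-essential bison gettext
        wget http://download.mono-project.com/sources/mono/mono-2.6.7.tar.bz2
        tar xjf mono-2.6.7.tar.bz2 && cd mono-2.6.7
        ./configure --prefix=/opt/mono-2.6
        make && sudo make install

        # Activate the parallel install for the current shell only
        export PATH=/opt/mono-2.6/bin:$PATH
        export PKG_CONFIG_PATH=/opt/mono-2.6/lib/pkgconfig:$PKG_CONFIG_PATH
        mono --version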

    Read the article

  • Daisy-chainable DisplayPort Monitors

    - by thepurplepixel
    I am the proud new owner of a MacBook Pro with a mini-DisplayPort. My desk setup used to allow me to position the screen of my old MacBook beside an external monitor, essentially allowing dual-head. It was also advantageous that my old MBP and my external monitor had the same resolution, 1440x900. Now, I'm searching for a set of two monitors that I can use a HengeDock with. Unfortunately, the MacBook Pro suffers from having only one mini-DisplayPort. Looking up the spec for DisplayPort 1.2 (which the MBP supports), DisplayPort daisy chaining is supported. What I'm looking for is a monitor that has two DisplayPorts so I can daisy chain two monitors off the single mini-DisplayPort. What I don't want is a USB-based video solution. I need full acceleration on both monitors; an external video card won't cut it. I hope I don't have to wait a few years for these monitors to come out. TL;DR: I need two monitors with two DisplayPorts each that I can daisy-chain. Thanks!

    Read the article

  • How should I configure nginx caching headers for a "baked" static file blog? (Octopress)

    - by Doug Stephen
    I recently deployed an Octopress blog (which is a blogging platform built around Jekyll). It's a static-site blog generator, with no dynamic content or databases to muck about with. It's being served up by nginx. My question is, what is the appropriate expires directive or Cache-Control header that I should set to make sure that visitors get the most up-to-date version of the site when they visit without having to manually refresh? Since the site is just .html files it seems to get cached pretty aggressively. I've tried a million different combinations of expires modified + xxxx and even straight up expires off but I can't seem to wrap my head around it. I'm very new to dealing with caching like this, specifically, on static files that change frequently, and obviously if the site hasn't been changed then I'd like for it to be served up out of the cache. Update (still not solved though): I found open_file_cache, tweaked that. Still no dice. It seems like what I might want to do is use nginx as a proxy cache and use Apache with ETags? Is there really no convenient way to make nginx play nicer with conditional requests from the client? TL;DR: I'm running a static-file blog and I'd like to set up nginx to only serve from the cache if the blog hasn't been updated recently, but I'm too stupid to figure it out myself because I'm relatively new to web servers.
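    One direction that usually works for a baked static site (offered only as a sketch; the paths are placeholders and the config is untested against this particular server) is to stop fighting expires and lean on conditional requests instead: keep validation on and tell clients they may cache but must revalidate, so an unchanged page costs a 304 and an updated one shows up immediately:

        # inside the server block that serves the generated site
        location / {
            root /var/www/octopress/public;       # hypothetical path to the baked files
            etag on;                              # available since nginx 1.3.3; older versions rely on Last-Modified
            add_header Cache-Control "no-cache";  # "cache, but revalidate before reuse"
        }

        # fingerprinted assets can still be cached aggressively
        location ~* \.(css|js|png|jpg|gif)$ {
            root /var/www/octopress/public;
            expires 30d;
        }

    nginx answers If-Modified-Since / If-None-Match for static files on its own, so a proxy-cache-plus-Apache arrangement shouldn't be necessary just to get conditional requests.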

    Read the article

  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC, on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html I have a simple problem with that - every time I want to modify something in the hash table (like assign an IP to a different flowid), I need to delete the whole filter table and add it again filter by filter. (I actually don't do it by hand, I have a nice program that does it for me... but still...) The problem is that I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily manage to delete only the rules that need to be deleted (thus reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes only the one hashing rule. My tc filter show: filter parent 1: protocol ip pref 1 u32 filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256 filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101 match 0a0a0a0a/ffffffff at 16 filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102 match 0a0a0a0c/ffffffff at 16 filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1 filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2: match 00000000/00000000 at 16 hash mask 000000ff at 16 The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a IP match (IP address 10.10.10.10)). Removal of some small subgroup would also be good - for example, I could still recreate a bucket (bkt a) pretty fast. My attempts: I tried to number all the filters using prio, but with no help -- they just create something unusable (but deletable) below, but the bucketed filters remain there after that gets deleted. Any ideas? Edit - I'm adding a simplified tl;dr description of the problem: I created a hash filter on an interface just like in http://lartc.org/howto/lartc.adv-filter.hashing.html and I want to find a command that deletes one rule (e.g. 1.2.1.123) from the table, leaving the rest untouched and working.
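    For what it's worth, u32 entries can normally be deleted individually by the handle shown in the listing above (the fh value); the following is a sketch based on that output, with eth0 standing in for the real interface, so the handle should be double-checked on the live system first:

        # Delete only the 10.10.10.10 rule (handle 2:a:800 in the listing above)
        tc filter del dev eth0 parent 1: protocol ip prio 1 handle 2:a:800 u32

        # Add a replacement rule for the same IP, now pointing at a different flowid
        tc filter add dev eth0 parent 1: protocol ip prio 1 handle 2:a:801 u32 \
            ht 2:a: match ip dst 10.10.10.10/32 flowid 1:102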

    Read the article

  • VMware ESXi 5: can't create snapshots and consolidation fails; how to delete old redo logs or consolidate them?

    - by Scott Szretter
    I have a VM that seems to be working OK, but when VMware DR (or I) tries to create a snapshot, it fails, and when I view the summary page of the VM it has a warning at the top showing that the disks need to be consolidated. So I go to snapshot manager for the VM and choose consolidate (in snapshot manager, there are no snapshots actually listed, by the way). It fails with this error: This virtual machine has 255 or more redo logs in a single branch of its snapshot tree. The maximum supported limit has been reached, creating new snapshots will not be allowed. To create new snapshots, please delete old snapshots or consolidate the redo logs. If I browse the datastore (which has plenty of free space: 2 TB, and this VM is under 40 GB), in the VM folder I do in fact see a bunch of files, numbered all the way to 0255: myvm-000255-ctk.vmdk myvm-000255-delta.vmdk myvm-000255.vmdk How can I clean all this up? Is there an SSH command-line command, or can I delete some of the files safely? Thanks!
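    One commonly used way out of an over-deep snapshot chain, offered here only as a hedged sketch (the datastore path and VM name are placeholders, a backup should come first, and the .vmx should be checked to confirm which -0002xx.vmdk it currently points at), is to clone the tip of the chain into a fresh consolidated disk with vmkfstools over SSH and then repoint the VM at the clone:

        # With the VM powered off, clone the newest link of the chain;
        # vmkfstools -i follows the parent links and writes out a consolidated disk
        vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm-000255.vmdk \
                      /vmfs/volumes/datastore1/myvm/myvm-consolidated.vmdk

        # Then edit the .vmx (scsi0:0.fileName) to use myvm-consolidated.vmdk,
        # verify the VM boots, and only afterwards remove the old myvm-0002xx-* files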

    Read the article

  • Release management with a distributed version control system

    - by See Sharp Cheddar
    We're considering a switch from SVN to a distributed VCS at my workplace. I'm familiar with all the reasons for wanting to use a DVCS for day-to-day development: local version control, easier branching and merging, etc., but I haven't seen much that's compelling in terms of managing software releases. Here's our release process: Discover what changes are available for merging. Run a query to find the defects/tickets associated with these changes. Filter out changes associated with "open" tickets. In our environment, tickets must be in a closed state in order to be merged into a release branch. Filter out changes we don't want in the release branch. We are very conservative when it comes to merging changes. If a change isn't absolutely necessary, it doesn't get merged. Merge available changes, preferably in chronological order. We group changes together if they're associated with the same ticket. Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again. Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long. I've been messing around with git, mercurial and plastic, and as far as I can tell none of them addresses this model very well. It seems like they would work very well when you have only one product you're releasing, but I can't imagine using them for juggling multiple, very different products from the same codebase. For example, cherry-picking seems to be an afterthought in mercurial. (You have to use the 'transplant' command.) After you cherry-pick a change into a branch, it still shows up as an available integration. Cherry-picking breaks the mercurial way of working. DVCS seems to be better suited for feature branches. There's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? And how do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos. I'm torn, because the coder in me wants DVCS for day-to-day work. I really want it. But I fear the day when I have to put on the release manager hat and sort out what needs to be merged and what doesn't. I want to write code, I don't want to be a merge monkey.
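    Purely as an illustrative sketch of how two of those steps map onto one DVCS (the branch name and commit ids are made up, and this isn't an endorsement of git over the alternatives):

        # What is still available to merge: commits on master not yet in the release branch
        git log --oneline --cherry-pick --right-only release/2.4...master

        # Bring over only the commits belonging to a closed ticket, oldest first
        git checkout release/2.4
        git cherry-pick abc1234 def5678    # hypothetical commit ids for that ticket

    There is no direct equivalent of svnmerge block, though; the usual substitutes are an empty merge (-s ours), which also marks everything older as merged, or keeping the block list outside the tool.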

    Read the article

  • Thunderbird 11.0.1 and Lightning 1.3: How do I propose a different time for a meeting?

    - by seaao
    This all happens on my x64 Linux workstation, btw. tl;dr: My colleague invited some people and me to a meeting. The meeting was scheduled a week too late. I - wrongfully - accepted. How do I propose a new time? To explain in a bit more detail: I received a meeting request in my mailbox. Thunderbird is nice enough to let me accept or reject it, and after I click the button to accept, it is directly added to my calendar. But when I double-click on the meeting to edit it, I get a functionally scaled-down version of the meeting: the only settings I can alter are whether I want reminders, and whether I go at all. Trying to drag it to another day doesn't work either: my calendar behaves as if it were read-only (which it isn't, btw). There are several questions (without answers...) to be found on Stack Overflow and on the Thunderbird knowledge base about using Lightning. But I get the idea that I'm one of the few who won't comply with the team even before the meeting has started. My googling revealed no bugs or feature requests in the direction I'm thinking. A link to an explanation of how to achieve this, or another perspective on how to reach the desired goal (a meeting with my colleagues and me) would be most welcome!

    Read the article

  • Spreadsheet application that can handle big data on OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the databases in question is quite simple, usually just a few columns: a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average of the rows whose timestamps are within +/- 1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files with 500,000+ rows. Running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX CL utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: A simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
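    For the command-line route, a windowed average like the one described can be computed in seconds for half a million rows; this is only a sketch, and the file and column names are assumptions rather than anything from the original sheet:

        #!/usr/bin/env python
        # Mean of 'value' over all rows whose timestamp is within +/- 1000 of each row's timestamp
        import numpy as np
        import pandas as pd

        df = pd.read_csv("data.csv")                  # assumed columns: timestamp, value
        df = df.sort_values("timestamp").reset_index(drop=True)

        ts = df["timestamp"].to_numpy()
        vals = df["value"].to_numpy(dtype=float)

        # prefix sums make every windowed mean a pair of binary searches instead of an AVERAGEIFS scan
        csum = np.concatenate(([0.0], np.cumsum(vals)))
        lo = np.searchsorted(ts, ts - 1000, side="left")
        hi = np.searchsorted(ts, ts + 1000, side="right")

        df["window_avg"] = (csum[hi] - csum[lo]) / (hi - lo)
        df.to_csv("data_with_avg.csv", index=False)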

    Read the article

  • HP laptop recognizes hard drive just long enough to install Windows

    - by Joe
    I have an HP laptop, DV6500 (CTO). It refused to boot one day, so I ran some diagnostics (a friend lent me "Hirens Boot Disk", "UBCD" and "PC DR 6"). Everything passed, except for the hdd. I replaced the HDD with a used drive of unknown condition. Installed windows with no problems. Installed the wireless driver, tried to reboot ... no luck. So I went to Best Buy, bought a brand new Western Digital 320gb HDD. Put it in the machine, installed windows (vista home premium). Installed the wired networking driver. Tried to reboot. No luck. Put the first hdd back in the machine, reinstalled windows. Started to install some drivers, went to reboot, and the machine won't come back to life. Put the second hdd in the machine, rinse wash and repeat. I've replaced the memory, even though it passed diagnostics. Problem exists with both brand new memory, and old memory. The BIOS recognizes the hard drive. The computer freezes directly after the bios splash screen, and there is no hard drive activity light. I've tried two linux live distros (gentoo and ubuntu). Neither would run on this laptop, but will on a different HP laptop. UBCD and Hirens Boot Disk both ran, as did PC Doctor 6 which refuses to test anything (gets stuck at "enumerating hard disks"). Is there anything else I can try?

    Read the article

  • On linux, what does it mean when a directory has size 0 instead of 4096?

    - by kdt
    Here's a strange thing I haven't seen before -- a directory whose size is reported by ls as 0 instead of 4096, and I can't create any files within it. # ls -ld lib home drwxr-xr-x. 2 root root 0 Feb 7 03:10 home <-- it has zero size dr-xr-xr-x. 11 root root 4096 Feb 4 09:28 lib # touch home/foo touch: cannot touch `home/foo': No such file or directory <-- and I can't create files in it # rm home rm: cannot remove `home': Is a directory <-- look, it really is a dir So what does it mean for a directory to have size 0 instead of 4096? Filesystem is ext4 on fedora core 14. The output of mount is: /dev/mapper/vg_dev-lv_root on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") /dev/vda1 on /boot type ext4 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) Output of du -s /home: 0 /home Output of stat /home: File: `/home' Size: 0 Blocks: 0 IO Block: 1024 directory Device: 15h/21d Inode: 34913 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2011-02-07 03:45:46.188995765 -0800 Modify: 2011-02-07 03:11:59.980995019 -0800 Change: 2011-02-06 07:58:45.874995002 -0800
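    One pattern that can produce exactly this picture (size 0, 0 blocks, an IO block size that doesn't match ext4's usual 4096) is another filesystem, often autofs, sitting mounted on top of the directory; purely as a hedged diagnostic sketch, the following would show whether that is the case here:

        # /proc/mounts can list automount points that the mtab-based 'mount' output omits
        grep ' /home ' /proc/mounts

        # Compare the filesystem type and block size of /home with the root filesystem
        stat -f /home
        stat -f /

        # If autofs turns out to be involved, its maps (or the service itself) are the next place to look
        service autofs status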

    Read the article

  • What to Expect in Rails 4

    - by mikhailov
    Rails 4 is nearly there, we should be ready before it released. Most developers are trying hard to keep their application on the edge. Must see resources: 1) @sikachu talk: What to Expect in Rails 4.0 - YouTube 2) Rails Guides release notes: http://edgeguides.rubyonrails.org/4_0_release_notes.html There is a mix of all major changes down here: ActionMailer changes excerpt: Asynchronously send messages via the Rails Raise an ActionView::MissingTemplate exception when no implicit template could be found ActionPack changes excerpt Added controller-level etag additions that will be part of the action etag computation Add automatic template digests to all CacheHelper#cache calls (originally spiked in the cache_digests plugin) Add Routing Concerns to declare common routes that can be reused inside others resources and routes Added ActionController::Live. Mix it in to your controller and you can stream data to the client live truncate now always returns an escaped HTML-safe string. The option :escape can be used as false to not escape the result Added ActionDispatch::SSL middleware that when included force all the requests to be under HTTPS protocol ActiveModel changes excerpt AM::Validation#validates ability to pass custom exception to :strict option Changed `AM::Serializers::JSON.include_root_in_json' default value to false. Now, AM Serializers and AR objects have the same default behaviour Added ActiveModel::Model, a mixin to make Ruby objects work with AP out of box Trim down Active Model API by removing valid? and errors.full_messages ActiveRecord changes excerpt Use native mysqldump command instead of structure_dump method when dumping the database structure to a sql file. Attribute predicate methods, such as article.title?, will now raise ActiveModel::MissingAttributeError if the attribute being queried for truthiness was not read from the database, instead of just returning false ActiveRecord::SessionStore has been extracted from Active Record as activerecord-session_store gem. Please read the README.md file on the gem for the usage Fix reset_counters when there are multiple belongs_to association with the same foreign key and one of them have a counter cache Raise ArgumentError if list of attributes to change is empty in update_all Add Relation#load. This method explicitly loads the records and then returns self Deprecated most of the 'dynamic finder' methods. All dynamic methods except for find_by_... and find_by_...! are deprecated Added ability to ActiveRecord::Relation#from to accept other ActiveRecord::Relation objects Remove IdentityMap ActiveSupport changes excerpt ERB::Util.html_escape now escapes single quotes ActiveSupport::Callbacks: deprecate monkey patch of object callbacks Replace deprecated memcache-client gem with dalli in ActiveSupport::Cache::MemCacheStore Object#try will now return nil instead of raise a NoMethodError if the receiving object does not implement the method, but you can still get the old behavior by using the new Object#try! Object#try can't call private methods Add ActiveSupport::Deprecations.behavior = :silence to completely ignore Rails runtime deprecations What are the most important changes for you?
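    To make one of the items above concrete, Routing Concerns let a common set of nested routes be declared once and reused; this is only an illustrative sketch of the API described in the release notes (the application and resource names are made up), not code from the article:

        # config/routes.rb
        MyApp::Application.routes.draw do
          concern :commentable do
            resources :comments
          end

          # equivalent to nesting `resources :comments` inside both blocks
          resources :posts,    concerns: :commentable
          resources :articles, concerns: :commentable
        end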

    Read the article

  • Starting programs from terminal then exiting terminal exits started programs?

    - by sherrellbc
    I really was unsure how to phrase the question title. What I mean is that when I use the terminal to start a program, most of the time when the terminal is closed it also exits the programs started from it. Now this makes sense if we look at it from a hierarchical standpoint: the terminal is the parent process which spawns child processes, and any halt of the parent causes subsequent halting of the children as well. However, I've noticed this is not always the case. For example, I downloaded the Sublime Text editor and created a symlink in PATH. I can start this program by issuing a sublime command from the terminal, but subsequent closure of the terminal program does nothing to Sublime. However, other times the child process that was started is also closed, or it hangs up and causes problems. tl;dr: Is it always the case that programs started from a parent process will be closed when the parent is exited? And if so, is there a way to start a program from the terminal and then close the terminal without exiting the started process? The whole point here is to start programs from the terminal so I do not over-populate my desktop with symlinks.
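    For the second half of the question, the usual tools are nohup, disown, or setsid, which detach the child from the terminal session so the SIGHUP sent when the terminal closes no longer takes the program down; a small sketch, reusing the sublime symlink mentioned above:

        # Option 1: start immune to hangup, with output redirected away from the terminal
        nohup sublime >/dev/null 2>&1 &

        # Option 2: start normally, then remove the job from the shell's job table
        sublime &
        disown

        # Option 3: give the program its own session so it never belonged to the terminal at all
        setsid sublime >/dev/null 2>&1 &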

    Read the article

  • Zend Framework: where should this root.php file go for MVC?

    - by Joel
    Hi guys, I'm converting over a web-app to use the MVC structure of Zend Framework. I have a root.php include file that contains most of the database info, and some static variables that are used in the program. I'm not sure if some of this should be in the application.ini of in a model that is called by the init() function in a controller, or in the bootstrap or what? Any help would be much appreciated! root.php (include file at the top of every php page): <?php /*** //Configuration file */ ## Site Configuration starts ## define("SITE_ROOT" , dirname(__FILE__)); define("SITE_URL" , "http://localhost/monkeycalendarapp/monkeycalendarapp/public"); define('DB_HOST', "localhost"); define('DB_USER', "root"); define('DB_PASS', "xxx"); define('DB_NAME', "xxxxx"); define("PROJECT_NAME" , "Monkey Mind Manager (beta 2.2)"); //site title define("CALENDAR_WIDTH" , "300"); //left mini calendar width define("CALENDAR_HEIGHT" , "150"); //left mini calendar height $page_title = 'Event List'; $stylesheet_name = 'style.css'; //default stylesheet define("SITE_URL_AJAX" , SITE_URL . "/ajax-tooltip"); define("JQUERY" , SITE_URL . "/jquery-ui-1.7.2"); $a_times = array("12:00","12:30","01:00","01:30","02:00","02:30","03:00","03:30","04:00","04:30","05:00","05:30","06:00","06:30","07:00","07:30","08:00","08:30","09:00","09:30","10:00","10:30","11:00","11:30"); //PTLType Promotional timeline type $a_ptlType= array(1=>"Gigs","To-Do","Completed"); $a_days = array("Su","Mo","Tu","We","Th","Fr","Sa"); $a_timesMerd = array("12:00am","12:30am","01:00am","01:30am","02:00am","02:30am","03:00am","03:30am","04:00am","04:30am","05:00am","05:30am","06:00am","06:30am","07:00am","07:30am","08:00am","08:30am","09:00am","09:30am","10:00am","10:30am","11:00am","11:30am","12:00pm","12:30pm","01:00pm","01:30pm","02:00pm","02:30pm","03:00pm","03:30pm","04:00pm","04:30pm","05:00pm","05:30pm","06:00pm","06:30pm","07:00pm","07:30pm","08:00pm","08:30pm","09:00pm","09:30pm","10:00pm","10:30pm","11:00pm","11:30pm"); //Setting stylesheet for this user. $AMPM=array("am"=>"am","pm"=>"pm"); include(SITE_ROOT . "/includes/functions/general.php"); include(SITE_ROOT . "/includes/db.php"); session_start(); if(isset($_SESSION['userData']['UserID'])) { $s_userID = $_SESSION['userData']['UserID']; } $stylesheet_name = stylesheet(); ini_set('date.timezone', 'GMT'); date_default_timezone_set('GMT'); if($s_userID) { ini_set('date.timezone', $_SESSION['userData']['timezone']); date_default_timezone_set($_SESSION['userData']['timezone']); } ?>
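    By way of illustration only (the values are copied from root.php above, but the appSettings keys and the registry usage are just one possible arrangement, not the only correct one): the database credentials usually move into application.ini, where Zend_Application's Db resource picks them up automatically, and the remaining constants can be exposed from the Bootstrap:

        ; application/configs/application.ini
        [production]
        resources.db.adapter         = "PDO_MYSQL"
        resources.db.params.host     = "localhost"
        resources.db.params.username = "root"
        resources.db.params.password = "xxx"
        resources.db.params.dbname   = "xxxxx"
        appSettings.projectName      = "Monkey Mind Manager (beta 2.2)"
        appSettings.calendarWidth    = 300

        // application/Bootstrap.php
        class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
        {
            protected function _initConfig()
            {
                // make the whole option tree available to controllers and views
                Zend_Registry::set('config', new Zend_Config($this->getOptions()));
            }
        }

    Session-dependent pieces such as the per-user timezone and stylesheet tend to fit better in a front controller plugin or a base controller's init() than in a global include.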

    Read the article

  • XNA 4.0 draw a cube with DrawUserIndexedPrimitives method [on hold]

    - by Leggy7
    EDIT: Since I read what Mark H suggested (thanks a lot, I found it very useful), I think my question can be made clearer by structuring it this way: Using XNA 4.0, I'm trying to draw a cube. I'm using this method: GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>( PrimitiveType.LineList, primitiveList, 0, // vertex buffer offset to add to each element of the index buffer 8, // number of vertices in pointList lineListIndices, // the index buffer 0, // first index element to read 7 // number of primitives to draw ); I got the code sample from this page, which simply draws a series of triangles. I want to modify this code in order to draw a cube. I was able to slightly move the camera so I can get a perception of solidity, and I set the vertex array to contain the 8 points defining a cube. But I can't fully understand how many primitives I have to draw (the last parameter) for each PrimitiveType. So, I wasn't able to draw the cube (just some of the edges, in a non-defined order). In more detail: to build the vertex index list, the sample used // Initialize an array of indices of type short. lineListIndices = new short[(points * 2) - 2]; // Populate the array with references to indices in the vertex buffer for (int i = 0; i < points - 1; i++) { lineListIndices[i * 2] = (short)(i); lineListIndices[(i * 2) + 1] = (short)(i + 1); } I'm ashamed to say I cannot do the same in the case of a cube. What should the size of lineListIndices be? How should I populate it? In which order? And how do these things change when I use a different PrimitiveType? In the code sample there are also a couple of other calls which I cannot fully understand, which are: // Initialize the vertex buffer, allocating memory for each vertex. vertexBuffer = new VertexBuffer(graphics.GraphicsDevice, vertexDeclaration, points, BufferUsage.None); // Set the vertex buffer data to the array of vertices. vertexBuffer.SetData<VertexPositionColor>(pointList); and vertexDeclaration = new VertexDeclaration(new VertexElement[] { new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(12, VertexElementFormat.Color, VertexElementUsage.Color, 0) } ); that is, for VertexBuffer and VertexDeclaration I could not find any significant (monkey-like) guide. I reported them too because I think they could be involved in understanding things. I think I also have to understand something related to the order the vertices are stored in the array. But actually I have no clue what I should learn to have this function draw a cube. So, if anybody could point me in the right direction, it will be appreciated. Hope to have made myself clear this time.
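    For what it's worth, here is a hedged sketch of how the index buffer and the primitive count relate for a cube drawn as a LineList; the ordering of the 8 corners is an assumption (0-3 bottom face, 4-7 top face, vertex i+4 directly above vertex i). With LineList every primitive is one edge, so 12 edges need 24 indices and a primitive count of 12:

        short[] cubeLineIndices = new short[]
        {
            0,1, 1,2, 2,3, 3,0,   // bottom face edges
            4,5, 5,6, 6,7, 7,4,   // top face edges
            0,4, 1,5, 2,6, 3,7    // vertical edges
        };

        GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>(
            PrimitiveType.LineList,
            pointList,           // the 8 cube corners
            0,                   // vertex buffer offset
            8,                   // number of vertices
            cubeLineIndices,     // 24 entries = 12 edges * 2 indices each
            0,                   // first index element to read
            12);                 // LineList: one primitive per edge

    With PrimitiveType.TriangleList the same idea applies, except each face needs two triangles of 3 indices, giving 36 indices and a primitive count of 12.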

    Read the article

  • Is Apache 2.2.22 able to sustain 1.000 simultaneous connected clients?

    - by Fnux
    For an article in a news paper, I'm benchmarking 5 different web servers (Apache2, Cherokee, Lighttpd, Monkey and Nginx). The tests made consist of measuring the execution times as well as different parameters such as the number of request served per second, the amount of RAM, the CPU used, during a growing load of simultaneous clients (from 1 to 1.000 with a step of 10) each client sending 1.000.000 requets of a small fixed file, then of a medium fixed file, then a small dynamic content (hello.php) and finally a complex dynamic content (the computation of the reimbursment of a loan). All the web servers are able to sustain such a load (up to 1.000 clients) but Apache2 which always stops to respond when the test reach 450 to 500 simultaneous clients. My configuration is : CPU: AMD FX 8150 8 cores @ 4.2 GHz RAM: 32 Gb. SSD: 2 x Crucial 240 Gb SATA6 OS: Ubuntu 12.04.3 64 bit WS: Apache 2.2.22 My Apache2 configuration is as follows: /etc/apache2/apache2.conf LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 30 KeepAlive On MaxKeepAliveRequests 1000000 KeepAliveTimeout 2 ServerName "fnux.net" <IfModule mpm_prefork_module> StartServers 16 MinSpareServers 16 MaxSpareServers 16 ServerLimit 2048 MaxClients 1024 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType None HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel emerg Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ /etc/apache2/ports.conf NameVirtualHost *:8180 Listen 8180 <IfModule mod_ssl.c> Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> /etc/apache2/mods-available <IfModule mod_fastcgi.c> AddHandler php5-fcgi .php Action php5-fcgi /cgi-bin/php5.external <Location "/cgi-bin/php5.external"> Order Deny,Allow Deny from All Allow from env=REDIRECT_STATUS </Location> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:8180> ServerAdmin webmaster@localhost DocumentRoot /var/www/apache2 <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel emerg ##### CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> <IfModule mod_fastcgi.c> AddHandler php5-fcgi .php Action php5-fcgi /php5-fcgi Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization </IfModule> </VirtualHost> /etc/security/limits.conf * soft nofile 1000000 * hard nofile 1000000 So, I would trully appreciate your advice to setup Apache2 to make it able to sustain 
1.000 simultaneous clients, if this is even possible. TIA for your help. Cheers.
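    Purely as a hedged sketch (the numbers are illustrative, not tuned for this benchmark): with the prefork MPM each of the 1.000 clients needs its own resident process, so one commonly suggested direction is the threaded worker MPM, sized explicitly, for example:

        # apache2-mpm-worker instead of apache2-mpm-prefork
        <IfModule mpm_worker_module>
            ServerLimit           64
            ThreadsPerChild       25
            MaxClients          1600   # must not exceed ServerLimit * ThreadsPerChild
            StartServers           8
            MinSpareThreads       75
            MaxSpareThreads      250
            MaxRequestsPerChild    0
        </IfModule>

    Since PHP is already handed off over FastCGI in this configuration rather than run via mod_php, the usual thread-safety objection to the worker MPM does not apply.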

    Read the article

  • How to install Delorme StreetAtlas (any version) + GPS inside VirtualBox VM?

    - by hotei
    When I try to run the install program I get a popup message that says the installer program is not a valid executable. Background: I want a GPS with maps on my laptop running Ubuntu 10.4LTS. Unfortunately I can't find a decent native Linux GPS solution with 50 state US street level coverage. I have VirtualBox VMs available for WinXP and Win7 (among others). The VMs work fine with MicroSoft Streets and Trips (2010) and MapNGo 5 (a very! old Delorme product), but while both these products support GPS, they don't support the Earthmate LT-40 USB GPS I already have. I've got pretty much every Delorme Street Atlas they've released in the last decade and none of them will install in a VM. Any help would be much appreciated. Clarification: I've installed the Delorme products from these CDs before and the disks are fine - as long as installation is done on a "physical" machine. Added: I've tried install from an iso as well as the real CD. No difference in result (setup.exe is not a valid executable) The WinXP is SP-2 (held back on purpose at this point - I'll snapshot and fork a later SP to test). The Win2K is SP-6a. Win7(32) VM is whatever updates came out last week. The USB setup is working at least to the point where the GPS device is active in the device list (has an x in the box). At this point its not relevant because the program that needs to read it can't even be installed. Added 9-19: Added wine as harrymc suggested. Initial result was no change. Here's wines error message. The file '/media/Disk1/setup.exe' is not marked as executable. If this was downloaded or copied form an untrusted source, it may be dangerous to run. For more details, read about the executable bit. At first I thought the execute bit was the problem, but looking at several other windows CDs I see that the execute bit is not set on their exe files (which install to VM without error). Still it was worth a shot so I copied the StreetAtlas 9 DVD to my hard disk, changed the on-disk exe files to have the execute bit set and tried to install again. This time the install via wine got me through the installation process. When I start the program it bombs immediately, so we haven't made much real progress so far. I very much prefer the VM solution to wine, so I'm going back to that for now. To recap the VM situation, using an updated XP with SP3 and all recommended hotfixes: StreetAtlas 2009 USA fails with "not marked as executable". StreetAtlas 2007 USA fails with "not marked as executable". StreetAtlas 9 (copyright 2001) fails with "not marked as executable". SteeetAtlas (copyright 1991) fails with "not marked as executable" Delorme Topo 4 (copyright 2002) fails with "not marked as executable". Just about ready to give up. So I switched from XP VM to Win7 VM and tried StreetAtlas 2009 again. This time it installs. Earthmate USB GPS works. WTH? I feel like the monkey who just wrote a line of Shakespear. I'm smiling because it worked, but I have no clue why. I'm awarding the bounty to harrymc because wine did give some useful insight into the problem and a +1 to goyiux as thanks for helping.

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bothered to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to be have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
Failed Hypotheses Earlier on this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier on this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and setup a test. Yup, that'll do it The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column: CREATE DATABASE SparseColTest; GO USE SparseColTest; GO CREATE TABLE testTable (testCol smalldatetime SPARSE NULL); GO INSERT INTO testTable (testCol) VALUES (NULL); GO 1000000 That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. In my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; This runs extremely fast, as least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column: CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL; With the index in place now, let’s run DBCC CHECKDB one more time: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
    I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any such indexes – they’re likely causing your consistency checks to run very, very slowly. Happy CHECKDBing, -Argenis PS: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.

    Read the article

  • 3D Ball Physics Theory: collision response on ground and against walls?

    - by David
    I'm really struggling to get a strong grasp on how I should be handling collision response in a game engine I'm building around a 3D ball physics concept. Think Monkey Ball as an example of the type of gameplay. I am currently using sphere-to-sphere broad phase, then AABB to OBB testing (the final test I am using right now is one that checks if one of the 8 OBB points crosses the planes of the object it is testing against). This seems to work pretty well, and I am getting back: Plane that object is colliding against (with a point on the plane, the plane's normal, and the exact point of intersection. I've tried what feels like dozens of different high-level strategies for handling these collisions, without any real success. I think my biggest problem is understanding how to handle collisions against walls in the x-y axes (left/right, front/back), which I want to have elasticity, and the ground (z-axis) where I want an elastic reaction if the ball drops down, but then for it to eventually normalize and be kept "on the ground" (not go into the ground, but also not continue bouncing). Without kluging something together, I'm positive there is a good way to handle this, my theories just aren't getting me all the way there. For physics modeling and movement, I am trying to use a Euler based setup with each object maintaining a position (and destination position prior to collision detection), a velocity (which is added onto the position to determine the destination position), and an acceleration (which I use to store any player input being put on the ball, as well as gravity in the z coord). Starting from when I detect a collision, what is a good way to approach the response to get the expected behavior in all cases? Thanks in advance to anyone taking the time to assist... I am grateful for any pointers, and happy to post any additional info or code if it is useful. UPDATE Based on Steve H's and eBusiness' responses below, I have adapted my collision response to what makes a lot more sense now. It was close to right before, but I didn't have all the right pieces together at the right time! I have one problem left to solve, and that is what is causing the floor collision to hit every frame. Here's the collision response code I have now for the ball, then I'll describe the last bit I'm still struggling to understand. // if we are moving in the direction of the plane (against the normal)... if (m_velocity.dot(intersection.plane.normal) <= 0.0f) { float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration // Calculate the projection velocity PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal); m_velocity -= actingVelocity * dampeningForce; } // Clamp z-velocity to zero if we are within a certain threshold // -- NOTE: this was an experimental idea I had to solve the "jitter" bug I'll describe below float diff = 0.2f - abs(m_velocity.z); if (diff > 0.0f && diff <= 0.2f) { m_velocity.z = 0.0f; } // Take this object to its new destination position based on... // -- our pre-collision position + vector to the collision point + our new velocity after collision * time // -- remaining after the collision to finish the movement m_destPosition = m_position + intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt); The above snippet is run after a collision is detected on the ball (collider) with a collidee (floor in this case). 
With a dampening force of 1.8f, the ball's reflected "upward" velocity will eventually be overcome by gravity, so the ball will essentially be stuck on the floor. THIS is the problem I have now... the collision code is running every frame (since the ball's z-velocity is constantly pushing it a collision with the floor below it). The ball is not technically stuck, I can move it around still, but the movement is really goofy because the velocity and position keep getting affected adversely by the above snippet. I was experimenting with an idea to clamp the z-velocity to zero if it was "close to zero", but this didn't do what I think... probably because the very next frame the ball gets a new gravity acceleration applied to its velocity regardless (which I think is good, right?). Collisions with walls are as they used to be and work very well. It's just this last bit of "stickiness" to deal with. The camera is constantly jittering up and down by extremely small fractions too when the ball is "at rest". I'll keep playing with it... I like puzzles like this, especially when I think I'm close. Any final ideas on what I could be doing wrong here? UPDATE 2 Good news - I discovered I should be subtracting the intersection.diff from the m_position (position prior to collision). The intersection.diff is my calculation of the difference in the vector of position to destPosition from the intersection point to the position. In this case, adding it was causing my ball to always go "up" just a little bit, causing the jitter. By subtracting it, and moving that clamper for the velocity.z when close to zero to being above the dot product (and changing the test from <= 0 to < 0), I now have the following: // Clamp z-velocity to zero if we are within a certain threshold float diff = 0.2f - abs(m_velocity.z); if (diff > 0.0f && diff <= 0.2f) { m_velocity.z = 0.0f; } // if we are moving in the direction of the plane (against the normal)... float dotprod = m_velocity.dot(intersection.plane.normal); if (dotprod < 0.0f) { float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration? // Calculate the projection velocity PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal); m_velocity -= actingVelocity * dampeningForce; } // Take this object to its new destination position based on... // -- our pre-collision position + vector to the collision point + our new velocity after collision * time // -- remaining after the collision to finish the movement m_destPosition = m_position - intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt); UpdateWorldMatrix(m_destWorldMatrix, m_destOBB, m_destPosition, false); This is MUCH better. No jitter, and the ball now "rests" at the floor, while still bouncing off the floor and walls. The ONLY thing left is that the ball is now virtually "stuck". He can move but at a much slower rate, likely because the else of my dot product test is only letting the ball move at a rate multiplied against the tRemaining... I think this is a better solution than I had previously, but still somehow not the right idea. BTW, I'm trying to journal my progress through this problem for anyone else with a similar situation - hopefully it will serve as some help, as many similar posts have for me over the years.

    Read the article

  • Which programming idiom to choose for this open source library?

    - by Walkman
    I have an interesting question about which programming idiom is easier to use for beginner developers writing concrete file parsing classes. I'm developing an open source library, which one of the main functionality is to parse plain text files and get structured information from them. All of the files contains the same kind of information, but can be in different formats like XML, plain text (each of them is structured differently), etc. There are a common set of information pieces which is the same in all (e.g. player names, table names, some id numbers) There are formats which are very similar to each other, so it's possible to define a common Base class for them to facilitate concrete format parser implementations. So I can clearly define base classes like SplittablePlainTextFormat, XMLFormat, SeparateSummaryFormat, etc. Each of them hints the kind of structure they aim to parse. All of the concrete classes should have the same information pieces, no matter what. To be useful at all, this library needs to define at least 30-40 of these parsers. A couple of them are more important than others (obviously the more popular formats). Now my question is, which is the best programming idiom to choose to facilitate the development of these concrete classes? Let me explain: I think imperative programming is easy to follow even for beginners, because the flow is fixed, the statements just come one after another. Right now, I have this: class SplittableBaseFormat: def parse(self): "Parses the body of the hand history, but first parse header if not yet parsed." if not self.header_parsed: self.parse_header() self._parse_table() self._parse_players() self._parse_button() self._parse_hero() self._parse_preflop() self._parse_street('flop') self._parse_street('turn') self._parse_street('river') self._parse_showdown() self._parse_pot() self._parse_board() self._parse_winners() self._parse_extra() self.parsed = True So the concrete parser need to define these methods in order in any way they want. Easy to follow, but takes longer to implement each individual concrete parser. So what about declarative? In this case Base classes (like SplittableFormat and XMLFormat) would do the heavy lifting based on regex and line/node number declarations in the concrete class, and concrete classes have no code at all, just line numbers and regexes, maybe other kind of rules. Like this: class SplittableFormat: def parse_table(): "Parses TABLE_REGEX and get information" # set attributes here def parse_players(): "parses PLAYER_REGEX and get information" # set attributes here class SpecificFormat1(SplittableFormat): TABLE_REGEX = re.compile('^(?P<table_name>.*) other info \d* etc') TABLE_LINE = 1 PLAYER_REGEX = re.compile('^Player \d: (?P<player_name>.*) has (.*) in chips.') PLAYER_LINE = 16 class SpecificFormat2(SplittableFormat): TABLE_REGEX = re.compile(r'^Tournament #(\d*) (?P<table_name>.*) other info2 \d* etc') TABLE_LINE = 2 PLAYER_REGEX = re.compile(r'^Seat \d: (?P<player_name>.*) has a stack of (\d*)') PLAYER_LINE = 14 So if I want to make it possible for non-developers to write these classes the way to go seems to be the declarative way, however, I'm almost certain I can't eliminate the declarations of regexes, which clearly needs (senior :D) programmers, so should I care about this at all? Do you think it matters to choose one over another or doesn't matter at all? Maybe if somebody wants to work on this project, they will, if not, no matter which idiom I choose. 
Can I "convert" non-programmers to help developing these? What are your observations? Other considerations: Imperative will allow any kind of work; there is a simple flow, which they can follow but inside that, they can do whatever they want. It would be harder to force a common interface with imperative because of this arbitrary implementations. Declarative will be much more rigid, which is a bad thing, because formats might change over time without any notice. Declarative will be harder for me to develop and takes longer time. Imperative is already ready to release. I hope a nice discussion will happen in this thread about programming idioms regarding which to use when, which is better for open source projects with different scenarios, which is better for wide range of developer skills. TL; DR: Parsing different file formats (plain text, XML) They contains same kind of information Target audience: non-developers, beginners Regex probably cannot be avoided 30-40 concrete parser classes needed Facilitate coding these concrete classes Which idiom is better?

    Read the article

  • Designing an email system to guarantee delivery

    - by GlenH7
    We are looking to expand our use of email for notification purposes. We understand it will generate more inbox volume, but we are being selective about which events we fire notification on in order to keep the signal-to-noise ratio high. The big question we are struggling with is designing a system that guarantees that the email was delivered. If an email isn't delivered, we will consider that an exception event that needs to be investigated. In reality, I say almost guarantees because there aren't any true guarantees with email. We're just looking for a practical solution to making sure the email got there and experiences others have had with the various approaches to guaranteeing delivery. For the TL;DR crowd - how do we go about designing a system to guarantee delivery of emails? What techniques should we consider so we know the emails were delivered? Our biggest area of concern is what techniques to use so that we know when a message is sent out that it either lands in an inbox or it failed and we need to do something else. Additional requirements: We're not at the stage of including an escalation response, but we'll want that in the future or so we think. Most notifications will be internal to our enterprise, but we will have some notifications being sent to external clients. Some of our application is in a hosted environment. We haven't determined if those servers can access our corporate email servers for relaying or if they'll be acting as their own mail servers. Base design / modules (at the moment): A module to assign tracking identification A module to send out emails A module to receive delivery notification (perhaps this is the same as the email module) A module that checks sent messages against delivery notification and alerts on undelivered email. Some references: Atwood: Send some email Email Tracking Some approaches: Request a response (aka read-receipt or Message Disposition Notification). Seems prone to failure since we have cross-compatibility issues due to differing mail servers and software. Return receipt (aka Delivery Status Notification). Not sure if all mail servers honor this request or not Require an action and therefore prove reply. Seems burdensome to force the recipients to perform an additional task not related to resolving the issue. And no, we haven't come up with a way of linking getting the issue fixed to whether or not the email was received. Force a click-through / Other site sign-in. Similar to requiring some sort of action, this seems like an additional burden and will annoy the users. On the other hand, it seems the most likely to guarantee someone received the notification. Hidden image tracking. Not all email providers automatically load the image, and how would we associate the image(s) with the email tracking ID? Outsource delivery. This gets us out of the email business, but goes back to how to guarantee the out-sourcer's receipt and subsequent delivery to the end recipient. As a related concern, there will be an n:n relationship between issue notification and recipients. The 1 issue : n recipients subset isn't as much of a concern although if we had a delivery failure we would want to investigate and fix the core issue. Of bigger concern is n issues : 1 recipient, and we're specifically concerned in making sure that all n issues were received by the recipient. How does forum software or issue tracking software handle this requirement? If a tracking identifier is used, Where is it placed in the email? In the Subject, or the Body?
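    As a purely illustrative sketch of the "tracking identifier plus Delivery Status Notification" pieces listed above (every address, host name, and field name is a placeholder, and the NOTIFY options only have an effect when the relaying SMTP server advertises the DSN extension):

        import smtplib
        import uuid
        from email.message import EmailMessage

        def send_tracked(subject: str, body: str, to_addr: str) -> str:
            """Send a notification carrying a tracking id and request a DSN for it."""
            tracking_id = str(uuid.uuid4())

            msg = EmailMessage()
            msg["From"] = "notifications@example.com"
            msg["To"] = to_addr
            msg["Subject"] = subject
            msg["X-Tracking-ID"] = tracking_id      # correlate later bounces/DSNs with this send
            msg.set_content(body + "\n\nref: " + tracking_id)

            with smtplib.SMTP("mail.example.com") as smtp:
                smtp.send_message(
                    msg,
                    rcpt_options=["NOTIFY=SUCCESS,FAILURE,DELAY"],  # RFC 3461 DSN request
                )
            return tracking_id  # persist this; mark it delivered when the matching DSN arrives

    The same identifier can be embedded in a tracking-pixel URL or a click-through link, which ties those options back to the n issues : 1 recipient bookkeeping described above.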

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background Story: One of my friends recently called up and asked me if I had spare time to look at his database and give him some performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and sample data. He said that since his database was still at a very early stage and quite small, he would rather I have the complete database. My response to him was “Sure! In that case, take a backup of the database and send it to me. I will restore it onto my computer and play with it.” He did send me his database; however, his method made me write this quick note here. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I had asked for a complete backup (I wanted to review file groups, files, as well as a few other details). Upon calling my friend, I found that he was not available. So he left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I can get in touch with him again.
    Technical Talk: If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulled plugs, machine crashes, or any other such reason), it is possible (though there is no guarantee) to attach only the .mdf file to the server. Please note that there can be many other reasons why a database will not attach or restore. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this, and I am listing all of them here. Before using any of them, please consult the domain expert in your company or industry. Also, never attempt this on a live/production server without the presence of a Disaster Recovery expert.
    USE [master]
    GO
    -- Method 1: I use this method
    EXEC sp_attach_single_file_db @dbname = 'TestDb',
        @physname = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf'
    GO
    -- Method 2: rebuild the missing log file(s) while attaching
    CREATE DATABASE TestDb ON
        (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
    FOR ATTACH_REBUILD_LOG
    GO
    With Method 2, if one or more log files are missing, they are recreated during the attach. There is one more method, shown below, which I have not used myself. According to Books Online, it will work only if there is a single log file missing; if there is more than one log file involved, all of them are required to undergo the same procedure.
    -- Method 3: works only when a single log file is missing
    CREATE DATABASE TestDb ON
        (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
    FOR ATTACH
    GO
    Please read Books Online in depth and consult DR experts before working on a production server. In my case, the syntax above worked fine because the database was clean when it was detached. Feel free to share your opinions and experiences; it will help the IT community learn from your suggestions and skills. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • They may block off Howard Street—but Oracle OpenWorld is a two-way street.

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Sr. Director, Oracle Accelerate for Midsize Companies
    “Engineered to Inform and Inspire”—that’s the theme of Oracle OpenWorld 2012. In early October, tens of thousands of attendees will descend on the streets of San Francisco because they share one thing in common: the desire to learn more about Oracle. You might think that’s the way we, Oracle employees, look at this event—as just another opportunity for attendees to learn about what we do. But it’s really a two-way street. Every year I’m amazed by how informed and inspired I am by our customers and their companies. Midsize companies buy Oracle to grow. As part of the Oracle Accelerate for Midsize Companies team I get to talk with our partners and business leaders at growing companies almost every day, usually via phone. Oracle OpenWorld presents the perfect opportunity to meet some of them in person, in an informal setting, and in one of the most beautiful cities in the world. The stories our customers tell me about their businesses provide vivid examples of how they have overcome the challenges of managing increasingly complex global operations and growing during uncertain economic conditions. It’s no secret that my favorite session at Oracle OpenWorld (besides Larry Ellison’s keynotes and the Customer Appreciation Event, of course) is the Oracle Accelerate Customer Panel. This year we’re featuring executives from three companies who deployed Oracle ERP rapidly to support their company’s growth:
    * Chris Powell, VP and Corporate Controller of Beats by Dr. Dre, a California-based designer and manufacturer of premium headphones (sorry, no free samples)
    * Iñaki Zuazo, CIO of Industrias Juno, a building materials provider based in Spain
    * Kamran Moosa, Project Coordinator for Spartan Engineering, a provider of engineering and construction support services for an LPG storage project in Texas
    That’s a pretty diverse lineup, and it will be interesting to hear the perspectives of both IT and financial project stakeholders. The session, “Oracle Accelerate Customer Case Studies: Rapid Deployment of Oracle Applications”, is at 3:30 pm on Wednesday, October 3, in the Concert room at the Palace Hotel. Oracle loves our hometown of San Francisco and it’s a great place to host Oracle OpenWorld. It’s now San Francisco’s largest conference and the city closes off Howard Street to better accommodate the attendees. Some Bay Area commuters may be inconvenienced for a few days by this closure, but the conference brings about $100 million into the local economy. Now that’s a two-way street.
    More Oracle Accelerate at Oracle OpenWorld:
    * “Faster, Better, Cheaper Application Deployment with Oracle Business Accelerators”, Monday, October 1, 10:45 a.m., Moscone West Room 3016
    * “Oracle Accelerate and Oracle Business Accelerators for Midsize Companies” (partners only), Wednesday, October 3, 10:15 a.m., Marriott – Golden Gate B
    * Visit the Oracle Accelerate and Oracle Business Accelerator Kiosk in the Moscone West Exhibit Grounds
    * Download the Focus On Oracle Accelerate for Midsize Companies document

    Read the article

  • Beginner Geek: Scan Files for Viruses Before Using Them

    - by Mysticgeek
    To help avoid getting your computer infected by malicious software, it’s a good idea to scan files before executing them. Today we take a look at a couple of options that will let you scan files easily from your desktop.
    Scan a File with Your Antivirus Software: Most antivirus software will put an option in the context menu so you can scan individual files. After downloading a file or email attachment, simply right-click the file and select the option to scan with your antivirus software. If you want to scan more than one file at a time, hold down the Ctrl key while clicking each file you want to scan, then right-click and select the option to scan with your antivirus software. Here is our favorite antivirus app, Microsoft Security Essentials, scanning a couple of files. If a virus is found, your antivirus app will delete it or put it in quarantine so it cannot infect your system.
    Using VirusTotal Uploader: If you want to be very thorough and get a second opinion (actually 41 of them), check out the VirusTotal Uploader. This handy app will scan your files with 41 different antivirus engines online. After installing VirusTotal Uploader, right-click the file, go to Send To, then VirusTotal. Alternatively, you can launch VirusTotal Uploader and upload the file from within the app. It will send the file to VirusTotal.com, scan it with 41 different antivirus engines, and show you the results. If you don’t want to install the Uploader, you can go to the VirusTotal site and upload a file from there to scan. We’ve noticed that occasionally there will be a false positive detected on files we know are clean. Sometimes the definition database of an anti-malware app isn’t current, or an obscure antivirus app will find something questionable. If that is the case, use your best judgment when viewing the results.
    Conclusion: Most antivirus apps today have real-time scanning and should be able to detect possible infections before you’re able to execute them. However, if they don’t, or when in doubt, following these tips can save you a lot of headaches in the long run. If you use a lot of different flash drives throughout the day, check out our article on how to scan a thumb drive for viruses from the AutoPlay dialog.
    Download Microsoft Security Essentials | Download VirusTotal Uploader | VirusTotal Website
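    For readers who would rather script this second-opinion check than use the desktop Uploader, here is a rough sketch of the same idea in Python: hash the downloaded file and ask VirusTotal whether it already has a report for it. This is not part of the article above; it assumes VirusTotal's public v2 HTTP API (the file/report endpoint with apikey and resource parameters) and a free API key, all of which should be verified against VirusTotal's current documentation.

        # Rough sketch: look up an existing VirusTotal report for a file by hash
        # before running it. Assumes the public v2 API (file/report endpoint,
        # 'apikey' and 'resource' parameters); endpoint, parameters, and rate
        # limits should be checked against VirusTotal's docs. YOUR_API_KEY and
        # the file name below are placeholders.
        import hashlib
        import json
        import urllib.parse
        import urllib.request

        API_KEY = "YOUR_API_KEY"                                        # placeholder
        REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"  # assumed endpoint

        def sha256_of(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        def check_file(path):
            params = urllib.parse.urlencode({"apikey": API_KEY, "resource": sha256_of(path)})
            with urllib.request.urlopen(REPORT_URL, data=params.encode()) as resp:
                report = json.loads(resp.read().decode())
            if report.get("response_code") == 1:
                print(f"{path}: flagged by {report['positives']} of {report['total']} engines")
            else:
                print(f"{path}: no report yet; upload it via the VirusTotal site or the Uploader")

        check_file("downloaded-installer.exe")  # hypothetical file name

    Checking by hash first avoids re-uploading files the service has already analyzed; only genuinely new files would need the full upload step.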

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >