Search Results

Search found 6528 results on 262 pages for 'appfabric cache'.


  • VBUG Spring Conference, 28th and 29th March in Reading

    - by Eric Nelson
    I presented at VBUG last year and can confirm that they put on a really good event. This year I stood aside for my “replacement” Steve Plank to work his magic. Worth checking out…
    VBUG SPRING CONFERENCE, 28/29 March 2011, Wokefield Park, Mortimer, Reading RG7 3AH
    Day One (Mon 28 March):
    Developing SharePoint 2010 with Visual Studio 2010 - Dave McMahon
    Cache Out with Windows Server AppFabric – Phil Pursglove
    Extending your Corporate Network in to the Windows Azure Data Centre with Windows Azure Connect – Steve Plank
    Silverlight Development on Windows Phone 7 - Andy Wigley
    Day Two (Tues 29 March):
    Self Service BI for your users, but what does that mean for you? - Andrew Fryer
    Design Patterns – Compare and Contrast – Gary Short
    Projecting your corporate identity to the cloud – Steve Plank
    May the Silverlight 4 be with you – Richard Costall
    The Step up to ALM – an Introduction to Visual Studio 2010 TFS for the Visual Sourcesafe User - Richard Fennell
    For more information go to http://cms.vbug.net (it isn’t free, but it is high quality)

    Read the article

  • FREE Windows Azure Platform Compute and Storage through the Cloud Essentials Pack for Partners

    - by Eric Nelson
    It can be difficult to find something to look forward to in January – but this year it was a little easier, as a) I got lots of great Xbox 360 games and b) the Windows Azure Platform element of the Cloud Essentials Pack for Microsoft Partner Network partners went live. I have previously explained what the Cloud Essentials Pack is and how you can access it – but at the time I couldn’t share the details of the Windows Azure Platform element. That element is now available. It gives you each month, for FREE:
    Windows Azure: 750 hours of extra small compute instance, 25 hours of small compute instance, 3GB of storage and 250,000 storage transactions
    SQL Azure: 1 SQL Azure Web Edition database (5GB)
    Windows Azure AppFabric: 100,000 Access Control transactions and 2 Service Bus connections
    Plus: Data Transfer: 3GB in and 6GB out
    (More details of the offer)
    To activate this offer you need to:
    Sign your company up to Microsoft Platform Ready (NB: there are other routes to get this benefit – but I know about MPR)
    Read about Microsoft Platform Ready
    Visit http://www.microsoftcloudpartner.com/ and sign up.

    Read the article

  • I used a 301 Permanent Redirect to a 3rd party site by mistake! Can I stop the redirection?

    - by Dees
    Oh Noes! I've been parking a domain name for a friend/client of mine on my hosting provider (Dreamhost, FWIW) for a while, and they eventually asked me to redirect their domain to a 3rd party website which is currently featuring some relevant promotional content. Once this period ends, we will probably go ahead and set up a proper website for the domain on my hosting account. I used Dreamhost's "redirect" hosting option in their domain configuration panel, not realizing that it would implement a 301 Permanent redirect, or what the implications were. Now it seems that for any client that has visited the site anytime recently, the 301 redirect is still cached/in effect, although I have changed the domain settings back to regular Dreamhost full site hosting. It seems that the only thing that can be done is to wait out the TTL/cache expiration for the redirect. I have no idea how long that might be, so I'm wondering if there is any good way to cache-bust the redirect or otherwise undo its long-term effects. I put a simple html meta refresh in the domain folder to replace the 301 to keep the intended functionality in place, but I'm still not able to access the domain's other content normally, even via FTP, etc. Isn't there anything I can do? Otherwise, how long does it take for a cached redirect to expire? It's gonna be a bummer if it's really permanent.

    Read the article

  • Software design of a browser-based strategic MMO game

    - by Mehran
    I wonder if there are any known, tested software designs for Travian-like browser-based strategic MMO games? I mean, how would they implement the server for such games: what is stored in the database and what is stored in RAM? Is the state of the world stored in one piece, or is it distributed among a number of stores? Does anyone know a resource to study the problems and solutions of creating such games?
    [UPDATE] As suggested in the comments, I'm going to give an example of how I would design such a project, even though I'm not sure it's the right approach. Having stored the world state in MongoDB, I would add an event collection in which all changes to the world are registered. Changes that are meant to happen in the future come with an action date set to the future, and those to be carried out immediately are set to now. With this datastore as the central point of the system, players issue their actions as events inserted into the datastore. At the other end of the system, a constantly running worker takes events out of the datastore that are due to be carried out and not yet done. Executing an event means applying some update to the world's state, and thus to the datastore. As scalable as this design sounds, I'm not sure it is worth implementing. For one, it is pointless to cache the datastore, as most updates happen once without any follow-ups. For instance, if you have resource growth in your game, you'll be updating the whole world state periodically, in which case, having incorporated a cache, you are keeping the whole world in RAM (which most likely is impossible). So can someone come up with a better design?
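
    For illustration, here is a minimal C# sketch of the worker loop described above. The WorldEvent type, the IEventStore interface and its method names are assumptions made purely for this sketch – the post itself proposes MongoDB as the actual store, which is abstracted away here:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        // Hypothetical shape of a scheduled world change, as described in the post.
        public class WorldEvent
        {
            public Guid Id { get; set; }
            public DateTime ActionDate { get; set; }   // "now" for immediate actions, a future time for delayed ones
            public string Payload { get; set; }        // description of the change to apply to the world state
        }

        // Hypothetical abstraction over the event collection (MongoDB in the original design).
        public interface IEventStore
        {
            IEnumerable<WorldEvent> TakeDueEvents(DateTime now);  // events due to run and not yet executed
            void MarkDone(Guid eventId);
        }

        public class EventWorker
        {
            private readonly IEventStore _store;

            public EventWorker(IEventStore store) { _store = store; }

            // Constantly running loop: pull due events and apply them to the world state.
            public void Run(CancellationToken token)
            {
                while (!token.IsCancellationRequested)
                {
                    foreach (var evt in _store.TakeDueEvents(DateTime.UtcNow))
                    {
                        ApplyToWorldState(evt);   // update the persistent world state
                        _store.MarkDone(evt.Id);
                    }
                    Thread.Sleep(100);            // avoid a hot spin when nothing is due
                }
            }

            private void ApplyToWorldState(WorldEvent evt)
            {
                // Apply the change described by the event to the datastore-backed world state.
            }
        }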

    Read the article

  • Access Control Management Tool ACM.exe

    - by kaleidoscope
    The Access Control Management Tool (Acm.exe) is a command-line tool you can use to perform management operations (CREATE, UPDATE, GET, GET ALL and DELETE) on the AppFabric Access Control entities (scopes, issuers, token policies and rules).
    Basic Syntax
    The command line for Acm.exe follows a basic verb-noun pattern. For example:
    acm.exe <command> <resource> [-option:<option value>]
    The tool automatically generates random keys, which helps ensure that they can't easily be guessed by an attacker.
    Note that ACM.exe is a thin wrapper around a REST web service (the Access Control management service). That makes the commands it accepts easy to remember – they are the typical resource-management commands of a REST service:
    · Get(All)
    · Create
    · Update
    · Delete
    The ACM.exe.config file can be used to configure the Host, Service and Management key for a Service Namespace.
    Geeta, G
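
    As a purely illustrative instance of the verb-noun pattern above – assuming the host, service namespace and management key are supplied via ACM.exe.config as noted, so no extra options are needed on the command line – listing the issuers or token policies in a namespace might look like:

        acm.exe getall issuer
        acm.exe getall tokenpolicy

    The resource names here simply follow the entity names listed above and may differ slightly in the actual tool; create, update and delete take further options describing the entity, and their exact switch names are not covered in this snippet, so they are omitted here.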

    Read the article

  • Integrating Azure ServiceBus and SharePoint 2010

    - by Sahil Malik
    SharePoint 2010 Training: more information
    My new article is finally online – I had been waiting for this for a while. The thing is, AppFabric moved to .NET 4 and left SharePoint 2010 behind. But fear not, we have the REST API. That brings up interesting challenges: how to integrate the Azure Service Bus with SharePoint 2010 (yes, 2010, not vNext – I’m not giving out NDA information, you fool), which design patterns to use, and how to work through issues like security, sessions and overall application design. Well, I hope you like my next article, SharePoint Applied: Azure ServiceBus and SharePoint 2010. Enjoy! Read full article ....

    Read the article

  • Decorator not calling the decorated instance - alternative design needed

    - by Daniel Hilgarth
    Assume I have a simple interface for translating text (sample code in C#):
    public interface ITranslationService
    {
        string GetTranslation(string key, CultureInfo targetLanguage);
        // some other methods...
    }
    A first simple implementation of this interface already exists and simply goes to the database for every method call. Assuming a UI that is translated at start-up, this results in one database call per control. To improve this, I want to add the following behavior: as soon as a request for one language comes in, fetch all translations for this language and cache them; all translation requests are then served from the cache.
    I thought about implementing this new behavior as a decorator, because all other methods of the interface implemented by the decorator would simply delegate to the decorated instance. However, the implementation of GetTranslation wouldn't use GetTranslation of the decorated instance at all to get all translations of a certain language – it would fire its own query against the database. This breaks the decorator pattern, because the functionality provided by the decorated instance is simply skipped. This becomes a real problem if there are other decorators involved. My understanding is that a decorator should be additive; in this case, however, the decorator replaces the behavior of the decorated instance. I can't really think of a nice solution for this – how would you solve it? Everything is allowed, even a complete redesign of ITranslationService itself.
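
    For reference, a minimal sketch of the caching variant being described – note that it depends on a bulk-loading repository rather than on the decorated ITranslationService, which is exactly why it is not a true decorator. The ITranslationRepository interface, its GetAllTranslations method and the fall-back-to-key behaviour are assumptions made for this sketch, not part of the original question:

        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Globalization;

        // Hypothetical bulk-load dependency: fetches every translation of one language in a single query.
        public interface ITranslationRepository
        {
            Dictionary<string, string> GetAllTranslations(string cultureName);
        }

        public class CachingTranslationService : ITranslationService
        {
            private readonly ITranslationRepository _repository;
            private readonly ConcurrentDictionary<string, Dictionary<string, string>> _cache =
                new ConcurrentDictionary<string, Dictionary<string, string>>();

            public CachingTranslationService(ITranslationRepository repository)
            {
                _repository = repository;
            }

            public string GetTranslation(string key, CultureInfo targetLanguage)
            {
                // On the first request for a language, load and cache all of its translations.
                var translations = _cache.GetOrAdd(
                    targetLanguage.Name,
                    cultureName => _repository.GetAllTranslations(cultureName));

                string value;
                return translations.TryGetValue(key, out value) ? value : key;
            }

            // Any other ITranslationService members would delegate or be implemented similarly.
        }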

    Read the article

  • Ubuntu 12.04 upgrade and thunderbird

    - by Dcm1405
    After applying the suggested updates (179), an error message at the very end of the process suggested that I run apt-get install -f. Since it is a fairly new Ubuntu install (x86), I haven't set up anything in Thunderbird yet. The -f process generated different error messages (see details):
    ~$ sudo apt-get install -f
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      thunderbird
    Suggested packages:
      latex-xft-fonts
    The following packages will be upgraded:
      thunderbird
    1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    Need to get 0 B/20.8 MB of archives.
    After this operation, 594 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    (Reading database ... 170457 files and directories currently installed.)
    Preparing to replace thunderbird 11.0.1+build1-0ubuntu2 (using .../thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb) ...
    Unpacking replacement thunderbird ...
    dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: invalid code lengths set'
    dpkg-deb: error: subprocess <decompress> returned error exit status 2
    dpkg: error processing /var/cache/apt/archives/thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb (--unpack):
     short read on buffer copy for backend dpkg-deb during `./usr/lib/thunderbird/libxul.so'
    Errors were encountered while processing:
     /var/cache/apt/archives/thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • compile error in Ubuntu 10

    - by yozloy
    Hey guys, I have a VPS which runs SolusVM. I'm now trying to install Ruby 1.9.2 on it. I followed this guide; after I ran these commands:
    apt-get update
    apt-get -y install build-essential zlib1g zlib1g-dev libxml2 libxml2-dev libxslt-dev
    I got the error below:
    root@makserver:/usr/local/src/ruby-1.9.2-p0# apt-get -f install
    Reading package lists... Done
    Building dependency tree... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      libc6
    Suggested packages:
      glibc-doc
    The following packages will be upgraded:
      libc6
    1 upgraded, 0 newly installed, 0 to remove and 80 not upgraded.
    Need to get 0B/4252kB of archives.
    After this operation, 4096B disk space will be freed.
    Do you want to continue [Y/n]? y
    debconf: apt-extracttemplates failed: Bad file descriptor
    (Reading database ... 21594 files and directories currently installed.)
    Preparing to replace libc6 2.11.1-0ubuntu7.2 (using .../libc6_2.11.1-0ubuntu7.8_amd64.deb) ...
    open2: fork failed: Cannot allocate memory at /usr/share/perl5/Debconf/ConfModule.pm line 59
    dpkg: error processing /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
     subprocess new pre-installation script returned error exit status 12
    Errors were encountered while processing:
     /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    Can anybody tell me how I can correct this? Thanks.

    Read the article

  • How to associate all file types within Wine with its corresponding native application?

    - by MestreLion
    This is easily done for a single file type, as answered in "How to associate a file type within Wine with a native application?", by creating a .reg file for the desired file type. But that covers AVI only. I use some Wine apps (uTorrent, Soulseek, Eudora, to name a few) that can launch a wide range of files. Email attachments, for example, can be JPG, DOC, PDF, PPS... it's impossible (and not desirable) to track down every possible file type that one may receive in an email or download in a torrent. So I need the solution to be more generic and broad: I need the file association to honor whatever native app is currently configured, and I want this done for all file types configured on my system.
    I've already figured out how to make the solution generic – simply replace the launched app in the .reg with winebrowser, like this:
    [HKEY_CLASSES_ROOT\.pdf]
    @="PDFfile"
    "Content Type"="application/pdf"
    [HKEY_CLASSES_ROOT\PDFfile\Shell\Open\command]
    @="C:\\windows\\system32\\winebrowser.exe \"%1\""
    I've tested this and it works correctly. Since winebrowser uses xdg-open as a backend and converts my Windows path to a Unix one, the correct (Linux) app is launched. So I need a "batch" updater for Wine's registry – a sort of wine-update-associations script that I can run whenever a new app is installed. Maybe a tool that can:
    List all MIME types in my system that have a default, installed app associated
    Extract all the needed info (glob, MIME type, etc.)
    Generate the .reg file in the above format
    The tricky part: I've searched a LOT for info about how association is done in Ubuntu 10.10 onwards, and documentation is scarce and confusing, to say the least. Freedesktop.org has no complete spec, and even the GNOME docs are obsolete. So far I've gathered four files that contain association info, but I'm clueless about which (or why) to use, or how to use them to generate the .reg file:
    ~/.local/share/applications/mimeapps.list
    ~/.local/share/applications/mimeinfo.cache
    /usr/share/applications/mimeinfo.cache
    /etc/gnome/defaults.list
    Any help, script or explanation would be greatly appreciated! Thanks!

    Read the article

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) on the Cache-Control headers of static assets such as logo images. Examples: YouTube, Yahoo, Twitter, the BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...)
    What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason.
    Edit: Please make sure you understand these points before answering:
    Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :)
    Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which is nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)

    Read the article

  • GPG Invalid Signature

    - by user46421
    I am having problems with the following (in an attempt to remove hyperlinks, I have removed one of the "/" from the addresses):
    W: GPG error: http://archive.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
    W: GPG error: http://ppa.launchpad.net oneiric Release: The following signatures were invalid: BADSIG B725097B3ACC3965 Launchpad lffl
    W: GPG error: http://ppa.launchpad.net oneiric Release: The following signatures were invalid: BADSIG 4874D3686E80C6B7 Launchpad PPA for Banshee Team
    W: GPG error: http://archive.getdeb.net jaunty-getdeb Release: The following signatures were invalid: BADSIG A8A515F046D7E7CF GetDeb Archive Automatic Signing Key <[email protected]>
    W: GPG error: http://badgerports.org lucid Release: The following signatures were invalid: BADSIG C90F9CB90E1FAD0C Jo Shields <[email protected]>
    W: GPG error: http://ppa.launchpad.net oneiric Release: The following signatures were invalid: BADSIG 976B5901365C5CA1 Launchpad PPA for transmissionbt
    W: Failed to fetch http://ppa.launchpad.net/dlecan/openjdk/ubuntu/dists/oneiric/main/source/Sources 404 Not Found
    W: Failed to fetch http://ppa.launchpad.net/dlecan/openjdk/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found
    W: Failed to fetch http://ppa.launchpad.net/sevenmachines/flash/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found
    W: Failed to fetch http://ppa.launchpad.net/sun-java-community-team/sun-java6/ubuntu/dists/oneiric/main/source/Sources 404 Not Found
    W: Failed to fetch http://ppa.launchpad.net/sun-java-community-team/sun-java6/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found
    I have tried the following solutions, which were in a closed case titled "The following signatures were invalid":
    First of all try
    sudo apt-get clean
    sudo apt-get update && sudo apt-get upgrade
    Some ISPs cache the packages and errors like these are reported then. If the above commands don't work, try
    sudo apt-get update -o Acquire::http::No-Cache=True
    and again
    sudo apt-get update && sudo apt-get upgrade
    If it still doesn't work,
    sudo apt-get update -o Acquire::BrokenProxy=true
    sudo apt-get update && sudo apt-get upgrade

    Read the article

  • Fix for poor HD playback on 11.04 upwards

    - by mark kirby
    Hi guys, I've seen loads of posts on this site about poor 720p/1080p playback in recent Ubuntu versions. I had this problem and fixed it, so I thought I'd share it with everyone:
    1. Install mplayer.
    2. Install the SMPlayer front end (in the Software Center).
    3. Open SMPlayer.
    4. Go to "Options", then "Preferences", then "General".
    5. If you have an Nvidia card, choose "Output driver" and select "VDPAU". (For ATI or AMD choose xv (0 - ATI Radeon AVIVO video). I don't know if this will work, as my card is Nvidia, but it should.)
    6. Go to "Performance" on the left-hand side and set both local and streaming cache to 99999 (this may also fix DVD playback if you set that cache as well).
    7. Check the box for "Allow hard frame drop" and set "Loop filter" to skip only on HD.
    8. Set the "Threads for decoding" option to the number of cores your CPU has – if you have more than one CPU, add up all the cores for best performance.
    9. Enjoy your HD movies again on Ubuntu.
    I have a pretty average machine; here's my spec:
    2x Pentium 4 HT, 3 GHz
    Stock Dell power supply and motherboard
    GeForce 310 with HDMI
    24-inch full-HD TV as a monitor
    So anyone with a dual-core CPU should have no problem getting this to work. Hope this helps someone out.

    Read the article

  • StyleCop 4.7.37.0 has been released

    - by TATWORTH
    StyleCop 4.7.37.0 has been released at http://stylecop.codeplex.com/releases/view/79972
    The release notes follow:
    Add docs for new SA1650 spelling rule.
    Fix for 7395. Don't remove parentheses around await expressions.
    Insert a returns element into docs within a see element.
    Update our tools folder StyleCop DLLs.
    Fix for 7392. Insert generic type docs for return types correctly.
    Fix for 7393. Allow documentation elements with attributes to end the string and still be valid.
    Make sure the MSBuild task logs the warning id and type of exception. Unless the description field holds all this info, VS cannot show the text in the Error List.
    Load custom dictionaries for multiple cultures. For a culture like en-GB, we load CustomDictionary.xml, then look for CustomDictionary.en-GB.xml and then CustomDictionary.en.xml.
    Update standard shipping dictionaries.
    Element documentation spelling fixes.
    Reduce the standard dictionary.
    Update our own devbuild StyleCop checks.
    Don't check spelling of XML documentation attributes or anything inside <c> or <code> elements.
    Update styling.
    Styling update.
    Add timestamps for all the dependent files into the StyleCopResults.cache.
    Add a FileSystemWatcher to all custom dictionary files.
    Write out the full violation into the StyleCopResults.cache.
    Change a rule's description text.
    Styling fixes.
    Styling fixes.
    NEW RULE: Check Spelling Of Element Documentation. Fix over 2000 spelling errors in our source code. Update the VS add-in to show the rule violation in more detail. Add spelling checker to the deployment.
    Set our own Culture to en-US.
    Documentation spelling fixes.
    First draft of the documentation spelling checker.
    Fix for 7325. Don't throw 1126 in goto statements.
    Fix for 7090. Add TargetsDir to registry during install.
    Fix for 7060. Sort usings after moving them inside namespace.
    Fix FxCop issues.
    Fix for 7389. Detect CpuCount on Unix/Mac.
    Fix for 6788. Allow opening curly brackets for scope. Added new tests.
    Updating constants.
    Fix for 7167. Show version number of StyleCop in VS Help window.
    Only output StyleCop excluded files if there are any.

    Read the article

  • Video lags/freezes in SMPlayer and VLC

    - by RanRag
    When I try to play my video files in SMPlayer it works fine, but as soon as I switch to fullscreen mode (16:9) the following things happen:
    1) Video starts lagging.
    2) Audio and video go out of sync.
    3) CPU usage rises to ~50%.
    4) SMPlayer starts to hang.
    My current SMPlayer configuration:
    1) Video output driver = x11 (slow)
    2) Audio output driver = alsa (0.0-HDA Intel)
    3) Cache = 8192 KB
    4) Threads for decoding (MPEG-1/2 and H.264 only) = 2
    Things I tried to solve this problem:
    1) Changing the video output driver to xv or gl.
    2) Changing the audio output driver to pulse.
    3) Increasing the cache size, and also using nocache.
    Everything works fine on Windows, but I don't want to switch to Windows just to play video files.
    My system config:
    Acer Aspire One D270, Atom N2600 (Cedar Trail) 1.6 GHz, 2 GB memory, Intel GMA 3600 graphics.
    Ubuntu 12.04, kernel release 3.2.0-23-generic-pae
    Everything else works fine – I have no resolution issues, and Bluetooth and wireless also work fine. Just ask me for any other log file and I will be happy to post it.
    SMPlayer log
    MPlayer terminal output
    Codec information (currently playing file):

    Read the article

  • Why does video playback lag/freeze when I go into full-screen mode?

    - by RanRag
    When I try to play my video files in SMPlayer it works fine, but as soon as I switch to fullscreen mode (16:9) the following things happen:
    1) Video starts lagging.
    2) Audio and video go out of sync.
    3) CPU usage rises to ~50%.
    4) SMPlayer starts to hang.
    My current SMPlayer configuration:
    1) Video output driver = x11 (slow)
    2) Audio output driver = alsa (0.0-HDA Intel)
    3) Cache = 8192 KB
    4) Threads for decoding (MPEG-1/2 and H.264 only) = 2
    Things I tried to solve this problem:
    1) Changing the video output driver to xv or gl.
    2) Changing the audio output driver to pulse.
    3) Increasing the cache size, and also using nocache.
    Everything works fine on Windows, but I don't want to switch to Windows just to play video files.
    My system config:
    Acer Aspire One D270, Atom N2600 (Cedar Trail) 1.6 GHz, 2 GB memory, Intel GMA 3600 graphics.
    Ubuntu 12.04, kernel release 3.2.0-23-generic-pae
    Everything else works fine – I have no resolution issues, and Bluetooth and wireless also work fine. Just ask me for any other log file and I will be happy to post it.
    SMPlayer log
    MPlayer terminal output
    Codec information (currently playing file):

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library used (OpenGL/DirectX).
    1st question: the model should not know anything about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:
    class Model {
    public:
        Model();
        Vertex* GetVertices() const;
    private:
        Vertex* m_vertices;
    };
    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?
    2nd question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:
    model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);
    and then checking each flag in the Renderer::DrawModel method. But it looks like this will become unwieldy as the number of options grows...

    Read the article

  • So, BizTalk 2010 Beta is out … wait, no it’s not … wait

    - by Enrique Lima
    Over the last couple of days we have seen posts and “rumors” about the Beta's availability. There was a link to the bits on the Download Center, but then it was gone. Documentation for it is available now:
    BizTalk Server 2010 Documentation – Beta
    Microsoft BizTalk Server 2010 ESB Toolkit Documentation – Beta
    BizTalk RFID Server 2010 and BizTalk RFID Mobile 2010 Documentation – Beta
    But what about the bits?!? From the BizTalk Server team blog:
    “We will be announcing the public Beta of BizTalk Server 2010 at the Application Infrastructure Virtual Launch tomorrow (Thursday, May 20th, 2010 at 8:30 AM PST) with planned RTM in Q3 of 2010. BizTalk Server 2010 aligns with the latest Microsoft platform releases, including SQL Server 2008 R2, Visual Studio 2010 and SharePoint 2010, and will integrate with Windows Server AppFabric and with .NET 4. At this virtual launch event we will disclose details on new features and capabilities in BizTalk Server 2010 through presentations, whitepapers, videos and recorded demos. Please join us tomorrow for an exciting launch! The BizTalk Team”
    Keep your eyes and ears at the ready.

    Read the article

  • First Shard for SQL Azure and SQL Server

    - by Herve Roggero
    That's it!!!!! It's ready to go and be tested, abused and improved! It requires .NET 4.0 and uses some cool technologies, like caching (the new System.Runtime.Caching) and the Task Parallel Library (System.Threading.Tasks). With this library you can:
    Define a shard of 1, 2 or 100 SQL databases (a mix of SQL Server and SQL Azure)
    Read from the shard in parallel or sequentially, and cache result sets
    Update or delete a record from the shard
    Insert records quickly into the shard with a round-robin load
    Reset the cache
    You can download the source code and a sample application here: http://enzosqlshard.codeplex.com/
    Note about the breadcrumbs: I had to add a connection GUID in order for the library to know which database a record came from. The GUID is currently calculated on the fly in the library, using some of the parameters of the connection string. The GUID is also dynamically added to the result set so the client can pass it back to the library. I am curious to get your feedback on this approach.
    ** Correction from my previous post: this is a library for a Horizontal Partition Shard (HPS): tables are split across databases horizontally, so in essence the tables need to have the same schema across the databases.

    Read the article

  • Switching mdadm to an external bitmap

    - by Oli
    I've just read this in another post about improving RAID5/6 write speeds:
    "After increasing stripe cache & switching to external bitmap, my speeds are 160 Mb/s writes, 260 Mb/s reads. :-D"
    I've already found out how to increase the stripe cache, and that worked pretty well, but I'd like to know more about external bitmaps. I have an incredibly fast (540 MB/s) RAID0 SSD that would serve well if a bitmap does what I think it does, but I'm still very unsure – I've only known about them as long as I've known that post. A few questions:
    What is a bitmap (in terms of mdadm)?
    What are the advantages of an internal bitmap (over external)?
    What are the advantages of an external bitmap (over internal)?
    How do I switch between the two?
    I should add that while this is an I'm-bored-let's-break-something thread, I do value the data stored on the RAID array. If doing this is going to put the data at significant risk, please let me know.
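
    On the last question, a hedged sketch of how the switch is typically made with mdadm --grow (the device name /dev/md0 and the bitmap file path are examples only – check man mdadm before running anything against a live array, and note that an external bitmap file must not live on the array it describes):

        sudo mdadm --grow /dev/md0 --bitmap=none                  # remove the existing (internal) bitmap first
        sudo mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0.bitmap   # re-add it as an external file, e.g. on the SSD

    Switching back to an internal bitmap is the same two steps, with --bitmap=internal in the second command.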

    Read the article

  • Removing spam external links after pharma hack?

    - by Beatchef
    Back in February my work's site was attacked by a pharma hack at the shared-hosting end. I managed to find the planted file and the reference that ran it in one of our files. I deleted that file, deleted and re-downloaded all of the plugins and themes, and reinstalled WordPress. However, I could never find the database entries, no matter what I read up on – searching for known entries, for drug names backwards, etc. On the Google and Bing end I have managed to deny and delete the entries and cache of most, if not all, of the bad links that the hack managed to instantly SEO to death (why don't these guys work legit and make more money?).
    However, the one thing remaining is external links on the homepage that are invisible except when the site is viewed in Google's cache or scanned with unmaskparasites.com (which says the external links are safe even though they're obviously not!):
    http://www.UnmaskParasites.com/security-report/?page=kmcharityteam.co.uk
    All sorts of website scans say there's nothing wrong with the site, and I can't find the source of the links in the header or footer or anywhere in the theme. I've searched for the links in the database too, but no luck there either, and they change every day, so really I'd have to be looking for a generator? Does anybody have any advice or a solution for removing these links? Thanks!

    Read the article

  • How can I instruct nautilus to pre-generate PDF thumbnails?

    - by Glutanimate
    I have a large library of PDF documents (papers, lectures, handouts) that I want to be able to navigate through quickly. For that I need thumbnails. At the same time, however, I see that the ~/.thumbnails folder is piling up with thumbs I don't really need, and deleting the thumbnail junk without removing the important thumbs is impossible. If I were to delete them all, I'd have to go to each and every folder with important PDF documents and let the thumbnail cache regenerate. I would love to be able to automate this process. Is there any way I can tell Nautilus to pre-cache the thumbs for a given set of directories?
    Note: I did find a set of bash scripts that appear to do this for pictures and videos, but not for any other documents. Maybe someone more experienced with scripting could adjust these for PDF documents, or at least point me in the right direction on what I'd have to modify for this to work with PDF documents as well.
    Edit: Unfortunately neither of the answers provided below works. See my comments below for more information. Is there anyone who can solve this?

    Read the article

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice-versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B.
    But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions.
    At the instruction-set level, a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled.
    I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of via traditional memory values. But TXPAUSE gives us a polite way to spin.

    Read the article

  • Kubuntu apt-get -f install error

    - by ShaggyInjun
    I am seeing an error while running apt-get -f install. Can somebody help me out?
    venkat@ubuntu:~/Downloads$ sudo apt-get -f install
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      libjack-jackd2-0
    Suggested packages:
      jackd2
    The following packages will be upgraded:
      libjack-jackd2-0
    1 upgraded, 0 newly installed, 0 to remove and 256 not upgraded.
    109 not fully installed or removed.
    Need to get 0 B/197 kB of archives.
    After this operation, 3,072 B of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    (Reading database ... 274641 files and directories currently installed.)
    Preparing to replace libjack-jackd2-0 1.9.8~dfsg.1-1ubuntu1 (using .../libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb) ...
    Unpacking replacement libjack-jackd2-0 ...
    dpkg: error processing /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb (--unpack):
     './usr/share/doc/libjack-jackd2-0/buildinfo.gz' is different from the same file on the system
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
     /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that can not tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures).
    In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period:
    Reduce the size of each partition (by increasing the partition count)
    Increase the number of JVMs across the cluster (increasing the total number of primary service threads)
    Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed)
    Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves)
    Make sure that the backing map is as fast as possible (in most cases this means running on-heap)
    Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded)
    As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

    Read the article
