Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.

Page 46/1205 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • Mountable online storage (no syncing)

    - by Sam
    I have a Linux VPS that I would like to turn into a media server. Like most cheap VPSes, it has a fairly small storage capacity. What I would like to do is attach the box to an online backup system such as SpiderOak or Dropbox, where the files would reside and be directly accessible to either a webserver or media server software. Since the VPS's hard drive is small, I do not want the files to be synced to it. I would like a storage system that is online only, ideally mountable like a network drive. Are there any services that suit my needs, or workarounds for services such as SpiderOak that do not require syncing?

    Read the article

  • Bacula Director and Storage in LAN

    - by B14D3
    I have two networks, a LAN and a DMZ. Machines in the DMZ are accessible from the internet (only over HTTP). Servers in the LAN can see all LAN and all DMZ machines, but machines in the DMZ can't see any LAN servers. Machines in the LAN have access only to the LAN and the DMZ, with no direct access to the internet and no access from the internet. DMZ <------ LAN; DMZ ----X---> LAN. I'm planning to configure Bacula as our major backup system. My plan is to install the Bacula Director and Storage daemon on the same server in the LAN, for safety reasons. So my question is: will this configuration work? Is it possible for a Bacula Director and Storage daemon installed on a server in the LAN to back up servers that are in my DMZ? Or in this network configuration should Bacula be in the DMZ? (If yes, will I be able to back up servers in the LAN with it?)

    Read the article

  • Have both domain and non-domain users use NetApp CIFS storage

    - by zladuric
    My specific use case is that I have a NetApp CIFS storage system that's in the domain, say, intranet. But I also have one Hyper-V host that's not in the domain. I can't allow it into the domain, but I need to create guests whose VHDs are on the storage. How can I achieve this? When I simply connect to a CIFS share and create a VHD there, it works, but when I try to add that VHD to the virtual machine, I get a "Failed to set folder permissions" error - probably because the folder is owned by a domain admin account. Is there a way around this, so that both domain and non-domain users can use the same directory?

    Read the article

  • Intel Rapid Storage Technology service always crashes

    - by Massimo
    I'm running Windows 7 x64 on a system based on an Asus Z87-Deluxe motherboard; the storage is configured for RAID mode; there is a single SSD drive for the OS and two 4-TB disks in a RAID 1 setup for the data. I've installed the latest version of Intel's Rapid Storage Technology drivers, 12.8.0.1016. The program complains that its service is not running, and the service is indeed stopped; if I try to start it, it crashes. I've already tried reinstalling the package, but nothing changed. All the disks work correctly, but the RST program is unusable. How can I fix this?

    Read the article

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things. At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights. I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6, which doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on. I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD with netatalk for serving files, with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.

    Read the article

  • Is there a straightforward way to have a ThreadStatic instance member?

    - by Dan Tao
    With the ThreadStatic attribute I can have a static member of a class with one instance of the object per thread. This is really handy for achieving thread safety using types of objects that don't guarantee thread-safe instance methods (e.g., System.Random). It only works for static members, though. Is there some corresponding attribute that provides the same functionality, but for instance members? In other words, that allows me to have one instance of the object, per thread, per instance of the containing class?
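
    As far as I know there is no attribute equivalent for instance members, but here is a minimal sketch of the usual workaround, assuming .NET 4's ThreadLocal<T> is available (the DiceRoller type and the seeding strategy are illustrative): hold the non-thread-safe object in a ThreadLocal<T> instance field, which yields one lazily created value per thread, per instance of the containing class.

        using System;
        using System.Threading;

        public class DiceRoller
        {
            // One Random per thread *per DiceRoller instance*: because the
            // ThreadLocal<T> is an instance field, each DiceRoller carries its
            // own set of per-thread values.
            private readonly ThreadLocal<Random> _random =
                new ThreadLocal<Random>(() => new Random(Guid.NewGuid().GetHashCode()));

            public int Roll()
            {
                // .Value lazily creates the Random for the calling thread on first use.
                return _random.Value.Next(1, 7);
            }
        }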

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for the protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance: If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup as the Backup Type, you'll see that only those databases that are in Full Recovery will be listed. This makes it very easy to be sure you have a log backup set up for all the databases you should, and none of the databases where you won't be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks. You can see two in the screen above, and more in each of the screens in the other topics below this one. Clicking on these will open a window with additional information about the topic in question, which should help guide you through some of the tougher decisions you may have to make while setting up your backup jobs. Here's an example:

    Backup Copies: As a part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. After the backup completes (so it doesn't cause any additional blocking or resource use within the backup process), it will copy your backup to the network location you define. Creating a copy acts as a mechanism of protection for your backups. You can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get it done within the native Microsoft backup engine.

    Offsite Storage: Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard, or even the command line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but it's built right into the product to get this done. You'll be directed to the web site for our hosted storage, where you can set up an account.
    Compression: If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater with a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can sacrifice some CPU time and get even more compression. You decide. For a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53 MB. Our file was 33 MB. That's a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system. So for this test, if you wanted maximum compression with minimum CPU use, you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but uses fewer resources. And that compression is still better than the native one by 10%.

    Restore Testing: Backups are vital. But a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't do a complete test of the backup. The only complete test is a restore. So what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule (all done through our wizard, which gives you as much guidance as you get when running backups), you get the option of creating a reminder to set up a job to test your restores. You can enable or disable this as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide if you're going to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice) you can just overwrite the original database if it's there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run: which error messages to include, and whether to limit or add to the checks being run. With this you could offload some DBCC checks from your production system, so that you only run the physical checks on your production box but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, once the tests pass you can delete the database or leave it in place, or delete it regardless of whether the tests passed. All this is automated and scheduled through a SQL Agent job on your servers. Running your databases through this process will ensure that you don't just have backups, but that you have tested backups.
    Single Point of Management: If you have more than one server to maintain, getting backups set up can be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage the backups for all your databases and all your servers from a single location. You'll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard: If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary: You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get natively with SQL Server. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCon London was about a subject that I've come across fairly recently, when I was building SilverUnit – a "pure" unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine" – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be: your custom control running inside the Silverlight runtime in the browser, your plug-in running inside an IDE, your activity running inside a Windows Workflow, your code running inside a Java EE bean, your code inheriting from a COM+ (Enterprise Services) component, etc. Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime – they don't implement some Silverlight interface, they don't just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject.

    Wrapping it up? An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of those controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters: A perimeter is the invisible line that you draw around your pieces of logic during a test, separating the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods, etc.) that are actually replaced for that test or for all the tests.

    Role-based perimeters: In the case of creating a wrapper around an object, one really creates a "role-based" perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the first image below, we have the code we want to test represented as a star. No perimeter is drawn yet (we haven't wrapped it up in anything yet). The next image shows what happens when you wrap your logic with a role-based wrapper – you get a role-based perimeter anywhere your code interacts with the system. There's another way to bring that code under test – using isolation frameworks like Typemock, Rhino Mocks and Moq (though if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).
    Ad-hoc isolation perimeters: The image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior. The third way of isolating the code from the host system is the main "meat" of this post:

    Subterranean perimeters: Subterranean perimeters are deep-rooted perimeters – "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit, I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. The image below depicts an example of what such a perimeter could look like. As you can see, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams: disabling the constructor of a base class five levels below the code under test (this.base.base.base.base); faking static methods of a type that's being called several levels down the stack (method x() calls y(), which calls z(), which calls SomeType.StaticMethod()); replacing an async mechanism with a synchronous one (for example, replacing all timers with your own timer behavior that always ticks immediately upon calls to start(), on the same caller thread); replacing event mechanisms with your own event mechanism (to allow "firing" system events); changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all dependency property sets and gets with calls to an in-memory value store instead of using the one built into Silverlight, which threw exceptions without a browser). Several questions could jump in: How do you know what to fake (how do you discover the perimeter)? How do you fake it? Wouldn't it be problematic to fake something you don't own, since it might change in the future?

    How do you discover the perimeter to fake? To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens: Can I even create an instance of this object without getting an exception? Can I invoke this method on that instance without getting an exception? Can I verify that some call into the system happened? You make the invocation, get an exception (because there is a dependency), and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.
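
    A tiny illustration of the wishful-invocation step described above (the control type and the NUnit attributes are placeholders, not from the article): write the call you wish you could make, let it throw, and use the exception's stack trace to pick the next seam to disable.

        using System;
        using NUnit.Framework;

        // Placeholder standing in for a control that inherits from a host runtime type
        // (e.g. a Silverlight DependencyObject); outside the host its constructor throws.
        public class MyCustomControl
        {
            public MyCustomControl()
            {
                throw new InvalidOperationException("Host runtime not initialized");
            }

            public void Refresh() { }
        }

        [TestFixture]
        public class WishfulInvocationTests
        {
            [Test]
            public void Can_I_even_create_an_instance()
            {
                // The wishful invocation: when this throws, the stack trace points at
                // the deep-rooted call that needs to become a seam. Disable it, re-run,
                // and repeat until the invocation succeeds.
                var control = new MyCustomControl();
                control.Refresh();
            }
        }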

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the cloud is on-demand scalability, which lets cloud application developers scale up or scale down the number of compute resources hosted in the cloud. Auto Scaling provides the capability to dynamically scale up and scale down your compute resources based on user-defined policies, Key Performance Indicators (KPIs), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud-based solutions; it can unleash the real power of the cloud for providing truly on-demand scalability, and it can also guard the organizational budget for cloud-based application deployment. In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the autoscaling block in a Windows Azure Worker Role to apply the autoscaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure compute services such as Cloud Services, Web Sites and Virtual Machines. Unlike WASABi hosted in a Worker Role, you don't need to host any monitoring service to use the new Auto Scaling service, and the service is available to individual Windows Azure compute services as part of scaling.

    Configure Auto Scaling for a Windows Azure Cloud Service: Currently the Auto Scaling service supports Cloud Services, Web Sites and Virtual Machines. In this demo, I will be using a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal and choose the "SCALE" tab. The Scale tab shows all the information regarding Auto Scaling. The image below shows that we have currently disabled the AutoScale service. To enable Auto Scaling, you need to choose either CPU or QUEUE. The QUEUE option is not available for Web Sites. The next image demonstrates how to configure Auto Scaling for a Web Role based on CPU utilization. We have configured the web role app to run with 1 to 5 virtual machine instances based on CPU utilization, with a target range of 50 to 80%. If the aggregate utilization rises above 80%, it will scale up instances, and it will scale down instances when utilization falls below 50%. The following image demonstrates how to configure Auto Scaling for a Worker Role app based on the messages added to a Windows Azure storage queue. We configured the worker role app to run with 1 to 3 virtual machine instances based on the messages in the Windows Azure storage queue. Here we have specified the target number of messages per machine as 2000. The last image shows the summary of Auto Scaling for the Cloud Service after configuring the service.

    Summary: Auto Scaling is an extremely important behaviour of cloud applications for providing on-demand scalability without any manual intervention. Windows Azure provides greater support for enabling Auto Scaling for apps deployed on the Windows Azure cloud platform. The new Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure compute services such as Cloud Services, Web Sites and Virtual Machines. In the new Auto Scaling service, you don't have to host any monitoring service as you did with the WASABi block; the Auto Scaling service is an excellent alternative to manually hosting the WASABi block in a Worker Role app.

    Read the article

  • SQL SERVER – 2012 – List All The Column With Specific Data Types in Database

    - by pinaldave
    Five years ago I wrote the script SQL SERVER – 2005 – List All The Column With Specific Data Types; when I read it again, it is still very much relevant, and I liked it. This is one of those scripts which every developer would like to keep handy. I have upgraded the script a bit more. I have included a few additional pieces of information which I believe I should have added from the beginning. It is difficult to visualize the final script when we are writing it for the first time. I use every script which I write on this blog; as a matter of fact, I write only those scripts here which I was using at that time. It is quite possible that as time passes by my needs change and I change my script. Here is the updated script on this subject. If there are any user data types, it will list them as well. SELECT s.name AS 'schema', ts.name AS TableName, c.name AS column_name, c.column_id, SCHEMA_NAME(t.schema_id) AS DatatypeSchema, t.name AS Datatypename ,t.is_user_defined, t.is_assembly_type ,c.is_nullable, c.max_length, c.PRECISION, c.scale FROM sys.columns AS c INNER JOIN sys.types AS t ON c.user_type_id=t.user_type_id INNER JOIN sys.tables ts ON ts.OBJECT_ID = c.OBJECT_ID INNER JOIN sys.schemas s ON s.schema_id = t.schema_id ORDER BY s.name, ts.name, c.column_id I would be very interested to see your script which lists all the columns of the database with data types. If I am missing something in my script, I will modify it based on your comments. This way this page will be a good bookmark for the future for all of us. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Downloading specific video renditions in WebCenter Content

    - by Kyle Hatlestad
    I recently had a question come up on one of my previous blog articles about downloading a specific video rendition. When accessing image renditions, you simply need to pass in the 'Rendition=<rendition name>' parameter on the GET_FILE service and it will be returned. But when you try that with videos, you get the error message, "Unable to download '<Content ID>'. The rendition or attachment '<Rendition Name>' could not be found in the list manifest of the revision with internal revision ID '<dID>'." Through the interface, it exposes the ability to download, but it utilizes the Content Basket to bundle one or more videos and download them as a zip. I had never tried this with videos, but thought they worked the same way. Well, it turns out you need to pass in an extra parameter in the case of videos. So if you pass in a parameter of 'AuxRenditionType=media', that will allow the GET_FILE service to download the video (e.g. http://server/cs/idcplg?IdcService=GET_FILE&dID=11012&dDocName=WCCBASE9010812&allowInterrupt=1&Rendition=QuickTime&AuxRenditionType=media). And if you haven't seen the David After Dentist video, I'd highly recommend it!
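
    For completeness, a small sketch of calling GET_FILE programmatically with those parameters (the host name and dID come from the example URL above; the credentials are placeholders, since the content server normally requires authentication):

        using System.Net;

        class DownloadVideoRendition
        {
            static void Main()
            {
                // GET_FILE URL with the video-specific AuxRenditionType parameter added.
                string url = "http://server/cs/idcplg?IdcService=GET_FILE"
                           + "&dID=11012&dDocName=WCCBASE9010812&allowInterrupt=1"
                           + "&Rendition=QuickTime&AuxRenditionType=media";

                using (var client = new WebClient())
                {
                    // Placeholder credentials for the content server.
                    client.Credentials = new NetworkCredential("user", "password");
                    client.DownloadFile(url, "rendition.mov");
                }
            }
        }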

    Read the article

  • Guidance in naming awkward domain-specific objects?

    - by GlenH7
    I'm modeling a chemical system, and I'm having problems with naming my objects within an enum. I'm not sure if I should use: the atomic formula, the chemical name, or an abbreviated chemical name. For example, sulfuric acid is H2SO4 and hydrochloric acid is HCl. With those two, I would probably just use the atomic formula, as they are reasonably common. However, I have others like sodium hexafluorosilicate, which is Na2SiF6. In that example, the atomic formula isn't as obvious (to me), but the chemical name is hideously long: myEnum.SodiumHexaFluoroSilicate. I'm not sure how I would be able to safely come up with an abbreviated chemical name that would have a consistent naming pattern. From a maintenance point of view, which of the options would you prefer to see and why? Some details from comments on this question: The audience for the code will be just programmers, not chemists. I'm using C#, but I think this question is more interesting when ignoring the implementation language. I'm starting with 10-20 compounds and would have at most 100 compounds. The enum is to facilitate common calculations - the equation is the same for all compounds, but you insert a property of the compound to complete the equation. For example, molar mass (in g/mol) is used when calculating the number of moles from a mass (in grams) of the compound. Another example of a common calculation is the Ideal Gas Law and its use of the specific gas constant.
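
    One hedged way to sidestep the trade-off (a sketch, not from the original question; the members, formulas, and molar-mass values are illustrative): keep readable chemical names as the enum identifiers and attach the formula plus whatever constants the calculations need as data, so nothing else depends on the spelling of the identifier.

        using System.Collections.Generic;

        public enum Compound
        {
            SulfuricAcid,            // H2SO4
            HydrochloricAcid,        // HCl
            SodiumHexafluorosilicate // Na2SiF6
        }

        public sealed class CompoundInfo
        {
            public CompoundInfo(string formula, double molarMass)
            {
                Formula = formula;
                MolarMass = molarMass;
            }
            public string Formula { get; private set; }
            public double MolarMass { get; private set; } // g/mol
        }

        public static class Compounds
        {
            // Formula and molar mass live as data; the enum member only has to read well.
            private static readonly Dictionary<Compound, CompoundInfo> Info =
                new Dictionary<Compound, CompoundInfo>
                {
                    { Compound.SulfuricAcid,             new CompoundInfo("H2SO4",   98.08) },
                    { Compound.HydrochloricAcid,         new CompoundInfo("HCl",     36.46) },
                    { Compound.SodiumHexafluorosilicate, new CompoundInfo("Na2SiF6", 188.06) }
                };

            // The common calculation from the question: moles from a mass in grams.
            public static double Moles(Compound c, double grams)
            {
                return grams / Info[c].MolarMass;
            }
        }

    Callers then write Compounds.Moles(Compound.SodiumHexafluorosilicate, 50.0) and never touch the formula string, so renaming an enum member stays a pure refactoring.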

    Read the article

  • XNA 4: GetData from Texture2D and Set it into Texture3D with specific order

    - by cubrman
    I am trying to convert my color grading 2D lookup texture into a 3D LUT. When I simply use: ColorAtlas.GetData(data); ColorAtlas3D.SetData(data); I get this: I tried building my 2D atlas horizontally but it did not help - the data was messed up in a different way. So my question is: how can I influence the order of the data I get from the 2D atlas, and how can I properly pass it into my 3D atlas? Update: I know that I can GetData from a specific rectangular area and put it into several arrays, but the result is still the same. This is what I tried: Color[] data2D = new Color[0]; for (int i = 0; i < 32; i++) { Color[] data = new Color[32 * 32]; GraphicsDevice.SetRenderTarget(null); ColorAtlas.GetData(0, new Rectangle(0, i*32, 32, 32), data, 0, data.Length); int oldLength = data2D.Length; Array.Resize<Color>(ref data2D, oldLength + data.Length); Array.Copy(data, 0, data2D, oldLength, data.Length); } ColorAtlas3D.SetData(data2D);
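
    In case it helps, here is a hedged sketch of copying the atlas into the 3D texture with an explicit index remap, reusing the ColorAtlas, ColorAtlas3D, and GraphicsDevice objects from the question (the tile-layout constant is an assumption; adjust tilesPerRow to match how your atlas is actually laid out). For each slice z it finds that slice's tile in the atlas and writes its pixels in the x, then y, then z order that Texture3D.SetData expects.

        int size = 32;        // LUT dimension (a 32x32x32 volume)
        int tilesPerRow = 8;  // assumption: how many 32x32 tiles sit side by side in the atlas

        GraphicsDevice.SetRenderTarget(null); // unbind the atlas if it is a render target

        Color[] atlas = new Color[ColorAtlas.Width * ColorAtlas.Height];
        ColorAtlas.GetData(atlas);

        Color[] volume = new Color[size * size * size];
        for (int z = 0; z < size; z++)
        {
            int tileX = (z % tilesPerRow) * size; // top-left corner of tile z in the atlas
            int tileY = (z / tilesPerRow) * size;
            for (int y = 0; y < size; y++)
            {
                for (int x = 0; x < size; x++)
                {
                    // 3D texel (x, y, z) lives at index x + y*size + z*size*size.
                    volume[x + y * size + z * size * size] =
                        atlas[(tileX + x) + (tileY + y) * ColorAtlas.Width];
                }
            }
        }
        ColorAtlas3D.SetData(volume);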

    Read the article

  • How do I handle specific tile/object collisions?

    - by Thomas William Cannady
    What do I do after the bounding box test against a tile to determine whether there is a real collision against the contents of that tile? And if there is, how should I move the object in response to that collision? I have a small object, and test for collisions against the tiles that each corner of it is on. Here's my current code, which I run for each of those (up to) four tiles:

        // get the bounding box of the object, in world space
        objectBounds = object->bounds + object->position;
        if ( (objectBounds.right >= tileBounds.left) &&
             (objectBounds.left <= tileBounds.right) &&
             (objectBounds.top >= tileBounds.bottom) &&
             (objectBounds.bottom <= tileBounds.top))
        {
            // perform specific test to see if it's a left, top, bottom
            // or right collision. If so, I check to see the nature of it
            // and where I need to place the object to respond to that collision...
            // [THIS IS THE PART THAT NEEDS WORK]
            //
            if( lastkey==keydown[right] &&
                ((objectBounds.right >= tileBounds.left) &&
                 (objectBounds.right <= tileBounds.right) &&
                 (objectBounds.bottom >= tileBounds.bottom) &&
                 (objectBounds.bottom <= tileBounds.top)) )
            {
                object->position.x = tileBounds.left - objectBounds.width;
            }
            // etc.
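
    One common answer to the "how should I move the object" part is to resolve along the axis of least penetration. Below is a general sketch (written in C#, with an assumed Box struct and the same y-up convention as the bounds test above); it is not the poster's code, just an illustration of the idea: measure the overlap on each axis and push the object out along whichever overlap is smaller.

        using System;

        public struct Box { public float Left, Right, Bottom, Top; }

        public static class TileCollision
        {
            // Pushes the object out of the tile along the axis of least penetration.
            // Coordinates are y-up (Top > Bottom), matching the bounds test in the question.
            public static void Resolve(Box obj, Box tile, ref float posX, ref float posY)
            {
                float overlapX = Math.Min(obj.Right, tile.Right) - Math.Max(obj.Left, tile.Left);
                float overlapY = Math.Min(obj.Top, tile.Top) - Math.Max(obj.Bottom, tile.Bottom);
                if (overlapX <= 0 || overlapY <= 0)
                    return; // merely touching or separated: nothing to resolve

                if (overlapX < overlapY)
                {
                    // object centre left of tile centre -> push left, otherwise push right
                    bool pushLeft = (obj.Left + obj.Right) < (tile.Left + tile.Right);
                    posX += pushLeft ? -overlapX : overlapX;
                }
                else
                {
                    bool pushDown = (obj.Bottom + obj.Top) < (tile.Bottom + tile.Top);
                    posY += pushDown ? -overlapY : overlapY;
                }
            }
        }

    Running this once per overlapping tile replaces the per-direction special cases in the commented-out section, since it naturally handles pushes up, down, left and right.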

    Read the article

  • Get a culture specific list of month names

    - by erwin21
    A while ago I found a clever way to retrieve a dynamic, culture-specific list of month names in C# with LINQ: var months = Enumerable.Range(1, 12).Select(i => new { Month = i.ToString(), MonthName = new DateTime(1, i, 1).ToString("MMMM") }).ToList(); It's fairly simple: for a range of numbers from 1 to 12 a DateTime object is created (the year and day don't matter in this case), and then the DateTime is formatted to a full month name with ToString("MMMM"). In this example an anonymous object is created with a Month and a MonthName property. You can use this solution to populate a dropdown list with months or to display a user-friendly month name.
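
    A quick usage sketch (illustrative, not from the original post): the month names come from the thread's current culture, so changing the culture before running the query changes the output.

        using System;
        using System.Globalization;
        using System.Linq;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                // ToString("MMMM") uses the thread's current culture.
                Thread.CurrentThread.CurrentCulture = new CultureInfo("fr-FR");

                var months = Enumerable.Range(1, 12)
                    .Select(i => new { Month = i.ToString(), MonthName = new DateTime(1, i, 1).ToString("MMMM") })
                    .ToList();

                foreach (var m in months)
                    Console.WriteLine("{0}: {1}", m.Month, m.MonthName); // 1: janvier, 2: février, ...
            }
        }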

    Read the article

  • Creating country specific twitter/facebook accounts

    - by user359650
    I see many companies that have an international presence trying to localize their social media presence by creating country- or language-specific accounts. However, some seem to have done so without following a consistent pattern, one example being the World Wildlife Fund when you look at their Twitter accounts: World_Wildlife, a verified account with 200K followers; WWF, the main account with 800K followers; www_uk, lower case with an underscore between WWF and the country indicator; WWFCanada, upper case with the country indicator attached to WWF; ... I am planning to build a website which hopefully will grow global, and I would like to avoid this sort of inconsistency. Also, I was comparing what Twitter and Facebook allow in their usernames and found out that they don't allow the same characters to be used (for instance, the former doesn't allow '.' whereas the latter does), making it difficult to ensure consistency across social networks. Hence my questions: Are there known naming schemes for creating localized Twitter and Facebook accounts while maintaining a certain consistency between them (best effort)? Is there any research out there showing whether some schemes are better than others in terms of readability and/or SEO?

    Read the article

  • How to Enable User-Specific Wireless Networks in Windows 7

    - by The Geek
    Wireless network settings in Windows 7 are global across all users, but there's a little-known option that lets you switch them to per-user, so each user has access to only the networks they are allowed to connect to. Here's how it all works. How is this useful? Maybe you want to prevent a particular user from accessing the internet: if you don't give them the wireless password, they won't be able to get online. This could be very useful if you've got mini-people playing games on the family PC, but you don't want them getting online.

    Read the article

  • Methodology To Determine Cause Of User Specific Error

    - by user3163629
    We have software that for certain clients fails to download a file. The software is developed in Python and compiled into a Windows executable. The cause of the error is still unknown, but we have established that the client has an active internet connection. We suspect that the cause lies in the client's network setup. This error cannot be replicated in-house. What technique or methodology should be applied to this kind of client-specific error that cannot be replicated in-house? The end goal is to determine the cause of this error so we can move on to the solution. For example: Remote debugging: produce a debug version of the software and ask the client to send back a debug output file. This involves a lot of time (back-and-forth communication) and requires the client to work and act in a timely manner to be successful. On-site debugging: visit the client and determine their network setup, etc. Possibly develop a series of script tests beforehand to run on the client's computer under the same network. Are there other methodologies and techniques I am not aware of?

    Read the article

  • Music player with a few specific requirements

    - by Jordan Uggla
    I am looking for a music player with a few specific requirements: It must have a search function that whittles down results as you type, searching the entire library. It must start playing a song when double-clicked, and not continue to another song when that song finishes. It must be approachable and immediately usable by people completely unfamiliar with the program (I think this is mostly covered by the first two requirements being met). I've tried many players, but unfortunately every one has failed to meet at least one of the requirements. Rhythmbox meets 1 and 3, but continues to the next search result after the double-clicked song ends. Banshee is basically the same as Rhythmbox; while it has a "Stop when finished" option, this cannot (as far as I can tell) be made the default when double-clicking a song. Audacious (as far as I can tell) fails at 1. Muine meets requirements 1 and 2, but unfortunately I couldn't make the search dialog stay visible like it is in Rhythmbox/Banshee, which, despite Muine's very simple interface, made it incomprehensible to people trying to use it for the first time. Amarok I could not configure to meet requirement 1, but I think it's likely I was just missing something, and with its configurability I'm confident that I can set it up to meet requirements 2 and 3.

    Read the article

  • Entity framework separating entities for product and customer specific implementation

    - by Codecat
    I am designing an application with the intention of making it a product line. I would like to extend the functionality across all layers, and my first struggle is with the domain models. For example, the core functionality would have an entity named Invoice with a few standard fields, and then customer requirements will add some new fields to it, but I don't want to add them to the core Invoice class. For every customer I could use a customer-specific DbContext and inject the correct context with dependency injection. Also, every customer will get their own deployment. public class Product.Domain.Invoice { public int InvoiceId { get; set; } // Other fields } How should I approach this problem? Solution 1 does not work, since Entity Framework does not allow two classes with the same simple name. public class CustomerA.Domain.Invoice : Product.Domain.Invoice { public User ReviewedBy { get; set; } public DateTime? ReviewedOn { get; set; } } Solution 2: Create a separate table and link it to the core domain table. Reusing services and controllers could be harder. public class CustomerA.Domain.CustomerAInvoice { public Product.Domain.Invoice Invoice { get; set; } public User ReviewedBy { get; set; } public DateTime? ReviewedOn { get; set; } }
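
    A hedged sketch of how the customer-specific DbContext idea could be combined with Solution 2, reusing the entity classes above (the context names and DbSet properties are illustrative, not from the original post): the core context only knows the shared entities, and each customer deployment derives its own context that adds the extension entity and registers it with the DI container in place of the core one.

        using System.Data.Entity;

        // Core product context: only the shared entities.
        public class ProductContext : DbContext
        {
            public DbSet<Product.Domain.Invoice> Invoices { get; set; }
        }

        // Customer A's deployment derives its own context and adds the extension
        // entity from Solution 2 (a separate table linked to the core Invoice table).
        public class CustomerAContext : ProductContext
        {
            public DbSet<CustomerA.Domain.CustomerAInvoice> CustomerAInvoices { get; set; }
        }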

    Read the article

  • Trying to wrap my head around class structure for domain-specific language

    - by svaha
    My work is mostly in embedded systems programming in C, and the proper class structure to pull this off eludes me. Currently we communicate with a large collection of servos, pumps, and sensors from C# and Visual Basic via a USB-to-CAN HID device. Right now, it is quite cumbersome to communicate with the devices. To read the firmware version of controller number 1 you would use: SendCan(Controller,1,ReadFirmwareVersion) or SendCan(8,1,71) This sends three bytes on the CAN bus: (8,1,71) Connected to controllers are various sensors. SendCan(Controller,1,PassThroughCommand,O2Sensor,2,ReadO2) would tell controller number 1 to pass a command to O2 sensor number 2 to read O2, by sending the bytes 8,1,200,16,2,0 I would like to develop a domain-specific language for this setup. Instead of commands issued like they are currently, commands would be written like this: Controller1.SendCommand.O2Sensor2.ReadO2 to send the bytes 8,1,200,16,0 What's the best way to do this? Some machines have 20 O2 sensors, others have 5 controllers, so the numbers and types of controllers, sensors, pumps, etc. aren't static.
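
    A minimal sketch of one possible shape for such a fluent API in C# (the command byte values are taken from the byte sequences in the question; the CanMessage class and the SendCanBytes delegate it wraps are illustrative): each step appends bytes, and Send() hands the finished frame to whatever already talks to the USB-to-CAN device.

        using System;
        using System.Collections.Generic;

        public class CanMessage
        {
            private readonly List<byte> _bytes = new List<byte>();
            private readonly Action<byte[]> _send;

            public CanMessage(Action<byte[]> send) { _send = send; }

            public CanMessage Controller(byte number) { _bytes.Add(8);   _bytes.Add(number); return this; }
            public CanMessage PassThrough()           { _bytes.Add(200); return this; }
            public CanMessage O2Sensor(byte number)   { _bytes.Add(16);  _bytes.Add(number); return this; }
            public CanMessage ReadFirmwareVersion()   { _bytes.Add(71);  return this; }
            public CanMessage ReadO2()                { _bytes.Add(0);   return this; }

            public void Send() { _send(_bytes.ToArray()); }
        }

        // Usage, mirroring the byte sequences in the question:
        //   new CanMessage(SendCanBytes).Controller(1).ReadFirmwareVersion().Send();              // 8,1,71
        //   new CanMessage(SendCanBytes).Controller(1).PassThrough().O2Sensor(2).ReadO2().Send(); // 8,1,200,16,2,0

    Because controllers and sensors are just numbered parameters rather than hard-coded members, the same builder works whether a machine has 5 controllers or 20 O2 sensors.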

    Read the article

  • Prevent Nautilus from displaying thumbnails on a specific mount

    - by Zakhar
    I have written a filesystem over FUSE to access a remote pseudo-NAS (the French "Freebox V6"; I'll publish it as GPL3 soon... when it's a little bit more polished!). The NAS is connected to a home ADSL line, so data comes down at the upload speed of ADSL, which is at best 1 Mbps. My mount works fine (read-only at the moment), but Nautilus sees the mountpoint (and all sub-directories) as a "local" filesystem and tries to make thumbnails. As I have a directory full of images, this is quite horrible, because Nautilus then opens ALL the images to try to display thumbnails. I could switch the Nautilus preference to "Never" for thumbnails, but then I'd lose thumbnails on my "real" local filesystem. So the question is: with the preference "Only for local filesystem", how can I instruct Nautilus that my mountpoint is in fact NOT a local mount, so that it will stop trying to draw thumbnails on that specific mount but continue "thumbnailing" on mounts that are really local? Edit note: the same thing happens if you use "standard worldwide" mounts such as sshfs or davfs, as long as you mount over a relatively slow network (ADSL) and have images/movies on your mounted tree.

    Read the article

  • Requiring a specific order of compilation

    - by Aber Kled
    When designing a compiled programming language, is it a bad idea to require a specific order of compilation of separate units, according to their dependencies? To illustrate what I mean, consider C. C is the opposite of what I'm suggesting. There are multiple .c files that can all depend on each other, but all of these separate units can be compiled on their own, in no particular order - only to be linked together into a final executable later. This is mostly due to header files. They enable separate units to share information with each other, and thus the units are able to be compiled independently. If a language were to dispose of header files and keep only source and object files, then the only option would be to actually include the unit's meta-information in the unit's object file. However, this would mean that if unit A depends on unit B, then unit B would need to be compiled before unit A, so that unit A could "import" unit B's object file, thus obtaining the information required for its compilation. Am I missing something here? Is this really the only way to go about removing header files in compiled languages?

    Read the article

  • SQL SERVER – Script to Update a Specific Column in Entire Database

    - by Pinal Dave
    Last week I received a very interesting question in email, and I really liked it, as I had to play around with a SQL script for a while to come up with the answer the sender was looking for. Please read the question; I believe all of us face this kind of situation. "Pinal, In our database we have recently introduced a ModifiedDate column in all of the tables. From now on, whenever a row is updated, we set that field to the current date and time. Now here is the issue: when we added that field we did not populate it with a default value, because we were not sure when we would go live with the system, so we let it be NULL. The modification to the application went live yesterday and we are now updating this field. Here is where I need your help. We need to update all the tables in our database that have the ModifiedDate column and set it to the current datetime. As our system has been live since yesterday, there are several thousand rows which have already been updated with real-world values, so we do not want to overwrite those. Essentially, wherever in our entire database there is a ModifiedDate column and it is NULL, we want to update it with the current date and time. Do you have a script for it?" Honestly, I did not have such a script. This is a very specific requirement, but I was able to come up with two different methods to accomplish it. Method 1: Using INFORMATION_SCHEMA SELECT 'UPDATE ' + T.TABLE_SCHEMA + '.' + T.TABLE_NAME + ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;' FROM INFORMATION_SCHEMA.TABLES T INNER JOIN INFORMATION_SCHEMA.COLUMNS C ON T.TABLE_NAME = C.TABLE_NAME AND c.COLUMN_NAME ='ModifiedDate' WHERE T.TABLE_TYPE = 'BASE TABLE' ORDER BY T.TABLE_SCHEMA, T.TABLE_NAME; Method 2: Using DMV SELECT 'UPDATE ' + SCHEMA_NAME(t.schema_id) + '.' + t.name + ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;' FROM sys.tables AS t INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID WHERE c.name ='ModifiedDate' ORDER BY SCHEMA_NAME(t.schema_id), t.name; The scripts above generate an UPDATE script which does the task that was asked. We can adapt pretty much the same pattern to any other SELECT statement and retrieve any other data as well. Click to Download Scripts Reference: Pinal Dave (http://blog.sqlauthority.com)  Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
