Search Results

Search found 2229 results on 90 pages for 'conditions'.

Page 47 of 90

  • Is it usual if my employer asks me to get MCP certificates for higher salary?

    - by Vimvq1987
    I just had a salary negotiation this morning (I passed three interviews in the last three weeks), and it was like a game. I was stubborn about my expectation: that number, or I leave. OK, to be honest, it's not about the money; I'm a not-very-experienced developer and wanted to see how much the employer would pay me, and it was fun. In the end, my employer gave me this: "OK, * $, but with two conditions: first, you improve your spoken English (it's not my native language), and second, you get your MCPs before the end of the year." He asked me to get 3 MCP certificates. The company will buy any books necessary for the exams, but I must read them in my free time and take and pass the exams. If I don't get them, my employer won't kick me out, but salary discussions will be harder for me. I accepted the offer; I thought it was good enough. But I wonder, is this usual? If you're an employer, have you ever made an offer like that to a candidate? If you're an employee, have you ever received, or would you accept, an offer like that?

    Read the article

  • Facebook App EULA & Restrictions: What can't they do that my web app can?

    - by Adam Tannon
    I have written a nifty little web app (in Java/GWT/JS) and have been experimenting with the idea of making it available through Facebook as a Facebook App as well. After spending some time reading Facebook's developer docs, it seems like I can just create a Facebook App to point at any URL I want and use that as the app/canvas. It accomplishes this via iframes. So, my tentative plan is to just point it towards my (existing) web app so that I don't have to totally re-write it. But then that got me thinking: Facebook must regulate what sorts of things can be done through a Facebook App vs. what an app can't do. For instance, I can't imagine I can point a Facebook App at a URL for a web app that accepts e-commerce payments (that would bypass Facebook altogether and not allow them to take a cut from the e-commerce transaction!). Also, I can't imagine that Facebook allows developers to point their Facebook Apps to just any old URL without some sort of a scan, otherwise that would open Facebook up to the horrors of every security threat known to humanity. I know for a fact that when you write an iOS native app and put it up on the Apple App Store, Apple actually scans your source code for violations of their EULA. So my question: does Facebook do the same? If so, what are their terms & conditions for what a Facebook app can/can't do? Surprisingly, I can't find this anywhere!! Thanks in advance!

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code
    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed
    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Simpler Code
    Now that the classes representing an Azure Table entity are defined, let's review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desirable when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies
    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results With Sequential Fetch Methods
    Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers the data faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).

    With Fetch Strategies
    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods
    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)
    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specializing in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • How to Reap Anticipated ROI in Large-Scale Capital Projects

    - by Sylvie MacKenzie, PMP
    Only a small fraction of companies in asset-intensive industries reliably achieve expected ROI for major capital projects 90 percent of the time, according to a new industry study. In addition, 12 percent of companies see expected ROIs in less than half of their capital projects. The problem: no matter how sophisticated and far-reaching the planning processes are, many organizations struggle to manage risks or reap the expected value from major capital investments. The data come from a larger survey of companies in the oil and gas, mining and metals, chemicals, and utilities industries. The results appear in Prepare for the Unexpected: Investment Planning in Asset-Intensive Industries, a comprehensive new report sponsored by Oracle and developed by the Economist Intelligence Unit. Analysts say the shortcomings in large-scale, long-duration capital-investment projects often stem from immature capital-planning processes. The poor decisions that result can lead to significant financial losses and disappointing project benefits, which are particularly harmful to organizations during economic downturns. The report highlights three other important findings. Teaming the right data and people doesn't guarantee that ROI goals will be achieved. Despite involving cross-functional teams and looking at all the pertinent data, executives are still failing to identify risks and deliver bottom-line results on capital projects. Effective processes are the missing link. Project-planning processes are weakest when it comes to risk management and predicting costs and ROI. Organizations participating in the study said they fail to achieve expected ROI because they regularly experience unexpected events that derail schedules and inflate budgets. But executives believe that using more-robust risk management and project planning strategies will help avoid delays, improve ROI, and more accurately predict the long-term cost of initiatives. Planning for unexpected events is a key to success. External factors, such as changing market conditions and evolving government policies, are difficult to forecast precisely, so organizations need to build flexibility into project plans to make it easier to adapt to the changes. The report outlines a series of steps executives can take to address these shortcomings and improve their capital-planning processes. Read the full report or take the benchmarking survey and find out how your organization compares.

    Read the article

  • Why does my root filesystem keep becoming read-only?

    - by Scott Severance
    I've lately been having an issue with my root filesystem becoming read-only. It happens some amount of time after boot. I don't know exactly when it happens, as I don't usually notice it until something such as suspending the computer or printing fails. It seems to be fairly random. Since most of my system is on that partition, I can't re-mount it without rebooting. After this happens, the system runs a fsck. Sometimes it prompts to fix problems; other times it apparently finds none. To troubleshoot, I've searched through the logs but found nothing relevant. This might be due in part to not knowing when the actual errors took place. The filesystem is apparently good to begin with, as when fsck runs its fixes it doesn't report any errors. I've scanned the disk with SpinRite. A while ago, SpinRite found and recovered from some bad sectors on the hard drive. I ran a level 4 scan (a thorough scan) after this problem appeared, but SpinRite found nothing. The SMART data reports that the disk is OK, with 63 bad sectors. The number of bad sectors hasn't changed recently. I realize that the disk isn't in the best condition, and I have complete backups in case of catastrophic failure. Yet the lack of errors in the logs, combined with SpinRite's test results and the unchanged SMART data, makes me think that this problem has some cause other than disk failure. Other than disk failure, what could cause my symptoms?

    Read the article

  • More Denali Execution Plan Warning Goodies

    - by Dave Ballantyne
    In my last blog, I showed how the execution plan in Denali has been enhanced by two new warnings, conversion affecting cardinality and conversion affecting seek, which are shown when a data type conversion has happened either implicitly or explicitly. That is not all though, there is more. Also added are two warnings for when performance has been affected by memory issues. Memory spills to tempdb are a costly operation and happen when SQL Server is under memory pressure and needs to free some up. For a long time you have been able to see these as warnings in a Profiler trace as a sort or hash warning event, but now they are included right in the execution plan. Not only that, but you can also see which operator caused the spill, not just which statement. Pretty damn handy. Another cause of memory-related performance problems is memory grant waits. Here is an informative write-up on them, but simply speaking, SQL Server has to allocate a certain amount of memory for each statement. If it is unable to, you get a "memory grant wait". Once again there are other methods of analyzing these, but the plan now shows them too. (Don't worry, that's not real production code.) There is one other new warning that is of interest to me, "Unmatched Indexes". Once I find out the conditions under which that fires, I'll blog about it.

    Read the article

  • Why is rvalue write in shared memory array serialised?

    - by CJM
    I'm using CUDA 4.0 on a GPU with compute capability 2.1. One of my device functions is the following:

        __device__ void test(int n, int* itemp) // itemp is a shared memory pointer
        {
            const int tid = threadIdx.x;
            const int bdim = blockDim.x;
            int i, j, k;
            bool flag = 0;
            itemp[tid] = 0;
            for (i = tid; i < n; i += bdim)
            {
                // code that produces some values of "flag"
            }
            itemp[tid] = flag;
        }

    Each thread checks some conditions and produces a 0/1 flag. Then each thread writes flag at the tid-th location of a shared int array. The write statement "itemp[tid] = flag;" gets serialized -- though "itemp[tid] = 0;" is not. This is causing a huge performance lag which technically should not be there -- I want to avoid it. Please help.

    Read the article

  • How can I get SLI working with 295.40?

    - by Steve
    I've been doing a lot of googling these last few hours and I'm not having much luck. Perhaps I don't know exactly what I am looking for. I just recently installed Ubuntu 12.04LTS x86_64. Looks beautiful! I have two GTX470's in SLI, and I am finally migrating my desktop over given the hopeful gaming support as of late. My laptop has been enjoying multiple distros of Ubuntu for a couple years now. However, new problems come with unexplored territory, here. At first, I only had one working monitor of my two. Over on nvidia-xconfig I fixed that, but the only solution that actually worked was twinview. Just recently I read here that twinview is not compatible with SLI. Sweet. When I try to tell it, oh hey, use a separate XScreen, configure it the way I want it, click save to configuration file, enter my password, then a sudo restart lightdm, it's broken. One screen blacks or whites out (Couldn't tell you the specific conditions for each, I'm dubious at this point,) and I get this huge error dialogue box upon login. Something about incompatible resolutions if I remember right. Though I am sure I set the resolutions for each screen correctly. Anyway, when I try to enable SLI (sudo nvidia-xconfig --sli=On) despite the fact it hates twinview, unity breaks. The sidebar is there, but only one screen works, the mouse is trapped running along the left edge of it, and the background of the sidebar is a solid blue. Anyway, this ended up being entirely too verbose, I'm sorry, but could anyone part some wisdom please? It would be appreciated!

    Read the article

  • Tail-recursive implementation of take-while

    - by Giorgio
    I am trying to write a tail-recursive implementation of the function take-while in Scheme (but this exercise can be done in another language as well). My first attempt was

        (define (take-while p xs)
          (if (or (null? xs) (not (p (car xs))))
              '()
              (cons (car xs) (take-while p (cdr xs)))))

    which works correctly but is not tail-recursive. My next attempt was

        (define (take-while-tr p xs)
          (let loop ((acc '()) (ys xs))
            (if (or (null? ys) (not (p (car ys))))
                (reverse acc)
                (loop (cons (car ys) acc) (cdr ys)))))

    which is tail-recursive but needs a call to reverse as a last step in order to return the result list in the proper order. I cannot come up with a solution that
    1. is tail-recursive,
    2. does not use reverse,
    3. only uses lists as a data structure (using a functional data structure like Haskell's sequence, which allows appending elements, is not an option),
    4. has complexity linear in the size of the prefix, or at least does not have quadratic complexity (thanks to delnan for pointing this out).
    Is there an alternative solution satisfying all the properties above? My intuition tells me that it is impossible to accumulate the prefix of a list in a tail-recursive fashion while maintaining the original order of the elements (i.e. without needing reverse to adjust the result), but I am not able to prove this. Note: the solution using reverse satisfies conditions 1, 3 and 4.

    Read the article

  • Learn More About the PO Approvals Analyzer

    - by LuciaC
    You may think that the PO Approvals Analyzer for Release 12 is only for diagnosing problems when you have a single Purchase Order or Requisition stuck in process, but it also offers valuable information to keep your Procurement environment healthy. Consider this:
    - The analyzer lists all Procurement critical patches that have not been applied.
    - It reports invalid Procurement objects with error messages and provides solutions.
    - It validates setup and database conditions, for example max extents and space issues.
    Also, the analyzer can be run on all Purchasing documents starting from a date you enter. This multiple-document check provides validations on:
    - Data corruption issues.
    - Workflow errors with generic messages, i.e. document manager errors.
    - Documents with workflows in error that cannot be progressed via the application.
    And, unlike other diagnostics, the analyzer provides known solutions to the problems indicated! So access the Analyzer today and run it on your instance! Access it now via Doc ID 1525670.1.

    Read the article

  • Web hosting company basically forces me to use their domain name [closed]

    - by Jinx
    I've recently stumbled upon an unusual problem with a hosting company called giga-international.com. Anyway, I ordered a com.hr domain from a Croatian domain name registration company, and my client insisted on using this hosting provider as a couple of his friends are already hosted with them. I thought something was fishy when the first result on Google for Giga International was this little forum rant instead of their webpage. When I was checking their services they listed many features: space available, bandwidth, etc. I just wanted to check how much RAM I would get for my PHP scripts, so I emailed them, and they told me that was a company secret. Seriously? Anyway, since my client still insisted on hosting with them, I bought their Webspace package. During registration I had to choose a free domain name because I couldn't advance the registration without it. Nowhere was it said, not even in the general terms and conditions, that I wouldn't be able to change that domain name - at least not without paying double the price of a domain name per year. They said I can either move my domain name over to them (and pay them for domain registration), or pay them 1 Euro per month for managing a DNS entry. On any previous hosting solution I was able to manage my domain names just by pointing my domain to their name servers, and this is something completely new and absurd for me. They also said that the usual approach is not possible because of security and hardware limitations. I'd like to know what you guys think about this case, and whether and where I should report it. In short: they forced me to register a free domain name which doesn't suit my needs in order to register for their webspace package, and they refuse to change the domain name for my account until I either transfer my domain to them or pay them for DNS management, which costs double the price of the domain name per year.

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes, e.g.:

        Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

    MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4 - "Final Remarks"):

    Free Memory and Used Memory. Estimating resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel will therefore use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed not at the time of process exit (you might soon start another process that needs the same data), but upon demand.

    That said, focusing more specifically on the percentage question: apart from the memory the OS takes, what is the minimum free memory that must be available on every node so that they operate normally? The answer is: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.

    Read the article

  • Updated Business Activity Monitoring (BAM) Class

    - by Gary Barg
    We have just completed an extensive upgrade to the Business Activity Monitoring course, bringing it up to PS5 level and doing some major rework of content and topic flow. This should be a GREAT course for anyone needing to learn to use BAM effectively to analyze their SOA data.

    Details of the Course
    This course explains how to use Oracle BAM to monitor enterprise business activities across an enterprise in real time. You can measure your key performance indicators (KPIs), determine whether you are meeting service-level agreements (SLAs), and take corrective action in real time. Learn to:
    - Create dashboards and alerts using a business-friendly, wizard-based design environment
    - Monitor BPM and BPEL processes
    - Configure drilling, driving, and time-based filtering
    - Create alerts
    - Build applications with a dynamic user interface
    - Manage BAM users and roles
    In addition to learning Oracle BAM architecture, you learn how to perform administrative tasks related to Oracle BAM. You create and work with the different types of message sources that send data into Oracle BAM. You build interactive, real-time, actionable dashboards, and you configure alerts on abnormal conditions. You learn how to monitor both BPEL and BPM composite applications with Oracle BAM. Lastly, you create and use Oracle BAM data control to build applications with a dynamic user interface that changes based on real-time business events.

    Registration
    The Oracle University course page, with more course details and registration information, is here. The next scheduled class:
    - Date: 5-Dec-2012
    - Duration: 3 days
    - Hours: 9:00 AM – 5:00 PM CT
    - Location: Chicago, IL
    - Class ID: 3325708

    Read the article

  • XML Rules Engine and Validation Tutorial with NIEM

    - by drrwebber
    Our new XML Validation Framework tutorial video is now available. See how to easily integrate code-free adaptive XML validation services into your web services using the Java CAMV validation engine. CAMV allows you to build fault-tolerant content checking with XPath that can optionally use SQL data lookups. This can provide warnings as well as error conditions to tailor your validation layer to exactly meet your business application needs. Also covered is developing test suites using Apache Ant scripting of validations. This allows a community to share sets of conformance-checking tests and tools. On the technical XML side, the video introduces XPath validation rules and illustrates the concepts of XML content and structure validation. CAM validation templates allow contextual, parameter-driven, dynamic validation services to be implemented, compared to using a static and brittle XSD schema approach. The SQL table lookup and code list validation are discussed and examples presented. Features are highlighted along with a demonstration of interactively generating live XML data from a SQL data store and then running validation processing, complete with error and warning detection. The presentation provides a primer for developing web service XML validation and integrating it into a SOA approach, along with examples and resources. Alignment with the NIEM IEPD process for interoperable information exchanges is also discussed, along with NIEM rules services. The CAMV engine is a high-performance, scalable Java component for rapidly implementing code-free validation services and methods. CAMV is a next-generation WYSIWYG approach that builds on older Schematron-based interpretative runtime tools and provides a simpler declarative metaphor for rules definition. See: http://www.youtube.com/user/TheCAMeditor

    Read the article

  • Independent HTML5 Physics Game: Any Feedback? [closed]

    - by mndoftea
    I've been independently developing a physics-based HTML5 game. I haven't used any libraries or engines; all the code, including the physics, is my own. It is free for a while on the Chrome Web Store and I was hoping that I could get some feedback on it. You can get it for Chrome here: https://chrome.google.com/webstore/detail/dbnmkpcomailjochphnmfklofkmgenci. I know this is not a normal question, but I'm happy for answers to be abstracted/generalized for broader use. I'm asking here because I don't know anyone else personally who does this stuff. Any thoughts, comments or ideas you might have would be greatly appreciated! The physics system is written in JavaScript and works by setting up the differential equations of motion (plus a few conditions) and evaluating them numerically using the Euler method. The graphics are done through the HTML5 canvas and the music is done through the audio element. (Said music is in the public domain, by the way.) You can see the code by going to View > View Source in Chrome.
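    For readers unfamiliar with the explicit Euler method mentioned above, the sketch below shows the basic integration step (advance position by velocity, then velocity by acceleration, one small time step at a time). The game itself is JavaScript; this C# sketch is purely illustrative, and the names (Body, Gravity, the 60 fps time step) are assumptions, not taken from the game's code.

        // Minimal illustration of an explicit Euler step; not the game's actual code.
        using System;

        class Body
        {
            public double X, Y;    // position
            public double Vx, Vy;  // velocity
        }

        static class EulerDemo
        {
            const double Gravity = -9.81; // constant acceleration in y

            // One Euler step: x += v*dt, then v += a*dt.
            static void Step(Body b, double dt)
            {
                b.X += b.Vx * dt;
                b.Y += b.Vy * dt;
                b.Vy += Gravity * dt;
            }

            static void Main()
            {
                var ball = new Body { X = 0, Y = 10, Vx = 1, Vy = 0 };
                for (int i = 0; i < 5; i++)
                {
                    Step(ball, 1.0 / 60.0); // assume a 60 fps frame time
                    Console.WriteLine($"x={ball.X:F3}  y={ball.Y:F3}");
                }
            }
        }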

    Read the article

  • Simplifying data search using .NET

    - by Peter
    A tutorial on the asp.net site shows how to use LINQ to create a search feature on a music album site using MVC. The code looks like this:

        public ActionResult Index(string movieGenre, string searchString)
        {
            var GenreLst = new List<string>();
            var GenreQry = from d in db.Movies
                           orderby d.Genre
                           select d.Genre;
            GenreLst.AddRange(GenreQry.Distinct());
            ViewBag.movieGenre = new SelectList(GenreLst);

            var movies = from m in db.Movies
                         select m;
            if (!String.IsNullOrEmpty(searchString))
            {
                movies = movies.Where(s => s.Title.Contains(searchString));
            }
            if (!string.IsNullOrEmpty(movieGenre))
            {
                movies = movies.Where(x => x.Genre == movieGenre);
            }
            return View(movies);
        }

    I have seen similar examples in other tutorials and I have tried them in a real-world business app that I develop/maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant/repetitive. How can I refactor this pattern? One idea I have is to create a "searchable" column in every table, which could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like:

        if (!String.IsNullOrEmpty(searchString))
        {
            movies = movies.Where(s => s.SearchColumn.Contains(searchString));
        }

    What are the performance/design/architecture implications of doing this? I have also tried using procedures that use dynamic queries, but then I have just moved the ugliness to the database. E.g.:

        CREATE PROCEDURE [dbo].[search_music]
            @title as varchar(50),
            @genre as varchar(50)
        AS
            -- set the variables to null if they are empty
            IF @title = '' SET @title = null
            IF @genre = '' SET @genre = null
            SELECT m.*
            FROM view_Music as m
            WHERE (title = @title OR @title IS NULL)
              AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL)
            ORDER BY Id desc
            OPTION (RECOMPILE)

    Any suggestions? Tips?
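    For illustration, one common way to tame the repetition of optional filters is a small reusable extension method, so each additional criterion stays a single declarative line. This is a sketch, not the tutorial's code; the helper name WhereIf and the usage comment are assumptions.

        // Hypothetical helper: applies a predicate only when a condition holds,
        // so optional filters compose without nested if blocks.
        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class QueryableExtensions
        {
            public static IQueryable<T> WhereIf<T>(
                this IQueryable<T> source,
                bool condition,
                Expression<Func<T, bool>> predicate)
            {
                return condition ? source.Where(predicate) : source;
            }
        }

        // Possible usage inside the controller action from the post:
        // var movies = db.Movies
        //     .WhereIf(!string.IsNullOrEmpty(searchString), m => m.Title.Contains(searchString))
        //     .WhereIf(!string.IsNullOrEmpty(movieGenre), m => m.Genre == movieGenre);

    Because the predicate stays an expression tree, the filtering is still translated to SQL by LINQ to Entities rather than being applied in memory.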

    Read the article

  • Quit job for another but current employer doesn't want to lose me. Would it be a bad idea to stay?

    - by Confused
    So I've handed in my notice at my current job as I've been offered a job at another company. However, my current employer doesn't want to lose me and wants to know what it would take for me to stay. I mostly enjoy working there, so I'd be open to negotiation. The new job was an unexpected opportunity that presented itself. The things I'd be looking for are:
    - Better computers for developers
    - Opportunity to work from home occasionally
    - Improved internet access (e.g. able to download software, no keyword blocking)
    - Chance to work on technologies other than my primary one (we do have projects on other technologies)
    - Pay increase (though this isn't my primary motivation)
    I found out that some of these were already in progress when I handed in my notice :( Is it ever a good idea to remain at a company after you've resigned? What if they meet all my conditions and alter my contract accordingly? Will I burn my bridges at the new company (I've already told them I'd accept their offer)?
    Update: Thanks for the answers. Quite a mixed bag, which was interesting. Anyway, just so you know, I've chosen to stay at my current company. So far, it definitely feels like the right decision. Guess I won't know for a few months whether it was, though.

    Read the article

  • SQL 2008/2005 Hosting :: Error - “Named Pipes Provider, error: 40 – Could not open a connection to SQL Server”

    - by mbridge
    When setting up a Microsoft Windows Server 2008 system, I went through the motions of setting up IIS, MS SQL Server 2008, and Visual Studio 2010 to use as a test bed. One of the immediate benefits of setting up such a system is that most development can be done remotely: MS SQL Server Management Studio, Visual Studio's web development suite, as well as file shares, remote desktop, etc, make for a great way to develop remotely in 'pristine' conditions. But there are drawbacks, too, such as needing to deal with firewall issues, not being able to penetrate past a router, or the requirement of setting up a VPN. One of the problems I encountered when trying to remote into the MS SQL Server 2008 instance that I'd set up was the following error:

        Named Pipes Provider, error: 40 – Could not open a connection to SQL Server

    I followed the steps below, and was able to connect to the server after just a few moments of tinkering:
    1. From the server in question, surf to this Microsoft article, and download and install the firewall rules modification program. Never drop your firewall, even on a development machine, unless you have a really good reason to.
    2. Launch SQL Server Configuration Manager. Navigate to SQL Server Network Configuration, then Protocols for your server name. Enable TCP/IP and Named Pipes by right-clicking and choosing Enable for each given Protocol Name.
    3. Restart the SQL Server service from Services (or from the command line, run "net stop mssqlserver" followed by "net start mssqlserver").
    4. Try your remote connection once more, and you should be able to connect.
    It's not a terribly difficult concept, but one of the more challenging tasks developers face is dealing with environment setup. And while there is a certain blurred-line overlap between software development and server administration, sometimes the latter is daunting, especially given that you might set up only a handful of servers during your career.
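    If you want to verify step 4 from the remote development machine with code rather than Management Studio, a quick connectivity check works well. This is a sketch under assumptions: the server name, database and credentials below are placeholders, not values from the article.

        // Hypothetical connectivity test -- replace server, database and credentials
        // with your own. A successful Open() confirms the firewall rules and the
        // TCP/IP / Named Pipes protocols described above allow remote connections.
        using System;
        using System.Data.SqlClient;

        class ConnectivityCheck
        {
            static void Main()
            {
                var connectionString =
                    "Data Source=myserver,1433;Initial Catalog=master;" +
                    "User ID=myuser;Password=mypassword;Connect Timeout=5";

                try
                {
                    using (var conn = new SqlConnection(connectionString))
                    {
                        conn.Open();
                        Console.WriteLine("Connected to: " + conn.DataSource);
                    }
                }
                catch (SqlException ex)
                {
                    // "error: 40"-style failures surface here if the server is unreachable.
                    Console.WriteLine("Connection failed: " + ex.Message);
                }
            }
        }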

    Read the article

  • mod_rewrite not working after upgrade to 12.10

    - by CrowderSoup
    I'm hoping this is a quick and simple fix and that I just need a fresh set of eyes. However, I'm fearful that it might actually be an error in the latest build of the rewrite module. I have a .htaccess file that turns on the rewrite engine (I've made sure the module is enabled), creates some rewrite conditions, and finally a rewrite rule. Here's my .htaccess file for reference:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?request=$1 [L,QSA,NC]
        </IfModule>

    Now for the problem: if I go to hostname.com it works fine. If I go to hostname.com/Index it works fine. However, if I go to hostname.com/index it doesn't rewrite the request and I get a 404. I'm not sure what's going on here. I've used a rewrite rule tester and there doesn't appear to be any issues with my rewrite rule itself. Again, this issue didn't manifest until after I upgraded to 12.10, at which point I know that Apache was updated. Any thoughts? Has anyone else here experienced this? I know that two other people besides myself have experienced this here. Thanks in advance for any help you can provide!

    Read the article

  • How to avoid tons of `instanceof` in collision detection?

    - by Prog
    Consider a simple game with 4 kinds of entities: Robots, Dogs, Missiles, Walls. Here's a simple collision-detection mechanism in pseudocode (I know, O(n^2). Irrelevant for this question):

        for (Entity entityA in entities) {
            for (Entity entityB in entities) {
                if (collision(entityA, entityB)) {
                    if (entityA instanceof Robot && entityB instanceof Dog)
                        entityB.die();
                    if (entityA instanceof Robot && entityB instanceof Missile) {
                        entityA.die();
                        entityB.die();
                    }
                    if (entityA instanceof Missile && entityB instanceof Wall)
                        entityB.die();
                    // .. and so on
                }
            }
        }

    Obviously this is very ugly, and will get bigger and harder to maintain the more entities there are, and the more conditions there are. One option to make this better is to have separate lists for each kind of entity, for example a Robots list, a Dogs list, etc., and then check for collisions of all Robots with Dogs, and all Dogs with Walls, etc. This is better, but I still don't think it's good. So my question is: the collision detection system spotted a collision. Now what? What is the common way to react to the collision? Should the system notify the entity itself that it collided with something, and have it decide for itself how to react? E.g. entityA.reactToCollision(entityB). Or is there some other solution?
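    The question is language-agnostic (the pseudocode above is Java-flavored). One common alternative to the nested instanceof checks, sketched here in C# purely for illustration, is to register one collision handler per pair of entity types so the reaction becomes a single lookup. The entity class names mirror the question; everything else (CollisionDispatcher, Register, Dispatch) is an assumed, hypothetical design, not a known library API.

        // Sketch of a type-pair handler registry; illustrative only.
        using System;
        using System.Collections.Generic;

        abstract class Entity { public virtual void Die() { /* remove from the world */ } }
        class Robot : Entity { }
        class Dog : Entity { }
        class Missile : Entity { }
        class Wall : Entity { }

        class CollisionDispatcher
        {
            // One handler per ordered pair of concrete entity types.
            private readonly Dictionary<(Type, Type), Action<Entity, Entity>> handlers = new();

            public void Register<TA, TB>(Action<TA, TB> handler)
                where TA : Entity where TB : Entity
            {
                handlers[(typeof(TA), typeof(TB))] = (a, b) => handler((TA)a, (TB)b);
            }

            public void Dispatch(Entity a, Entity b)
            {
                if (handlers.TryGetValue((a.GetType(), b.GetType()), out var handler))
                    handler(a, b);
            }
        }

        // Setup mirroring the rules in the pseudocode:
        // var dispatcher = new CollisionDispatcher();
        // dispatcher.Register<Robot, Dog>((robot, dog) => dog.Die());
        // dispatcher.Register<Robot, Missile>((robot, missile) => { robot.Die(); missile.Die(); });
        // dispatcher.Register<Missile, Wall>((missile, wall) => wall.Die());
        // ...and inside the collision loop: dispatcher.Dispatch(entityA, entityB);

    Adding a new entity type then means registering new handlers rather than editing a growing chain of type checks.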

    Read the article

  • Include weather information in ASP.Net site from weather.com services

    - by sreejukg
    In this article, I am going to demonstrate how you can use the XML data feed services (referred to as XOAP from here onwards) provided by weather.com to display weather information on your website. The XOAP services are available free of charge, provided you comply with the requirements from weather.com. I am writing this article from a technical point of view; if you are planning to use the weather.com XOAP services in your application, please refer to the terms and conditions on the weather.com website. In order to start using the XOAP services, you need to sign up for the XOAP data feed. The sign-up process is simple: browse to the URL http://www.weather.com/services/xmloap.html. The URL looks similar to the following. Click on the sign up button and you will reach the registration page. Here you need to specify the site name you want to use this feed for. The form looks similar to the following. Once you fill in all the mandatory information, click on the save and continue button. That's it, the registration is over. You will receive an email that contains your partner id, license key and SDK. The SDK, available in zipped format, contains the terms of use and documentation about the services available. In addition, the SDK includes the logos and icons required to display the weather information. As per the SDK, there are currently 2 types of information available through XOAP:
    - Current Conditions for over 30,000 U.S. and over 7,900 international Location IDs, updated at least hourly
    - Five-Day Forecast (today + 4 additional forecast days in consecutive order beginning with tomorrow) for over 30,000 U.S. and over 7,900 international Location IDs, updated at least three times daily
    The SDK provides detailed information about the fields included in the response of each service. Additionally, there is a refresh rate that you need to comply with. As per the SDK, the refresh rate means the following: "Refresh Rate" shall mean the maximum frequency with which you may call the XML Feed for a given LocID requesting a data set for that LocID. During the time period in between refresh periods the data must be cached by you either in the memory on your servers or in Your Desktop Application.

    About the Services
    Weather.com will provide you with access to the XML Feed over the Internet through the hostname xoap.weather.com. The weather data from the XML feed must be requested for a specific location, so you need a location ID (LOC ID). The XML feed works with 2 types of location IDs: the first uses City Identifiers and the second uses 5-digit US postal codes. If you do not know your location ID, don't worry; there is a location ID search service available for you to retrieve the location ID from a city name. Since I am a resident of the Kingdom of Bahrain, I am going to retrieve the weather information for Manama (the capital of Bahrain). In order to get the location ID for Manama, type the following URL in your address bar:

        http://xoap.weather.com/search/search?where=manama

    I got the following XML output:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- This document is intended only for use by authorized licensees of The -->
        <!-- Weather Channel. Unauthorized use is prohibited. Copyright 1995-2011, -->
        <!-- The Weather Channel Interactive, Inc. All Rights Reserved. -->
        <search ver="3.0">
            <loc id="BAXX0001" type="1">Al Manama, Bahrain</loc>
        </search>

    You can try this with any city name; if the city is available it will return the location ID, otherwise it will return nothing.
    In order to get the weather information from XOAP, you need to pass certain parameters to the XOAP service. A brief description of the parameters follows; please refer to the SDK for more details.
    - cc: Optional; if you include this, the current conditions will be returned. The value can be anything, as it will be ignored, e.g. cc=*
    - dayf: If you want the forecast for 5 days, specify dayf=5. This is optional.
    - link: Value should be xoap
    - par: Your partner id. You can find this in your registration email from weather.com
    - prod: Value should be xoap
    - key: The license key assigned to you. This will be available in the registration email
    - unit: s or m (standard or metric, i.e. Fahrenheit or Celsius). This is an optional field; if not specified, the unit will be standard (s)
    The URL host for the XOAP service is http://xoap.weather.com. So for my purpose, I need to make the following request to access the XOAP services (the ***** are to be replaced with the corresponding values):

        http://xoap.weather.com/weather/local/BAXX0001?cc=*&link=xoap&prod=xoap&par=*********&key=**************

    The response XML has a root element "weather". Under the root element, it has the following sections:
    - <head> - the metadata about the weather results returned.
    - <loc> - the location data block that provides information about the location for which the weather data is retrieved.
    - <lnks> - the 4 promotional links you need to place along with the weather display. In addition to these 4 links, there should be another link with the Weather Channel logo pointing to the weather.com home page.
    - <cc> - the current conditions data. This element is present only if you specify the cc parameter in the request.
    - <dayf> - the forecast data as you specified. This element is present only if you specify dayf in the request.
    In this walkthrough, I am going to capture the weather information for Manama (Location ID: BAXX0001). You need 2 applications to display weather information on your website:
    - A console application that retrieves data from XOAP and stores it in a SQL Server database (or any data store you prefer). This application will be scheduled to execute every 25 minutes using Windows Task Scheduler, so that we comply with the refresh rate.
    - A web application that displays the data from the SQL Server database.

    Retrieve the Weather from XOAP
    I have created a console application named Weather Service. I created a SQL Server database table with the following columns (I named the table tblweather; you are free to choose any name):
    - lastUpdated: Datetime; the last time the weather data was updated, i.e. the time the service ran
    - TemparatureDateTime: The date and time returned by the XML feed
    - Temparature: The temperature returned by the XML feed
    - TemparatureUnit: The unit of the temperature returned by the XML feed
    - iconId: The id of the icon to be used. Currently 48 icons, from 0 to 47, are available
    - WeatherDescription: The weather description phrase returned by the feed
    - Link1url / Link1Text: The URL and text for the first promo link
    - Link2url / Link2Text: The URL and text for the second promo link
    - Link3url / Link3Text: The URL and text for the third promo link
    - Link4url / Link4Text: The URL and text for the fourth promo link
    Every time the service runs, the application will update the database columns from the XOAP data feed.
    When the application starts, it is going to get the data as XML from the URL. This demonstration uses LINQ to extract the necessary data from the fetched XML. The following is the code segment for extracting data from the weather XML using LINQ:

        // first, create an instance of the XDocument class with the XOAP URL. Replace **** with the corresponding values.
        XDocument weather = XDocument.Load("http://xoap.weather.com/weather/local/BAXX0001?cc=*&link=xoap&prod=xoap&par=***********&key=c*********");

        // construct a query using LINQ
        var feedResult = from item in weather.Descendants()
                         select new
                         {
                             unit = item.Element("head").Element("ut").Value,
                             temp = item.Element("cc").Element("tmp").Value,
                             tempDate = item.Element("cc").Element("lsup").Value,
                             iconId = item.Element("cc").Element("icon").Value,
                             description = item.Element("cc").Element("t").Value,
                             links = from link in item.Elements("lnks").Elements("link")
                                     select new
                                     {
                                         url = link.Element("l").Value,
                                         text = link.Element("t").Value
                                     }
                         };

        // Load the root node to a variable; you may use a foreach construct instead.
        var item1 = feedResult.First();

    If you want to learn more about LINQ and XML, read this nice blog from Scott Gu: http://weblogs.asp.net/scottgu/archive/2007/08/07/using-linq-to-xml-and-how-to-build-a-custom-rss-feed-reader-with-it.aspx

    Now you have all the required values in item1. For example, if you want to get the temperature, use item1.temp. Now I just need to execute an SQL query against the database. See the connection part:

        using (SqlConnection conn = new SqlConnection(@"Data Source=sreeju\sqlexpress;Initial Catalog=Sample;Integrated Security=True"))
        {
            string strSql = @"update tblweather set lastupdated=getdate(),
                temparatureDateTime = @temparatureDateTime, temparature=@temparature,
                temparatureUnit=@temparatureUnit, iconId = @iconId, description=@description,
                link1url=@link1url, link1text=@link1text, link2url=@link2url, link2text=@link2text,
                link3url=@link3url, link3text=@link3text, link4url=@link4url, link4text=@link4text";
            SqlCommand comm = new SqlCommand(strSql, conn);
            comm.Parameters.AddWithValue("temparatureDateTime", item1.tempDate);
            comm.Parameters.AddWithValue("temparature", item1.temp);
            comm.Parameters.AddWithValue("temparatureUnit", item1.unit);
            comm.Parameters.AddWithValue("description", item1.description);
            comm.Parameters.AddWithValue("iconId", item1.iconId);
            var lstLinks = item1.links;
            comm.Parameters.AddWithValue("link1url", lstLinks.ElementAt(0).url);
            comm.Parameters.AddWithValue("link1text", lstLinks.ElementAt(0).text);
            comm.Parameters.AddWithValue("link2url", lstLinks.ElementAt(1).url);
            comm.Parameters.AddWithValue("link2text", lstLinks.ElementAt(1).text);
            comm.Parameters.AddWithValue("link3url", lstLinks.ElementAt(2).url);
            comm.Parameters.AddWithValue("link3text", lstLinks.ElementAt(2).text);
            comm.Parameters.AddWithValue("link4url", lstLinks.ElementAt(3).url);
            comm.Parameters.AddWithValue("link4text", lstLinks.ElementAt(3).text);
            conn.Open();
            comm.ExecuteNonQuery();
            conn.Close();
            Console.WriteLine("database updated");
        }

    Now press Ctrl + F5 to run the service. I got the following output. Check your database and make sure the data is updated with the latest information from the service. (Make sure you insert one row in the database by entering some values before executing the service; otherwise you need to modify your application code to count the rows and conditionally perform an insert/update query.)

    Display the Weather information in an ASP.Net page
    Now you have all the data in the database.
    You just need to create a web application and display the data from the database. I created a new ASP.Net web application with a default.aspx page. In order to comply with the terms of weather.com, you need to use the weather.com logo along with the weather display. You can find the necessary logos under the folder "logos" in the SDK. Additionally, copy any of the icon sets from the folder "icons" to your web application; I used the 93x93 icon set, but you are free to use any of the other sizes available. The design view of the page in VS2010 looks similar to the following. The page contains a heading, an image control (for displaying the weather icon), 2 label controls (for displaying the temperature and the weather description), 4 hyperlinks (for displaying the 4 promo links returned by the XOAP service) and the weather.com logo with a hyperlink to the weather.com home page. I am going to write code that updates the values of these controls from the values stored in the database by the service application, as mentioned in the previous step. Go to the code-behind file for the webpage and enter the following code under the Page_Load event handler:

        using (SqlConnection conn = new SqlConnection(@"Data Source=sreeju\sqlexpress;Initial Catalog=Sample;Integrated Security=True"))
        {
            SqlCommand comm = new SqlCommand("select top 1 * from tblweather", conn);
            conn.Open();
            SqlDataReader reader = comm.ExecuteReader();
            if (reader.Read())
            {
                lblTemparature.Text = reader["temparature"].ToString() + "&deg;" + reader["temparatureUnit"].ToString();
                lblWeatherDescription.Text = reader["description"].ToString();
                imgWeather.ImageUrl = "icons/" + reader["iconId"].ToString() + ".png";
                lnk1.Text = reader["link1text"].ToString();
                lnk1.NavigateUrl = reader["link1url"].ToString();
                lnk2.Text = reader["link2text"].ToString();
                lnk2.NavigateUrl = reader["link2url"].ToString();
                lnk3.Text = reader["link3text"].ToString();
                lnk3.NavigateUrl = reader["link3url"].ToString();
                lnk4.Text = reader["link4text"].ToString();
                lnk4.NavigateUrl = reader["link4url"].ToString();
            }
            conn.Close();
        }

    Press Ctrl + F5 to run the page. You will see the following output. That's it. You need to configure the console application to run every 25 minutes so that the database is updated. You can also fetch the forecast information, store it in the database, and retrieve it later in your web page. Since the data resides in your database, you have full control over your display. You need to make sure your website complies with the weather.com license requirements. If you want the source code of this walkthrough, post your email address below. Hope you enjoy the reading.
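    As a side note on the refresh rate: since the scheduler and the feed are decoupled, a defensive check before calling xoap.weather.com guarantees you never exceed the allowed call frequency even if the task is accidentally scheduled too often. This is a minimal sketch reusing the tblweather table and connection-string style from the walkthrough; the IsRefreshDue helper and the 25-minute threshold are assumptions for illustration.

        // Hypothetical guard for the console service: skip the XOAP call if the
        // last update is more recent than the refresh interval.
        using System;
        using System.Data.SqlClient;

        class RefreshGuard
        {
            static bool IsRefreshDue(string connectionString, TimeSpan interval)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    var comm = new SqlCommand("select top 1 lastUpdated from tblweather", conn);
                    conn.Open();
                    object value = comm.ExecuteScalar();
                    if (value == null || value == DBNull.Value)
                        return true; // no row yet, fetch immediately

                    var lastUpdated = (DateTime)value;
                    return DateTime.Now - lastUpdated >= interval;
                }
            }

            static void Main()
            {
                string cs = @"Data Source=.\sqlexpress;Initial Catalog=Sample;Integrated Security=True";
                if (IsRefreshDue(cs, TimeSpan.FromMinutes(25)))
                {
                    // call the XOAP feed and update tblweather as shown in the article
                }
            }
        }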

    Read the article

  • Prepping a conference

    - by Laurent Bugnion
    I have had the chance to talk at many conferences these past few years, and came up with a way to prepare for them which works really well for me. Most importantly, it makes it quite easy to overcome an emergency (for example if my laptop were to suddenly lose data). The whole code as well as the slides and other documents are in the cloud. I also use source control for my demos, so that I always have the latest and the greatest, but also a history of changes I made to my demos. Finally I have a system of code snippets which works great, and I have often had very positive remarks from the audience regarding that.

    Putting everything in the cloud
    The one thing I used to be the most scared of was a sudden crash of my laptop, and being unable to restore in time for a conference. Most conferences ask speakers to send slides a few days (or weeks…) in advance, but let's face it, we all have last minute changes to our talks and I always come to the conference with updated slides that I pass to the management team. The answer to that dilemma used to be working off memory sticks, and that worked not bad. However last year I started putting all the documents relating to a conference in a DropBox folder, and that works great too. Obviously DropBox works only if you have connectivity, so if I for instance update slides while on an international flight, I cannot save to the cloud. The obvious answer to that is to back everything up on a memory stick… but I have to admit, I have been trusting my luck and working off my laptop HD and then synching everything to the cloud after landing. Of course on some US national flights you get WiFi on board, so in that case it is even simpler :) Usually after the conference is done, I remove the files from DropBox and copy them to their "final destination". They are backed up from there to BackBlaze, the great online backup service I am using routinely (I currently have about 90GB of data in BackBlaze).

    Outlining the presentations
    I like to have a written outline of my presentations somewhere. I keep it simple, just the various sections of the presentation with timing. I guess it is a remnant of the time when I was a private pilot, using checklists for flight preparation. For example:

        Demo about designability 15' (0:37)
        - Switch to Blend
        - Open MainPage.xaml
        - Create a DataTemplate
        - ...

    Here I can immediately see during the presentation if I am taking too much time for my demo (0:37 is where I need to be when I am done with this section of the presentation, and 15' is the time that this particular section takes). I keep these sections reasonable, I don't detail every step of the preparation. Typically I have one such section for every 10-15 minutes of my talks. Yes, I am timing my presentations. I keep adjusting these numbers when I rehearse, and this really helps me feel more confident during the presentations. This is especially important for presentations that are long, like my MIX11 demo which clocked in at 57 minutes (I had a lot of stuff to show…). Such presentations are risky, because if anything goes wrong, you will have to cut stuff, so the answer to that is: rehearse, rehearse, and when you're done rehearsing, rehearse a little more. I also have a "Preparation" section where I outline what I need to do before a presentation. For instance:

        Preparation
        - Reboot in VHD
        - Make sure MSN and Twitter are not running.
        - Open VS10 and load demo
        - Open Blend and load demo
        - Run the WP7 emulator
        - ...
I typically start preparing my laptop an hour before the talk, starting everything I need to start and then putting my laptop to sleep. Saving and printing the outline, Timing Printing is a real problem because it is really hard to find a printer at most conference venues, and also quite hard in hotels. To solve that, I simply write everything in OneNote (synched to the cloud, now you start to know what I like ;) and then I print it to a PDF (I use CutePDFWriter) that I save to my Kindle. During the presentation, I read the outline off the Kindle (I mostly just need a quick check to see how I am timing). For timing during the presentation, I use the free tool ChronoGPS on my Windows Phone 7, but of course any phone these days has a clock/chrono application. In some conferences, they even have timers that the presenters can see, but they tend to count down and I prefer to count up… so I just use my own :) Source control for demos For demos, I create a separate folder and use Mercurial as source control. Mercurial has the huge advantage (over SVN or TFS) to work offline too, so I can commit while on a plane, and all the history is saved. Then when I have connectivity I push everything to the cloud (I am using the fantastic Trunksapp.com for my private repositories). Here too the obvious downside is the risk of losing my last changes if my laptop crashes before I can push to the cloud, and here too the obvious answer would be to work from a memory stick… though I have to admit I didn't do that lately (except when I was writing Silverlight 4 Unleashed, where I was really paranoid…) And code snippets? I am one of these presenters who hates to type in front of an audience. I can type really fast (writing two books has this advantage, it really teaches you to touch type and be fast at it) but in the context of an audience, on a stage where it is often damn cold (an issue I had a lot in past conferences, air conditioning can freeze your fingers and make it really hard to type), it doesn't work as well. I don't know for you, but I really dislike seeing a presentation where the speaker uses the backspace key more often than others ;) To solve that, I like to have my code ready in snippets, and drag them to the screen. Then I can spend time explaining each code snippet, while highlighting portions of the code (always highlight what you talk about, the audience often doesn't even see the cursor and doesn't know where you are on the screen!) Over the years I have used various solutions for code snippets, and now I have one which works really well… if you take a few precautions! I use the Visual Studio Toolbox. Preparing the code snippets You can store code snippets in the Toolbox for anything, XAML, C# etc. I arrange the snippets in the order in which I need them, which is a great way to remember what comes next in the presentation. I also separate them by topic, to make it easier to find them, for example when I switch to the slides and then back to the code. Remember that no matter how experienced you are, you will feel more nervous on stage than while you are preparing, so any way to make it easier for you is going to be beneficial to the audience. To store a code snippet, I do the following: Open the final demo that you want to show to the audience in Visual Studio. In your code, select a snippet of code that you want to explain in particular. Make sure that the Visual Studio Toolbox is open (menu View, Toolbox or Ctrl-Alt-X). Drag the selected snippet from the code window to the toolbox. 
    And code snippets?

    I am one of those presenters who hates to type in front of an audience. I can type really fast (writing two books has this advantage: it really teaches you to touch type and be fast at it), but in front of an audience, on a stage where it is often quite cold (an issue I have had a lot at past conferences; air conditioning can freeze your fingers and make it really hard to type), it doesn't work as well. I don't know about you, but I really dislike seeing a presentation where the speaker uses the backspace key more often than the others ;) To solve that, I like to have my code ready in snippets, and drag them to the screen. Then I can spend time explaining each code snippet while highlighting portions of the code (always highlight what you talk about; the audience often doesn't even see the cursor and doesn't know where you are on the screen!). Over the years I have used various solutions for code snippets, and now I have one which works really well… if you take a few precautions! I use the Visual Studio Toolbox.

    Preparing the code snippets

    You can store code snippets in the Toolbox for anything: XAML, C#, etc. I arrange the snippets in the order in which I need them, which is a great way to remember what comes next in the presentation. I also separate them by topic, to make it easier to find them, for example when I switch to the slides and then back to the code. Remember that no matter how experienced you are, you will feel more nervous on stage than while you are preparing, so anything that makes it easier for you is going to be beneficial to the audience. To store a code snippet, I do the following:

    1. Open the final demo that you want to show to the audience in Visual Studio.
    2. In your code, select a snippet of code that you want to explain in particular.
    3. Make sure that the Visual Studio Toolbox is open (menu View, Toolbox, or Ctrl-Alt-X).
    4. Drag the selected snippet from the code window to the Toolbox.
    5. If needed, drag the snippet to the correct location (for example between two other code snippets, so that you can access it as you speak through the demo).
    6. Right-click the snippet and select Rename Item from the context menu. Choose a meaningful name. I use the following conventions:
       - If it is a method, I use the method's name.
       - If it is not a whole method, I use a descriptive name.
       - If it is the content of a method (i.e. the body only, without the method's signature), I use "-> MethodName". This reminds me during the presentation that this is only the body, and that I need to insert it into an existing signature. This is the case, for instance, when I use Visual Studio to automatically generate the members of an interface's implementation; then I only need to insert my snippet inside the generated method body.

    Saving the snippets

    This is the most important part!! It has happened to me a few times that VS10 lost its settings. When that happens, the snippets are lost too! Yeah, that really sucks, especially (as happened once) when it occurs about an hour before a talk… Stress and sweat follow, not good conditions in which to start a talk in front of an audience, believe me. Thankfully, saving the snippets is really easy with the following steps:

    1. Select the menu Tools, Import and Export Settings.
    2. Select Export selected environment settings and press Next.
    3. Uncheck All Settings, then expand General Settings and select Toolbox (only!). Press Next.
    4. Select your source control folder and save under a meaningful name (for instance Snippets.vssettings).
    5. Commit to source control and push to the cloud.

    By the way, this also has the advantage of applying source control to the snippets file (which is an XML file), so you get history for free on that file!

    Reimporting the snippets

    If VS loses its settings and you need to reimport the snippets, this can be done super easily and very fast:

    1. Make sure that the Toolbox is empty. When you import snippets, they are merged with the existing ones; they do not replace the content of the Toolbox. Unless merging is really what you want, make sure that your Toolbox is clean before you import; it is really easier.
    2. Select the menu Tools, Import and Export Settings.
    3. Select Import selected environment settings and press Next.
    4. Select No, just import new settings and press Next.
    5. Press Browse, select the Snippets.vssettings file and press Finish.

    Et voilà, all your snippets appear again in the Toolbox. Whew, the worst was averted and you can start your demo without sweating! (I had to do that once literally 5 minutes before the start of a demo, while my laptop was already hooked to the projector, and it went just fine.)

    What about special tools?

    When using special tools (for example beta versions of tools you have early access to), or a special configuration of your laptop, things can get tricky because you cannot really be sure that you will get a laptop with the same tools and the same configuration at the conference. To solve that, I take the following precautions. I make my demos from a Virtual Hard Disk (VHD). The great John Papa made a very easy-to-follow web page where he explains how to create a VHD and install Win7 on it. This gives you the full power of your laptop (as fast as booting from the metal). I have a basic configuration saved on a USB hard drive (Win7 plus drivers, basic settings for desktop, folder options, taskbar, etc.) with Visual Studio 2010 SP1 on it. When preparing, I start by copying this "basis VHD" to my laptop, install the additional tools and configurations, and then save the VHD back to the USB hard drive in a different folder. This allows me to reinstall my demo environment quite fast, for example in case of hard drive failure: replace the hard drive, copy the VHD to it, configure the BCD and you can start.
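    For reference, a minimal sketch of that BCD step on Windows 7 could look like the commands below, assuming the VHD was copied to C:\VHD\demo.vhd (a hypothetical path); {guid} stands for the GUID printed by the copy command.

        rem Create a new boot entry as a copy of the current one (this prints the new entry's GUID)
        bcdedit /copy {current} /d "Demo environment (VHD)"
        rem Point the new entry at the VHD (replace {guid} with the GUID printed above)
        bcdedit /set {guid} device vhd=[C:]\VHD\demo.vhd
        bcdedit /set {guid} osdevice vhd=[C:]\VHD\demo.vhd
        rem Let Windows redetect the hardware abstraction layer when the VHD boots
        bcdedit /set {guid} detecthal on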
    Unfortunately, all of this only works if the laptop itself still works. In the worst case of a total failure, my safety net is to back all the installers up: the installers I use are synced across all my laptops and backed up to BackBlaze. If the worst happens and my laptop is absolutely broken, I can download the installers from BackBlaze and install on another laptop. This of course takes some time, and if that happens 5 minutes before a presentation, well… I don't have an answer to that, except of course crossing my fingers. Still, all of that gives me additional security.

    Conclusion

    Remember folks, talking to an audience, large or small, will make you nervous. Just ask Scott Hanselman :) The goal here is to create the best possible conditions for you, and to create an environment where everything is saved and easy to restore, where everything is well known and gives you additional confidence. The cooler you feel before the presentation (and during ;)), the better your presentation will be. Here too, the goal is to provide the best user experience you can have, which in turn will make it more enjoyable for your audience! Happy presenting :)

    Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • How to disable/enable network, switch to Wifi in Android emulator?

    - by medicdave
    I'm working on a Push Notifications library for Android (http://deaconproject.org/) that needs to take action if network connectivity is interrupted or changed - namely, it needs to re-initiate a server connection or pause its operation until network connectivity is available. This seems to work fine using an Android BroadcastReceiver for "android.net.ConnectivityManager.CONNECTIVITY_ACTION". My problem is in testing the library - I would like to automatically test the library's response to a broken network connection, or a transition from 3G to WiFi, under various configuration conditions. The problem is, I don't want to sit with the emulator and hit F8 all day. Is there a way to programmatically manipulate network connections on Android from within a JUnit test without resorting to toggling Airplane Mode? I've already tried issuing commands to the emulator via the console, manipulating the GSM mode, etc., but while the phone state changes on the display, the Internet connection remains up.
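    For reference, the emulator console commands the question alludes to look roughly like this; this is only a sketch, run from the host machine, and the port 5554 (the default console port of the first emulator instance) is an assumption about the setup:

        telnet localhost 5554      # connect to the emulator console from the host machine
        gsm data off               # simulate losing the data connection
        gsm data on                # restore it
        network speed gsm          # throttle the virtual network to GSM speeds
        network delay umts         # simulate UMTS-class latency

    Since the console is a plain TCP protocol, a host-side test could in principle drive these commands over a socket to the console port instead of typing them by hand.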

    Read the article

  • Software Engineering undergraduate project ideas

    - by Nasser Hajloo
    There were similar posts at << Computer science undergraduate project ideas << Ideas for Software Engineering Thesis Project << Senior computer engineering project ideas? << Final Year Project (Software Engineering) Idea. I read all of them, but what I'm looking for didn't fit those. Actually, I'm looking for ideas which:
    1 - Help me extend the functionality of open source software (like creating a useful add-in)
    2 - Let me write a scientific paper (ideas that could lead to publishing a scientific paper)
    3 - Or create a unique and useful application from scratch (like a performance tool, profiler, analyzer or other similar tools)
    I know C#, ASP.NET and SQL. So with all these conditions, what do you think is best to do? Let me know your ideas, whatever they are. Any idea is appreciated.

    Read the article

  • Help Understanding Enumerable.Join Method

    - by lush
    Yesterday I posted this question regarding using lambdas inside a Join() method to check if two conditions exist across two entities. I received an answer on the question, which worked perfectly. I thought that after reading the MSDN article on the Enumerable.Join() method I'd understand exactly what was happening, but I don't. Could someone help me understand what's going on in the code below (the Join() method specifically)? Thanks in advance.

    if (db.TableA.Where(a => a.UserID == currentUser)
                 .Join(db.TableB.Where(b => b.MyField == someValue),
                       o => o.someFieldID,
                       i => i.someFieldID,
                       (o, i) => o)
                 .Any())
    {
        //...
    }
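    To illustrate what the Join() call is doing, here is a small, self-contained sketch; RowA and RowB are hypothetical stand-ins for the TableA and TableB entities, and the in-memory arrays replace the database tables. Join() takes four arguments: the inner sequence, a key selector for the outer elements, a key selector for the inner elements, and a result selector invoked for each pair whose keys match; Any() then simply asks whether at least one such pair was produced.

        using System;
        using System.Linq;

        class RowA { public int UserID; public int SomeFieldID; }
        class RowB { public int SomeFieldID; public string MyField; }

        class JoinDemo
        {
            static void Main()
            {
                var tableA = new[]
                {
                    new RowA { UserID = 1, SomeFieldID = 10 },
                    new RowA { UserID = 1, SomeFieldID = 20 },
                    new RowA { UserID = 2, SomeFieldID = 10 }
                };
                var tableB = new[]
                {
                    new RowB { SomeFieldID = 10, MyField = "someValue" },
                    new RowB { SomeFieldID = 30, MyField = "other" }
                };

                int currentUser = 1;
                string someValue = "someValue";

                bool any = tableA
                    .Where(a => a.UserID == currentUser)             // outer sequence: filtered "TableA"
                    .Join(tableB.Where(b => b.MyField == someValue), // inner sequence: filtered "TableB"
                          o => o.SomeFieldID,                        // key taken from each outer element
                          i => i.SomeFieldID,                        // key taken from each inner element
                          (o, i) => o)                               // result produced for each matching pair
                    .Any();                                          // true if at least one pair matched

                Console.WriteLine(any); // True: the RowA with SomeFieldID 10 matches the RowB with SomeFieldID 10
            }
        }

    In query syntax the same thing reads as "from a in ... join b in ... on a.SomeFieldID equals b.SomeFieldID select a", which some people find easier to follow.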

    Read the article
