Search Results

Search found 5312 results on 213 pages for 'hand e food'.


  • Fitting it together, database, reporting, applications in C#

    - by alvonellos
    Introduction

    Preamble

    I was hesitant to post this, since it's an application whose intricate details are defined elsewhere, and answers may not be helpful to others. Within the past few weeks (I was actually going to write a blog post about this after I finished) I've discovered that the barrier I'm encountering is one that's actually quite common for newer developers. This question is not so much about a specific thing as it is about piecing those things together. I've searched the internet far and wide, and found many tutorials on how to create applications that are kind of similar to what I'm looking for. I've also looked at hiring another, more experienced, developer to help me along, but all I've gotten are unqualified candidates who don't have the experience necessary and won't take care of the client or project like I will. I'd rather have the project never transpire than release a solution that is half-baked. I've asked professors at my school, but they've not turned up answers to my question. I'm an experienced developer, and I've written many applications that are -- very abstractly -- close to what I'm doing, but my experiences from those applications aren't giving me enough leverage to solve this particular problem. I just hope that posting this isn't a mistake.

    Project Description

    I have a project I'm working on for a client that is a rewrite of an application, originally written in FoxPro 2.6 by someone before me, that performs some analysis (which, sadly, I'm not allowed to disclose as per my employment contract) on financial data. One day, after a long talk between the client and me -- where he intimately described his frustrations with all the bugs I've been hacking out of this code for 6 months now -- he told me to just rewrite it and gave me a month to write a good 1/8 of this 65k LOC FoxPro monstrosity. It'll take me a good 3 - 6 months to rewrite this software (I know things the original programmer did not, like inheritance) going as I am right now, but I'm quickly discovering that I'm going to need to use databases. Prior to this contract I didn't even know about FoxPro, and so I've had to learn it on the fly, write procedures and make modifications to the database. I've actually come to like it, and this project would be rewritten in FoxPro if it were still a supported language, because over the past few months I've come to like the features of FoxPro that make it so easy to develop data-driven applications. I once performed an experiment comparing C# to FoxPro: what took me 45 minutes in C# took me two in FoxPro, and I knew C# prior to FoxPro. I was hoping to leverage the power of C#, but it intimidates me that in FoxPro you can have one line of code and be using a database. Prior to this, I have never done any serious database development from scratch. All the applications that I've written are in a different league. They are either completely data-naive or data-naive enough that I can get away with not using a database, through serialization or by designing algorithms that work with the data in a stateless manner, so there is no need to worry about databases. I've come to realize, very quickly, that serialization and my efficacy with data structures have been my crutch all these years, a crutch that's prevented me from venturing into databases and has consequently hindered my success in real-world programming.
    Sure, I've written some database stuff in Perl and Python, and I've done forms and worked with relational databases and tables. I'm a wizard in Access and Excel (seriously) and can do just about anything, but it just feels unnatural writing SQL code in another language... I don't mind writing SQL; it's that bridge between the database and the program code that drives me absolutely bonkers. I hope I'm not the only one to think this, but it bothers me that I have to create statements like the following: string sSql = "SELECT * from tablename" -- when there's really no reason for that kind of unchecked language binding between two languages and two APIs. Don't get me wrong, SQL is great, but I don't like the idea that, when executing commands on a SQL database, one must intermix database and application software, and there's no database independence, which means that different versions of different databases can break code. This isn't very nice. The nicest thing about FoxPro is the cohesiveness between programming language and database. It's so easy, and FoxPro makes it easy, because the tool just fits the task. I can see why so many developers have created a career with this language, because it lowered the barrier of entry to the data-driven applications that so many businesses need. It was wonderful. For my purposes today, though, with the demands and need for community support, extensibility, and language features, FoxPro isn't a solution that I feel would be the right tool for the job. I'm also worried about working too heavily with the database, because I've seen data-driven .NET applications have issues with database caches, running out of memory, and objects in the database not being collected (memory leaks). And oh, the queries. Which one, how, and why? There are a plethora of different ways that a database can be set up; I think I counted 5 or 6 different kinds of database applications alone that I can choose from. That is a great mountain for me to climb when I don't even know where to begin when it comes to writing data-driven applications. The problem isn't that I don't know SQL or that I don't know C#. I know both and have worked with both extensively. It's making them work together that's the problem, and it's something I've never done in C# before.

    Reports

    The client likes paper. The data needs to be printed out in a format that is extensible, layered, and easy to use. I have never done reporting before, and so this is a bit of a problem. From the data source comes Crystal Reports, and so there's a dependency on the database, from what I understand.

    Code reuse

    A large part of the design decision that I've gone through so far is to break the task of writing a piece of this software into routines and modular DLLs and so forth, such that much of the code can be reused. For example, when I set up this database, I want to be able to reuse the same database code over and over again. I also want to make sure that when the day comes that another developer is here, he or she will be able to pick up right where I left off. The quicker I develop these applications, the better off I am.

    Tasks & Goals

    In my project, I need to write routines that apply algorithms and look for predefined patterns in financial data. Additionally, I need to simulate trading based on predefined algorithms and data. Then I need to prepare reports on that data.
    Additionally, I need to have a way to change the code base for this application quickly and effectively, without hacking together some band-aid solution for a problem that really needs a trauma ward.

    Special Considerations

    The solution must be fast, run quickly on existing hardware, and not be too much of a pain to maintain and write. I understand that anything I write I'm married to -- I'm responsible for the things that I write, because my reputation and livelihood are dependent on it.

    Do I really need a database? What about performance?

    Performance was such a big issue that I hand-wrote a data structure that is capable of performing 2 billion operations, using a total of 4 gigs of memory, in under 1/4 of a second on a standard Core 2 Duo processor. I could not find a similar, pre-written data structure in C# to perform this task.

    What setup do I use in terms of database? What about reporting?

    I'd prefer to have PDFs generated, but I'd like to be able to visually sketch those reports and then just have a ReportFactory of some sort that, when I pass some variables in, just produces the report from that data.

    About Me

    I'm a lone developer for a small business in this area. This is the first time I've done this, and I've never had the breadth and depth of my knowledge tested. I'm incredibly frustrated with this project because I feel incredibly overwhelmed by the task at hand. I'm looking for that entry-level point where I can draw a line and say "this is what I need to do."

    Conclusion

    I may not have been clear enough in my post. I'm still new to this whole thing, and I've been doing my best to contribute back to the community that I've leeched so much knowledge from. I'd be glad to edit my post and add more information if possible. I'm looking for a big-picture solution or design process that helps me get off the ground in this world of data-driven applications, because I have a feeling that it's going to be central to my entire career as a programmer for some time. Specifically, if you didn't get it from the rest of the post (I may not have been clear enough), I really need some guidance as to where to go in terms of the design decisions for this project. Something that would be useful is a pro/con list of the different kinds of database projects available in VS2010. I've tried, but generating that list has been as hard as solving the problem itself... If you had to walk a developer through writing a data-driven application for the first time in C#, how would you do it? Where would you point them?
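    For readers new to the C#-to-database bridge the question keeps circling, here is a minimal, hedged sketch (not the poster's actual project) of the kind of small, reusable data-access routine being described: the SQL text, the parameters, and the mapping to objects all live in one place instead of being scattered through the application. The table, column, and connection-string details are invented purely for illustration.

        // Hedged sketch only: table, columns, and connection string are made up.
        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        public static class PriceRepository
        {
            // In a real application this would come from configuration, not a constant.
            private const string ConnectionString =
                @"Server=.\SQLEXPRESS;Database=Analysis;Integrated Security=true";

            // One reusable routine: callers never build SQL strings themselves.
            public static List<decimal> GetClosingPrices(string symbol)
            {
                var prices = new List<decimal>();
                using (var connection = new SqlConnection(ConnectionString))
                using (var command = new SqlCommand(
                    "SELECT ClosePrice FROM PriceBar WHERE Symbol = @symbol", connection))
                {
                    // Parameters keep values out of the SQL text and let the provider handle types.
                    command.Parameters.AddWithValue("@symbol", symbol);
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                            prices.Add(reader.GetDecimal(0));
                    }
                }
                return prices;
            }
        }

    Whether a routine like that ends up hand-rolled, generated by a typed DataSet, or replaced by an ORM is exactly the design decision the question is asking about.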

    Read the article

  • Data Profiling without SSIS

    Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I’ve encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle? With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, and anyway most of my target source systems were Oracle, and of course I had short timescales. I looked at several options. Some never got beyond the download stage, they failed to install or just did not run, and others provided less than I could have produced myself by spending 2 minutes writing some basic SQL queries. In the end I settled on an open source product called DataCleaner. To quote from their website: DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL. As quoted above it claims to support profiling, validating and comparing data, but I didn’t really get past the profiling functions, so won’t comment on the other two. The profiling whilst not prefect certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, and I found it overall quite easy to use and performance was good. I could see it struggling at times, but actually for what it does I was impressed. It had some data type niggles with Oracle, and some metrics seem a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture. The report export options didn’t do it for me, but copy and paste with a bit of Excel magic was sufficient. One initial point for me personally is that I have had limited exposure to things of the Java persuasion and whilst I normally get by fine, sometimes the simplest things can throw me. For example installing a JDBC driver, why do I have to copy files to make it all work, has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I’ve written a quick start guide that details every step required. Steps 1- 5 are the key ones, the rest is really an excuse for some screenshots to show you the tool. Quick Start Guide Step 1  - Download Data Cleaner. The Microsoft Windows zipped exe option, and I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location. Step 2 - Download Java. If you try and run datacleaner.exe without Java it will warn you, and then open your default browser and take you to the Java download site. Follow the installation instructions from there, normally just click Download Java a couple of times and you’re done. 
    Step 3 - Download Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won't have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer, we are in the Java world here, but run the exe you downloaded to extract the files. The default Unzip to folder is not much help, so try a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards. Step 4 - If you wish to use Windows Authentication to connect to your SQL Server then first we need to copy a file so that Data Cleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the Data Cleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the Data Cleaner extract folder as detailed in this Data Cleaner wiki topic, but I find copying the file simpler. Step 5 – Now let's run Data Cleaner: just run datacleaner.exe from the extract folder you chose in Step 1. Step 6 – Complete or skip the registration screen, and ignore the task window for now. In the main window click Settings. Step 7 – In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option. Step 8 – Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar. Step 9 – Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button. Step 10 - You should be back at the Settings dialog with the list of drivers now including SQL Server. Just click Save Settings to persist all your hard work. Step 11 – Now we can start to profile some data. In the main Data Cleaner window click New Task, and then Profile from the task window. Step 12 – In the Profile window click Open Database. Step 13 – Now choose the SQL Server connection string option. Selecting a connection string gives us a template like jdbc:sqlserver://<hostname>:1433;databaseName=<database>, but obviously it requires some details to be entered, for example jdbc:sqlserver://localhost:1433;databaseName=SQLBits. This will connect to the database called SQLBits on my local machine. The port may also have to be changed, such as when you have multiple instances of SQL Server running. If using SQL Server Authentication, enter a username and password as required and then click Connect to database. You can use Windows Authentication too: just add integratedSecurity=true to the end of your connection string, e.g. jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true. If you didn't complete Step 4 above you will need to do so now and restart Data Cleaner before it will work. Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. As highlighted in the left-hand screenshot, the bottom of the dialog includes partial instructions on how to create named connections. In the folder shown, C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice and add your own details.
You’ll see a sample connection in the file already, just add yours following the same pattern. e.g. <!-- Darren's Named Connections --> <bean class="dk.eobjects.datacleaner.gui.model.NamedConnection"> <property name="name" value="SQLBits Local Connection" /> <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" /> <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" /> <property name="tableTypes"> <list> <value>TABLE</value> <value>VIEW</value> </list> </property> </bean> Step 14 – Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile, and choose some profiling options, before finally clicking Run profiling. You can see below a sample output for three of the most common profiles, click the image for full size.   I hope this has given you a taster for DataCleaner, and should help you get up and running pretty quickly.

    Read the article

  • Ubuntu 12.04 LTS Wireless Asus USB-N53 (rt3572sta) driver installation issue

    - by Jake Thompson
    My purchase of the Asus USB-N53 just came in today and I spent several hours Googling and researching drivers for this device. When I first plugged the device in it connected fine to my open system, WEP, DHCP configured access point. I opened Google Chrome and a few pages loaded; everything seemed fine. 30 seconds later... Boom! It disconnected, showed attempts to reconnect, asked for the WEP key, and then just sat in an infinite connecting state until it asked me for the password again. I'm using amd64 (64-bit Ubuntu desktop 12.04 LTS). The official driver can be found here, although I had no luck with it.
    lsusb:
    Bus 003 Device 002: ID 0b05:179d ASUSTek Computer, Inc.
    uname -a
    Linux Jake 3.2.0-31-generic #50-Ubuntu SMP Fri Sep 7 16:16:45 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    ----------------------------------------------------------------------------------
    Solved: I must have done something wrong when I originally installed the latest drivers from the chipset manufacturer's website. I tried reinstalling, ran modprobe rt3572sta, waited maybe 10 minutes (???), and it connected; then I rebooted and everything seems to be working so far. What I did beforehand was unplug the device and type into the terminal (once for every source I had attempted to install):
    cd '<directory of the driver source>'
    make uninstall
    make clean
    Then I went into the 2.5.0.0 directory and installed that with
    make
    make install
    Then I typed
    modprobe rt3572sta
    This was all as superuser. For those who don't know: sudo su

    Read the article

  • Thread Synchronization and Synchronization Primitives

    When considering synchronization in an application, the decision truly depends on what the application and its worker threads are going to do. I would use synchronization if two or more threads could possibly manipulate the same instance of an object at the same time. An example of this in C# can be demonstrated through storing data in a static object. A static object is initialized once per application and the data within the object can be accessed by all threads. I would use the synchronization primitives to prevent any data from being manipulated by multiple threads simultaneously. This would prevent data corruption from occurring within the object. On the other hand, if all the threads used non-static objects and were independent of the other tasks, there would be no need to use synchronization. Synchronization primitives in C#: Basic Blocking, Locking, Signaling, and Non-Blocking Synchronization Constructs. The Basic Blocking methods include Sleep, Join, and Task.Wait. These methods force threads to wait until other threads have completed. In addition, these methods can also force a thread to wait a set amount of time before continuing to work. The Locking primitive prevents a thread from entering a critical section of code while another thread is in the same critical section. If another thread attempts to enter a locked code block, it will wait until the block is released. The Signaling primitive allows a thread to temporarily pause work until it receives a notification from another thread that it is OK to continue working. The Signaling primitive removes the need for polling. The Non-Blocking Synchronization Constructs protect access to a common field by calling upon processor primitives.
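    A small self-contained sketch (class and field names invented for the example) showing three of the primitives mentioned above working together: a lock protecting a shared static field, a ManualResetEvent for signaling, and Join for basic blocking.

        using System;
        using System.Threading;

        class SharedCounterDemo
        {
            private static int _total;                                        // static data visible to all threads
            private static readonly object _sync = new object();              // lock target
            private static readonly ManualResetEvent _go = new ManualResetEvent(false); // signaling

            static void Main()
            {
                var workers = new Thread[4];
                for (int i = 0; i < workers.Length; i++)
                {
                    workers[i] = new Thread(() =>
                    {
                        _go.WaitOne();                 // signaling: wait until released
                        for (int n = 0; n < 1000; n++)
                        {
                            lock (_sync)               // locking: one thread at a time in the critical section
                            {
                                _total++;
                            }
                        }
                    });
                    workers[i].Start();
                }

                _go.Set();                             // release every worker at once
                foreach (var worker in workers)
                    worker.Join();                     // basic blocking: wait for completion

                Console.WriteLine(_total);             // always 4000; without the lock it could be less
            }
        }

    Swapping the lock for Interlocked.Increment(ref _total) would be the non-blocking variant mentioned last.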

    Read the article

  • USB software protection dongle for Java with an SDK which is cross-platform “for real”. Does it exist?

    - by Unai Vivi
    What I'd like to ask is if anybody knows of a hardware USB dongle for software protection which offers very complete out-of-the-box API support for cross-platform Java deployments. Its SDK should provide a jar (only one, not one different library per OS & bitness) ready to be added to one's project as a library. The jar should contain all the native stuff for the various OSes and bitnesses. From the application's point of view, one should continue to write (API calls) once and run everywhere, without having to care where the end-user will run the software. The provided jar should itself deal with loading the appropriate native library. Does such a thing exist? With what I've tried so far, you have different APIs and compiled libraries for win32, linux32, win64, linux64, etc (or you even have to compile stuff yourself on the target machine), but hey, we're doing Java here, we don't know (and don't care) where the program will run! And we can't expect the end-user to be a software engineer, tweak (and break!) his linux server, link libraries, mess with gcc, litter the filesystem, etc... In general, Java support (in a transparent cross-platform fashion) is quite bad with the dongle SDKs I've evaluated so far (e.g. KeyLok and SecuTech's UniKey). I even purchased (no free evaluation kit available) SecureMetric SDKs & dongles (they should've been "soooo" straightforward to integrate -- according to marketing material :\ ) and they were the worst ever: SecureDongle X has no 64-bit support and SecureDongle SD is not cross-platform at all. So, has anyone out there been through this and found the ultimate Java security USB dongle for cross-platform deployments? Note: the software is low-volume, high-value; the application is off-line (intranet with no internet access), so no online-activation alternatives and the like. -- EDIT Tried out HASP dongles (used to be called "Aladdin"), and added them to the no-no list: here, too, there is no out-of-the-box (out-of-the-jar) support: e.g. the end Linux user has to manually put the .so library (the specific file for the appropriate bitness) in the right place on his filesystem, and export an env. variable accordingly. -- EDIT 2 I really don't understand all the negativity and all the downvoting: is this a taboo topic? Is it so hard to understand that a freelance developer has to put food on the table every day to feed his family and pay the bills at the end of the month? Please don't talk about "adding value" as a supplier, because that'd be off-topic. Furthermore I'm not in direct contact with end-customers, but there's an intermediate reselling entity: it's this entity I want to prevent from selling copies of the software without sharing the revenue. -- EDIT 3 I'd like to emphasize the fact that the question is looking for a technical answer, not one about opinions concerning business models, philosophical lucubrations on the concept of value, resellers' reliability, etc. I cannot change resellers, because this isn't a "general purpose" kind of sw, but a very vertical one and (for reasons it's not worth explaining here) I must go through them. I just need to prevent the "we sold 2 copies, here's your share [bwahaha we sold 10]" scenario.

    Read the article

  • SmartAssembly Support: How to change the maps folder

    - by Bart Read
    If you've set up SmartAssembly to store error reports in a SQL Server database, you'll also have specified a folder for the map files that are used to de-obfuscate error reports (see Figure 1). Whilst you can change the database easily enough you can't change the map folder path via the UI - if you click on it, it'll just open the folder in Explorer - but never fear, you can change it manually and fortunately it's not that difficult. (If you want to get to these settings click the Tools > Options link on the left-hand side of the SmartAssembly main window.)   Figure 1. Error reports database settings in SmartAssembly. The folder path is actually stored in the database, so you just need to open up SQL Server Management Studio, connect to the SQL Server where your error reports database is stored, then open a new query on the SmartAssembly database by right-clicking on it in the Object Explorer, then clicking New Query (see figure 2).     Figure 2. Opening a new query against the SmartAssembly error reports database in SQL Server. Now execute the following SQL query in the new query window: SELECT * FROM dbo.Information You should find that you get a result set rather like that shown in figure 3. You can see that the map folder path is stored in the MapFolderNetworkPath column.   Figure 3. Contents of the dbo.Information table, showing the map folder path I set in SmartAssembly. All I need to do to change this is execute the following SQL: UPDATE dbo.Information SET MapFolderNetworkPath = '\\UNCPATHTONEWFOLDER' WHERE MapFolderNetworkPath = '\\dev-ltbart\SAMaps' This will change the map folder path to whatever I supply in the SET clause. Once you've done this, you can verify the change by executing the following again: SELECT * FROM dbo.Information You should find the result set contains the new path you've set.
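    If you'd rather script the change than open Management Studio, the same UPDATE can be run from a few lines of C#. This is only a sketch: the connection string, database name, and UNC paths below are placeholders for your own values.

        using System;
        using System.Data.SqlClient;

        class UpdateMapFolder
        {
            static void Main()
            {
                // Placeholder: point this at the server and database holding your error reports.
                const string connectionString =
                    @"Server=.\SQLEXPRESS;Database=SmartAssembly;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "UPDATE dbo.Information SET MapFolderNetworkPath = @newPath " +
                    "WHERE MapFolderNetworkPath = @oldPath", connection))
                {
                    command.Parameters.AddWithValue("@newPath", @"\\UNCPATHTONEWFOLDER"); // placeholder new folder
                    command.Parameters.AddWithValue("@oldPath", @"\\dev-ltbart\SAMaps");  // current value from above
                    connection.Open();
                    Console.WriteLine("{0} row(s) updated.", command.ExecuteNonQuery());
                }
            }
        }

    Either way, the SELECT shown above is the easiest check that the new path has been saved.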

    Read the article

  • How to enable Google Drive offline access

    - by Gopinath
    Google’s latest cloud offering Google Drive provides 5GB of free storage to let you store documents, spread sheets, photos and other stuff and access them using a variety of devices – PCs, Macs, smartphones and tablets. You can also set up offline access to Google Drive so that you can access files on the move even if you don’t have access to internet connection. To access Google Drive offline you need Chrome browser and here are the simple steps to be followed for setting up. Step 1:  Login to Google Drive and click the gear icon in the upper right of your window. Step 2: Select Set up Docs offline from the drop-down menu. The “Set up offline viewing of Google Docs” dialog will appear Step 3:  Authorize Google Chrome to store your Google Drive content by clicking on “Allow offline docs” and then install “Docs Chrome web app” by clicking on “Install from Chrome web store”. You’ll be taken to the Chrome web store, where you’ll need to click Install on the right-hand side of the browser window. Step 4: Once the app is installed, you’ll be taken to a Chrome page with the Google Docs app icon. Click the icon to go back to your Documents List. Google Chrome take few minutes to prepare Google Drive for offline access by downloading all the files to your local computer. Once it’s completed, you can access Google Drive files offline. To access files of Google Drive offline point your Chrome browser to drive.google.com. When offline your Google Docs stored on Google Drive are available in view only mode. You can open Google Documents, Spread sheets & Presentations and see the content but you can’t edit them.

    Read the article

  • What markup languages are good for programming articles/tutorials?

    - by Vilx-
    I very much wish to write a programming tutorial in my native language (Latvian). There are far too few of those. I am however unsure on what markup language to use for writing it. Here are a few things I would like to achieve: The same source can be compiled to both HTML for online viewing and printed form (PDF?). In HTML form it would allow superior interaction and appearance (see below), while the print form would look good on paper (layout etc). I have the idea that the tutorial could be multi-language. Different students have different requirements in their schools. For example, some schools teach Java, some teach C#. You could choose the language at the top of the HTML page and the relevant code snippets (and occasionally pieces of text) would swap out. Most of the text is the same anyway, only the language syntax is a bit different. The text would occasionally contain images too of course, and these would need to be included in both the HTML and the printed version. In the HTML version the code snippets should get automatic syntax coloring, which should ideally be the same as in the recommended IDE for the tutorial. In case there are ambiguities, hints for the syntax colorer should be possible, but I don't want to do the whole coloring by hand. "Output" syntax coloring which would emulate a standard 80x25 text console (since many of the initial programs would be console applications). Collapsible sections for answers to questions (aka "spoiler tags"). Automatically generated index/table of contents. Links to other parts of the tutorial (rendered as links in HTML and as references in the print version). "Side note" sections, rendered as separate blocks on the side. Other functions useful in publications that I'm not aware of :) I know this is a bit much to ask, but is there something close enough that I could take it as a starting point and add the necessary features myself? Or is there something in the whole list (like the desire to have both HTML and print versions from the same source) that makes it all fundamentally infeasible?

    Read the article

  • Stupid Geek Tricks: Compare Your Browser’s Memory Usage with Google Chrome

    - by The Geek
    Ever tried to figure out exactly how much memory Google Chrome or Internet Explorer is using? Since they each show up a bunch of times in Task Manager, it’s not so easy! Here’s the quick and easy way to compare them. Both Chrome and IE use multiple processes to isolate tabs from each other, to make sure that one tab doesn’t kill the whole browser. Firefox, on the other hand, just uses a single process for everything. Rather than pulling out a calculator and adding them all up, you can just open up Google Chrome, and type in about:memory into the location bar to see a full list of each browser’s memory usage. On my test system with 6 GB of system RAM, I’m running the Development channel version of Chrome, and I’ve got about 40 different tabs open, which is why the memory usage is so high. Firefox has 8 tabs open, and IE is enjoying being opened for the first time in forever. Want to help cut down on memory usage and keep your Chrome browser running fast? Disable all unnecessary extensions, and then make sure you disable any plug-ins that you don’t need either.

    Read the article

  • Don’t learn SSDT, learn about your databases instead

    - by jamiet
    Last Thursday I presented my session “Introduction to SSDT” at the SQL Supper event held at the offices of 7 Digital (loved the samosas, guys). I did my usual spiel, tour of the IDE, connected development, declarative database development yadda yadda yadda… and at the end asked if there were any questions. One gentleman in attendance (sorry, can’t remember your name) raised his hand and stated that by attempting to evangelise all of the features I’d missed the single biggest benefit of SSDT: that it can tell you stuff about your database that you didn’t already know. I realised that he was dead right. SSDT allows you to import your whole database schema into a new project and it will instantly give you a list of errors and/or warnings pertaining to the objects in your database. Invalid references (e.g. a long-forgotten stored procedure that refers to a non-existent column), unnecessary 3-part naming, incorrect case usage, syntax errors… it’ll tell you about all of ‘em! Turn on static code analysis (this article shows you how) and you’ll learn even more, such as any stored procedures that begin with “sp_”, WHERE clauses that will kill performance, use of @@IDENTITY instead of SCOPE_IDENTITY(), use of deprecated syntax, implicit casts etc… the list goes on and on. I urge you to download and install SSDT (it takes a few minutes, it’s free and you don’t need SQL Server or Visual Studio pre-installed), start a new project: right-click on your new project and import from your database: and see what happens: You may be surprised what you discover. Let me know in the comments below what results you get, total number of objects, number of errors/warnings, I’d be interested to know! @Jamiet

    Read the article

  • Does software architect/designer require more skills and intellectual than software engineer (implementation)?

    - by Amumu
    So I heard that the positions for designing software and writing specs for developers to implement are higher positions and get paid more. I think many companies use the Software Engineering title for the person who implements the software, which means using tools and technologies to write the actual code. I know that in order to be a software architect, one needs to be good at implementation in order to have an architectural overview of a system using a set of specific technologies. This is different from how I thought of a Software Engineer. My thinking is similar to the IEEE standard: a software engineer is an engineer who is capable of going from requirements analysis all the way until the software is deployed, based on the SWEBOK (IEEE). Just look at the table of contents. The IEEE even has certificates for Software Engineering, since ABET (Accreditation Board for Engineering and Technology) seems not to have an official qualification test for Software Engineers (although IEEE is a member of ABET). The two certificates are CSDA and CSDP. I intend to take these two examinations in the future to be qualified as a software engineer, although I am already working as one (in a junior position). On a side note, on the issues around the Software Engineer title, you can read the discussion here: Just a Programmer and Just a Software Engineer. The information that ABET does not accredit Software Engineering is in "Just a Software Engineer". On the other hand, why is the Programmer/Software Engineer who writes code considered a low-level position? Suppose two people have equal skills after the same years of experience; one becomes a software architect and one keeps focusing on the implementation aspect of Software Engineering (of course he also has the design skill to compose a system, since he's a software engineer as well, but maybe less than the specialized software architect). How come the work of the Software Engineer is less complicated than that of the Software Architect? Writing great code that turns a design into reality requires far greater skill than just understanding a particular language and a framework. I don't think the ones who wrote and contribute to the Linux OS have a lower-level, easier job than conceptual design and writing specs. Can someone enlighten me?

    Read the article

  • Little PM side post...

    - by edgaralgernon
    When adding new team members... offset the ramp-up time by 1) having pre-built machines ready and an easy method of getting the latest tools, code base, etc. I'm fortunate enough to be at a client that has a machine ready built and loaded when the dev arrives; all they have to do is grab the code. 2) Have tasks broken down so that dependencies are as minimal as possible. In other words, to overcome the mythical man-month issue (as recently mentioned on Slashdot), make sure the tasks you hand out have few dependencies on each other. That way the new dev is able to be productive fairly quickly. Here's our historical lead time... the bump in Jan is due to added work; by 2/18 we had added 4 new people over the last two weeks. And amazingly, the time starts coming down: Here's our average work time: again, time ramps up as we are adding more tasks, but then starts inching back down throughout Feb and March. It's not that we beat the Mythical Man Month, and in fact I still believe the book and idea are highly relevant. But if you can break the tasks down and reduce the dependencies between the tasks, then you can mitigate the effect. The tool used in this case is from AgileZen.com, and some of the wild swings are due to inexperience with the system initially... but our average times as measured by the tool are matching real life. Also, the tool appears to measure in 24-hour days and 7-day weeks, so it isn't as bad as it looks. :-)

    Read the article

  • Pet Store Loyalty Programs: I'm Not Loyal Yet!

    - by ruth.donohue
    After two years of constantly being asked (aka "pestered") by my now eight-year-old daughter for a dog (or any pet that is more interactive than a goldfish), I've finally compromised with a hamster, purely by chance. Friends of ours had recently brought home a female hamster, and (surprise, surprise) two weeks later, they were looking for homes for 11 baby hamster pups. Since the pups were not yet ready to be weaned from their mother, my daughter and I had several weeks to get ready -- and we spent that extra time visiting a number of local pet stores and purchasing an assortment of hamster books, toys, exercise equipment, food, bedding, and a cage -- not cheap! Now, I'm usually an online shopper (i.e. I love reading user reviews and comparing prices), but for kids, there is absolutely no online substitute for actually walking into a store and physically picking out something you want. We have two competing pet shops within close proximity to where we live, and I signed up for their rewards programs to get discounts on select items. I'm sure it takes a while to get my data into the system (after all, I did fill out a form the old-fashioned way), but as it has been more than two weeks for one store and over a week for the other, the window of opportunity is getting smaller, as by now we pretty much have most of what we think we need. Everything I've purchased has been purely hamster or small animal related, so in an ideal world, the stores would have me easily figured out as a hamster owner. Here is what I would be expecting of a loyalty rewards program: Point me to some useful links, either information provided by the company or external websites where I can learn more. Any value-add a business can provide to make my life easier makes me a much more loyal customer. What things can I expect as a new pet owner? Any hamster communities? Any hamster-related events? Any vets that specialize in small animals in the vicinity? Send me an email with other related products I may be interested in. Upsell and cross-sell to me. We've got the basics and a couple of luxuries, but at this point, I'm pretty excited (surprisingly) about the hamster, and my daughter is footing the bill with her birthday and Christmas money. She and I would be more than happy to spend her money! Get this information to me faster. As I mentioned, my window of opportunity is getting smaller, as either my daughter's money will run out on other things or we'll start losing the thrill of buying new hamster toys and treats. I realize this is easier said than done, and undoubtedly, the stores are getting value knowing my basic customer information and purchase history. But they could really benefit by delivering a loyalty program that actually earned my loyalty. "Goldeen" needs a new water bottle, yogurt chips, and chew toys, as he doesn't seem to like the ones we bought. So for now, I'll just go to whichever store is the most convenient. Oh, and just for fun (not related to this post), here are a couple of videos my daughter really got a kick out of watching: Hamster on a Piano Tic in a Spin-Dryer

    Read the article

  • The “Customer” Experience Revolution is Here

    - by Natalia Rachelson
    A guest post by Anthony Lye, SVP, Oracle Development
    The Experience Revolution is here, and we are going to explore and celebrate our new customer experience ventures and strategy in an extraordinary way. In true Oracle fashion, we are hosting an exceptional event, bringing together customer experience advocates, visionaries and practitioners to discover and define Oracle’s Customer Experience vision. The Experience Revolution is best described as today’s era of the empowered consumer. For those of us who work with customers on a daily basis, we know that the modern consumer demands fast, accurate, consistent information across all communication channels. And if they don’t like the service they receive, they can easily take to social channels to voice their disapproval. For this reason, organizations today operate in an environment where traditional methods of differentiation are less effective and customer experience has become the primary driver of business value. Here’s some food for thought: according to the 2011 Customer Experience Impact (CEI) Report, a full 89 percent of consumers will switch brands for a better customer experience. In short, in today’s era of the empowered consumer, delivering excellent customer experiences is what will, and is, defining the next great brands. At The Experience Revolution, Oracle President Mark Hurd will detail the vision of where customer experience is going and how Oracle will help you get there. He will introduce for the first time Oracle Customer Experience, a cross-stack suite of customer experience products that enable organizations to:
    Engage customers with a consistent, connected and personalized brand experience across all channels and devices
    Deliver exceptional cross-channel order fulfillment and customer service through web, call centers and social networks
    Connect and analyze data from all interactions to better personalize experiences and identify hidden opportunities
    The Experience Revolution will also include an interactive gallery of customer experience interactions, featuring videos, touch screens and near field communication technology that will guide each attendee through an individualized event experience. We hope you will join us for an incredible evening on June 25, from 6:00 – 9:00 p.m. at Gotham Hall in New York City. You can register for The Experience Revolution here. And if you haven’t already joined the conversation on Twitter, please do: #OracleCX, #ExperienceRevolution

    Read the article

  • What are the best ways to cope with «one of those days»? [closed]

    - by Júlio Santos
    I work in a fast-paced startup and am absolutely in love with what I do. Still, I wake up to a bad mood as often as the next guy. I find that forcing myself to play out my day as usual doesn't help — in fact, it only makes it worse, possibly ruining my productivity for the rest of the week. There are several ways I can cope with this, for instance: dropping the current task for the day and getting that awesome but low-priority feature in place; doing some pending research for future development (i.e. digging up ruby gems); spending the day reading and educating myself; just taking the day off. The first three items are productive in themselves, and taking the day off recharges my coding mana for the rest of the week. Being a young developer, I'm pretty sure there's a multitude of alternatives that I haven't come across yet. How can programmers cope with off days? Edit: I am looking for answers related specifically to this profession. I therefore believe that coping with off days in our field is fundamentally different that doing so in other areas. Programmers (especially in a start-up) are a unique breed in this context in the sense that they tend to have a multitude of tasks at hand on any given moment, so they can easily switch between these without wreaking too much havoc. Programmers also tend to work based on clear, concise objectives — provided they are well managed either by themselves or a third party — and hence have a great deal of flexibility when it comes to managing their time. Finally, our line of work creates the opportunity — necessity, if you will — to fit a plethora of tasks not directly related to the current one, such as research and staying on top of new releases and software updates.

    Read the article

  • AIIM Best Practice Awards to Two Oracle Customers

    - by [email protected]
    On Tuesday night at the AIIM Awards Banquet, two Oracle customers and their implementation partners won awards for their Oracle Enterprise 2.0 implementations. The Bureau of Indian Affairs, a division of the Department of Interior, won a Carl E. Nelson Best Practices Award for their implementation of Oracle WebCenter and Oracle Content Management to provide an interactive social media environment to engage and inform their constituent communities. The BIA Citizen Portal provides all the services of the Bureau of Indian Affairs to the community of 564 federally recognized tribes that include over 1.9 million American Indians and Alaska Natives. This integration was achieved with the support of Oracle partner Mythics. The Charles Town Police Department integrated Oracle Content Management to integrate with and support their police evidence system. This integration was created in partnership with Oracle partner EDAC Systems Inc. Diane Hoppe of EDAC Systems Inc. was on hand to receive the award for Charles Town Police Department. You can see pictures of our award winners here: Linus Chow, Oracle; John Mancini, President of AIIM; and Diane Hoppe, EDACS - Charles Town Police: John Mancini, President of AIIM; Linus Chow, Oracle; Chris Baker, Mythics; and Bureau of Indian Affairs Oracle, EDACS, Mythics, BIA You can read more in the AIIM press release.

    Read the article

  • My Optimized Adam &amp; Eve

    - by MarkPearl
    Today I had a few minutes in the evening to go over my original Adam and Eve code… what I wanted to see tonight was if I could optimize the code any further… which I was pretty sure could be done. Ultimately what I wanted to find from the experiment was a balance between optimized code and reusable code. On the one hand I could put everything into a single function and end up with a totally unusable function that is extremely compressed, which would come back to bite me when making modifications at a later stage. Alternatively I could have many single-line functions that are extremely loosely coupled but sparsely spaced, and so would almost be too fragmented to grok. Ultimately I found with my current iteration something that I consider readable, yet compressed. Code below…

    // Learn more about F# at http://fsharp.net
    open System

    let people = [ ("Adam", None); ("Eve", None); ("Cain", Some("Adam", "Eve")); ("Abel", Some("Adam", "Eve")) ]

    //
    // Prints the details
    //
    let showDetails(person : string * (string * string) option) =
        let ParentsName =
            let parents = snd(person)
            match parents with
            | Some(dad, mum) -> "Father " + dad + " and Mother " + mum
            | None -> "Has no parents!"
        let result = fst(person) + Environment.NewLine + ParentsName
        result

    //
    // Searches an array of people and looks for a match of names
    //
    let findPerson(name : string, people : (string * (string * string) option) list) =
        // Try and find a match of the name
        let o = Seq.tryFind(fun person ->
                    match name with
                    | firstName when firstName = fst(person) -> true
                    | _ -> false) people
        // Show the details based on the match result
        match o with
        | Option.Some(x) -> showDetails(Option.get(o))
        | _ -> "Not Found"

    Console.WriteLine(findPerson("Cains", people))
    Console.ReadLine()

    Read the article

  • Page debugging got easier in UCM 11g

    - by kyle.hatlestad
    UCM is famous for its extra parameters you can add to the URL to do different things. You can add &IsJava=1 to get all of the local data and result set information that comes back from the idc_service. You can add &IsSoap=1 and get back a SOAP message with that information. Or &IsJson=1 will send it in JSON format. There are ones that change the display, like &coreContentOnly=1 which will hide the footer and navigation on the page. In 10g, you could add &ScriptDebugTrace=1 and it would display the list of resources that were called through includes or eval functions at the bottom of the page. And it would list them in nested order so you could see the order in which they were called and which components overrode each other. But in 11g, that parameter flag no longer works. Instead, you get a much more powerful one called &IsPageDebug=1. When you add that to a page, you get a small gray tab at the bottom right-hand part of the browser window. When you click it, it will expand and let you choose several pieces of information to display. You can select 'idocscript trace' and display the nested includes you used to get with ScriptDebugTrace. You can select 'initial binder' and see the local data and result sets coming back from the service, just as you would with IsJava. But in this display, it formats the results in easy-to-read tables (instead of raw HDA format). Then you can get the final binder, which contains all of the local data and result sets after executing all of the includes for the display of the page (and not just from the service call). And then there is a 'javascript log' for reporting on the javascript functions and times being executed on the page. Together, these new data displays make page debugging much easier in 11g. *Note: This post also applies to Universal Records Management (URM).

    Read the article

  • Collision detection with multiple polygons simultaneously

    - by Craig Innes
    I've written a collision system which detects/resolves collisions between a rectangular player and a convex polygon world using the Separating Axis Theorem. This scheme works fine when the player is colliding with a single polygon, but when I try to create a level made up of combinations of these shapes, the player gets "stuck" between shapes when trying to move from one polygon to the other. The reason for this seems to be that collisions are detected after the player has been pushed through the shape by its movement or gravity. When the system resolves the collisions, it does so in an order that doesn't make sense (for example, when the player is moving from one flat rectangle to another, gravity pushes them below the ground, but the collision with the left-hand side of the second block is resolved before the collision with the top of the block, meaning the player is pushed back left before being pushed back up). Other similar posts have resolved this problem by having a strict rule on which axes to resolve first. For example, always resolve the collision on the y axis, then, if the object is still colliding with things, resolve on the x axis. This solution only works in the case of a completely axis-oriented box world, and doesn't solve the problem if the player is stuck moving along a series of angled shapes or sliding down a wall. Does anyone have any ideas on how I could alter my collision system to prevent these situations from happening?
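    For readers unfamiliar with the axis-priority rule mentioned above, here is a small self-contained C# sketch of it (the Box type and the numbers are invented for the example; Y grows downward). As the question points out, this rule only behaves well in an axis-aligned box world -- sloped polygons still need a resolution along the actual collision normal.

        using System;

        struct Box
        {
            public float X, Y, W, H;
            public float CenterX { get { return X + W / 2; } }
            public float CenterY { get { return Y + H / 2; } }
            public float OverlapX(Box o) { return Math.Min(X + W, o.X + o.W) - Math.Max(X, o.X); }
            public float OverlapY(Box o) { return Math.Min(Y + H, o.Y + o.H) - Math.Max(Y, o.Y); }
        }

        class AxisByAxisDemo
        {
            // Push 'player' out of 'solid' along the vertical axis only.
            static void ResolveY(ref Box player, Box solid)
            {
                if (player.OverlapX(solid) <= 0 || player.OverlapY(solid) <= 0) return;
                float push = player.OverlapY(solid);
                player.Y += player.CenterY < solid.CenterY ? -push : push;
            }

            // Push 'player' out of 'solid' along the horizontal axis only.
            static void ResolveX(ref Box player, Box solid)
            {
                if (player.OverlapX(solid) <= 0 || player.OverlapY(solid) <= 0) return;
                float push = player.OverlapX(solid);
                player.X += player.CenterX < solid.CenterX ? -push : push;
            }

            static void Main()
            {
                var player = new Box { X = 15, Y = 82, W = 10, H = 20 };   // sunk 2 units into the floor, straddling two tiles
                var floorA = new Box { X = 0, Y = 100, W = 20, H = 10 };
                var floorB = new Box { X = 20, Y = 100, W = 20, H = 10 };

                // Y pass for every solid first, then the X pass: the player is lifted
                // out of both floor tiles instead of being bounced back sideways.
                ResolveY(ref player, floorA); ResolveY(ref player, floorB);
                ResolveX(ref player, floorA); ResolveX(ref player, floorB);

                Console.WriteLine("Player at {0}, {1}", player.X, player.Y);   // 15, 80
            }
        }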

    Read the article

  • Last GUID used up - new ScottGuID unique ID to replace it

    - by Eilon
    You might have heard in recent news that the last ever GUID was used up. The GUID {FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF} was just consumed by a soon to be released project at Microsoft. Immediately after the GUID's creation the word spread around the Microsoft campuses around the globe. Microsoft's approximately 100,000 worldwide employees then started blogging, tweeting, and facebooking about the dubious "achievement." The following screenshot shows GUIDGEN (the Windows tool for creating GUIDs) with the last ever GUID. All GUIDs created by projects at Microsoft must be registered in a central repository for record keeping. This allows quick-fix engineers, security engineers, anti-malware developers, and testers to do a quick look up of an unknown GUID and find out if it belongs to Microsoft. The following screenshot shows the Microsoft GUID Tracker internal application and the last few GUIDs being used up by various Microsoft projects. What is perhaps more interesting than the news about the GUID is the project that used that last GUID. The recent announcements regarding the development experience for the Windows Phone 7 Series (WP7S) all involve free editions of Visual Studio 2010. One of the lesser known developer tools is based on a resurrected project that many of you are probably familiar with, but have never used. The tool is in fact Microsoft Bob 7 Series (MB7S). MB7S is an agent-based approach for mobile phone app development. The UI incorporates both natural language interfaces and motion gesture behaviors, similar to the Windows Phone 7 Series “Metro” interface. If it works, it will help to expand the breadth of mobile app developers. After the GUID: The ScottGuID It came as no big surprise that eventually the last GUID would be used up. Knowing this, a group of engineers at Microsoft has designed, implemented, and tested a replacement to the GUID: The ScottGuID. There are several core principles of the ScottGuID: 1. The concepts used in ScottGuIDs must be easily understood by a developer who is already familiar with GUIDs 2. There must exist a compatibility layer between ScottGuIDs and GUIDs 3. A ScottGuID must be usable in a practical manner in non-computing environments 4. There must exist ScottGuID APIs for all common platforms: Win32/Win64/WinCE, .NET (incl. Silverlight), Linux, FreeBSD, MacOS (incl. iPhone OS), Symbian, RIM BlackBerry, Google Android, etc. 5. ScottGuIDs must never run out ScottGuID use cases One of the more subtle principles of the ScottGuID is principle #3. While technically a GUID could be used in any environment, it was not practical to do so in terms of data entry and error detection. In order to have the ScottGuID be a true universal ID it must be usable in non-computing environments. Prior to the announcement of the ScottGuID there have been a number of until-now confidential projects. One of the tools that will soon become public is ScottGuIDGen, which is in essence an updated version of GUIDGEN that can create ScottGuIDs. The following screenshot shows a sample ScottGuID. To demonstrate the various applications of the ScottGuID there were test deployments around the globe. The following examples are a small showcase of the applications that have already been prototyped. Log in to Hotmail: Pay for gas: Sign in to Twitter: Dispense cat food: Conclusion I hope that this brief introduction to the ScottGuID shows how technology can continue to move forward, even when it appears there is a point that cannot be passed. 
With a small number of principles, a team of smart engineers, and a passion for "getting it right" the ScottGuID should last well past our lifetimes. In the coming months expect further announcements regarding additional developer tools, samples, whitepapers, podcasts, and videos. Please leave a comment on this post if you have any questions about the ScottGuID or what you would like to see us do with it. With ScottGuID, the possibilities are nearly endless and we want to stretch their reach as far as possible.
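    The parody aside, real GUIDs are just 128-bit values, and the standard .NET way to mint one is Guid.NewGuid(). A minimal plain-C# sketch (nothing here is part of the fictional ScottGuID API) that generates a GUID and prints it in the braced style GUIDGEN shows:

    using System;

    class GuidDemo
    {
        static void Main()
        {
            // Guid.NewGuid() returns a freshly generated GUID; the random variant carries
            // 122 random bits, so exhausting the space is (outside of parody) not a concern.
            Guid id = Guid.NewGuid();

            // The "B" format wraps the value in braces, matching GUIDGEN's display,
            // e.g. {3F2504E0-4F89-41D3-9A0C-0305E82C3301}.
            Console.WriteLine(id.ToString("B").ToUpperInvariant());
        }
    }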

    Read the article

  • Professional Windows Phone 7 Game Development: Creating Games using XNA Game Studio 4

    - by Chris Williams
    In 24 short days*, my first book (co-written with the awesome George W. Clingerman) will be released: Professional Windows Phone 7 Game Development: Creating Games using XNA Game Studio 4 (or as we like to call it, that damned 550 page monstrosity that nearly killed us). Weighing in at 552 pages and featuring a foreword by the legendary James Silva (Ska Studios, creator of The Dishwasher: Dead Samurai, The Dishwasher: Vampire Smile, I MAED A GAME W1TH Z0MB1ES 1NIT!!!1, and more...), this book gives thorough coverage of XNA 4.0 as it relates to Windows Phone 7. The book is written in a light, conversational tone, which means (unlike some books) you won't be compelled to gouge your eyes out with a rusty spork after reading the first few pages. At least, that’s the intent. If you do feel compelled to engage in some feats of eye-gouging sporkage, we (the authors of this book) would like to point out that we are not responsible and that seeking the help of a mental health professional might be advised. (We’re not qualified to dispense medical advice either.) The book is structured to introduce relevant material first, with code snippets and samples of how to use various phone features and XNA concepts, with helpful side notes along the way. After you've been exposed to a few chapters' worth of concepts, you get the chance to bring them together by building a game that leverages those features. This book contains THREE (3!) complete games, including: Drive & Dodge (a racing game), Poker Dice (roll dice to make poker hand combinations), and Picture Puzzle (take a photo and turn it into a jigsaw puzzle). Writing this book has been an incredible experience, and we hope reading it will be equally informative for all of you. We’re also happy to announce there will be a Kindle edition available, along with various other electronic media. Get your copy from Wiley.com, Amazon.com, Barnes & Noble, and anywhere else awesome books are sold. *more or less… some sites list the publication date as early March, but the official street date is 2/21/2011

    Read the article

  • 2.5D game development

    - by ne5tebiu
    2.5D ("two-and-a-half-dimensional"), 3/4 perspective, and pseudo-3D are terms used to describe either: graphical projections and techniques which cause a series of images or scenes to fake or appear to be three-dimensional (3D) when in fact they are not, or gameplay in an otherwise three-dimensional video game that is restricted to a two-dimensional plane. (Information taken from Wikipedia.org) I have a question about 2.5D game development. As stated before, 2.5D uses graphical projections and techniques to fake 3D, or gameplay restricted to a two-dimensional plane. A good example is Zero Online, a game made by TQ Digital (screenshot): the whole map is made of 2D images and only NPCs and players are 3D. The maps were drawn by hand, without any 3D software rendering. As I'm playing the game I feel like I'm going from a lower part of the map (the ground) to a higher one (some metal platform), and it feels like I'm moving in three dimensions. But when I look closely, I see that the player's size didn't change and neither did the shadow, yet I still feel like I'm somehow higher than before. (I rendered a simple map myself in 3ds Max, but it didn't quite give the result I wanted.) How can I accomplish such an effect?
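    One common way to get the effect described above - climbing without the sprite scaling - is to separate logical elevation from screen position: the hand-drawn map implies the height, and the engine only shifts the character's screen Y by that elevation while leaving its size and shadow untouched. A minimal plain-C# sketch of the idea (the Map25D type, tile size, and elevation values are illustrative assumptions, not code from Zero Online or any particular engine):

    using System;

    // Illustrative sketch: logical elevation shifts only the draw position,
    // so the character appears to climb while its on-screen size stays constant.
    struct ScreenPoint
    {
        public float X, Y;
        public ScreenPoint(float x, float y) { X = x; Y = y; }
    }

    class Map25D
    {
        // Hypothetical per-tile elevation in pixels, authored to match the 2D artwork.
        private readonly float[,] elevation;
        private const int TileSize = 32;

        public Map25D(float[,] elevation) { this.elevation = elevation; }

        public float ElevationAt(float worldX, float worldY)
        {
            return elevation[(int)(worldY / TileSize), (int)(worldX / TileSize)];
        }

        // The only "3D" step: subtract the elevation from the screen Y.
        public ScreenPoint WorldToScreen(float worldX, float worldY)
        {
            return new ScreenPoint(worldX, worldY - ElevationAt(worldX, worldY));
        }
    }

    class Demo
    {
        static void Main()
        {
            // Two-by-two map with one raised "platform" tile (32 px high).
            var map = new Map25D(new float[,] { { 0f, 0f }, { 0f, 32f } });

            var onGround = map.WorldToScreen(16f, 48f);   // flat tile, same world row
            var onPlatform = map.WorldToScreen(48f, 48f); // raised tile, same world row

            // Same sprite size in both cases; the platform character just draws 32 px higher.
            Console.WriteLine("Ground Y: " + onGround.Y + ", Platform Y: " + onPlatform.Y);
        }
    }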

    Read the article

  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well.
    The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no Ux Architect to create wireframes. During development, things change. Specifically, actions get renamed, moved from controller x to y, etc. Well, as the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to its name and provides little help. How do I keep this growing list of pesky little forgotten actions reined in? The general plan is:
    1. Decorate every action you want as a menu item with a custom attribute
    2. Reflect out all menu items into a structure at load time
    3. Render the menu using CSS-friendly <ul><li> HTML
    The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item:

    [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
    public class MvcMenuItemAttribute : Attribute
    {
        public string MenuText { get; set; }
        public int Order { get; set; }
        public string ParentLink { get; set; }
        internal string Controller { get; set; }
        internal string Action { get; set; }

        #region ctor
        public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { }
        public MvcMenuItemAttribute(string menuText, int order) { MenuText = menuText; Order = order; }

        internal string Link { get { return string.Format("/{0}/{1}", Controller, this.Action); } }

        internal MvcMenuItemAttribute ParentItem { get; set; }
        #endregion
    }

    The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]. All pretty straightforward, methinks.
    The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person clicked. Using the referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also adds a chance of collision with other query parameters. Besides, that smells. The stock ASP.NET sitemap provider simply disallows duplicate URLs. I decided not to, and simply pick the first one encountered as the "current" item. Although it doesn't solve the issue completely – one might say they wanted the second of the 2 links to be "current" – it allows one to include a link twice (home->deals and products->deals etc.), and the logic of deciding "current" is easy enough to explain to the customer.
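    To make the attribute usage concrete, here is a hypothetical controller (the SessionController and its extra actions are illustrative; only the "Tracks" decoration above comes from the post itself) showing how ParentLink ties a child item to its parent's /Controller/Action link:

    using System.Web.Mvc;

    public class SessionController : Controller
    {
        // Top-level menu item; Controller and Action are inferred at load time
        // from the type and method names, so its link becomes /Session/Index.
        [MvcMenuItem("Sessions", Order = 10)]
        public ActionResult Index() { return View(); }

        // Child of the "Sessions" item, matched up via ParentLink = "/Session/Index".
        [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]
        public ActionResult Tracks() { return View(); }

        [MvcMenuItem("Speakers", Order = 30, ParentLink = "/Session/Index")]
        public ActionResult Speakers() { return View(); }
    }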
    Now that we got that out of the way, let's build the menu data structure:

    public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly)
    {
        var result = new List<MvcMenuItemAttribute>();
        foreach (var type in assembly.GetTypes())
        {
            if (!type.IsSubclassOf(typeof(Controller)))
            {
                continue;
            }
            foreach (var method in type.GetMethods())
            {
                var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[];
                if (items == null)
                {
                    continue;
                }
                foreach (var item in items)
                {
                    if (String.IsNullOrEmpty(item.Controller))
                    {
                        item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length);
                    }
                    if (String.IsNullOrEmpty(item.Action))
                    {
                        item.Action = method.Name;
                    }
                    result.Add(item);
                }
            }
        }
        return result.OrderBy(i => i.Order).ToList();
    }

    Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent:

    public static void RegisterMenuItems(List<MvcMenuItemAttribute> items)
    {
        _MenuItems = items;
        _MenuItems.ForEach(i => i.ParentItem = items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase)));
    }

    The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption:

    public static void RegisterMenuItems(Type mvcApplicationType)
    {
        RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType)));
    }

    To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication, and that is why the Register call passes in that type.

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterRoutes(RouteTable.Routes);

        MvcMenu.RegisterMenuItems(typeof(MvcApplication));
    }

    What else is left to do? Oh, right, render!
    public static void ShowMenu(this TextWriter output)
    {
        var writer = new HtmlTextWriter(output);
        renderHierarchy(writer, _MenuItems, null);
    }

    public static void ShowBreadCrumb(this TextWriter output, Uri currentUri)
    {
        var writer = new HtmlTextWriter(output);
        string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);

        var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase));
        if (menuItem != null)
        {
            renderBreadCrumb(writer, _MenuItems, menuItem);
        }
    }

    private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current)
    {
        if (current == null)
        {
            return;
        }
        var parent = current.ParentItem;
        renderBreadCrumb(writer, menuItems, parent);
        writer.Write(current.MenuText);
        writer.Write(" / ");
    }

    static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root)
    {
        if (!hierarchy.Any(i => i.ParentItem == root))
            return;

        writer.RenderBeginTag(HtmlTextWriterTag.Ul);
        foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order))
        {
            if (ItemFilter == null || ItemFilter(current))
            {
                writer.RenderBeginTag(HtmlTextWriterTag.Li);
                writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link);
                writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText);
                writer.RenderBeginTag(HtmlTextWriterTag.A);
                writer.WriteEncodedText(current.MenuText);
                writer.RenderEndTag(); // link
                renderHierarchy(writer, hierarchy, current);
                writer.RenderEndTag(); // li
            }
        }
        writer.RenderEndTag(); // ul
    }

    The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angled brackets by hand. In addition, writing out using the actual writer on the actual stream rather than generating string and byte intermediaries (yes, StringBuilder being no exception) disturbs me. To carry out the rendering of a hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something. That delegate is the hook for such a future feature. To carry out rendering of a breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, then rendering on the return from the recursion rather than as we go along deeper. I guess I was stuck in LISP that day.. recursion is fun though.
    Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls:
    <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %> to show a breadcrumb trail (notice the lack of "=" after <% and the semicolon).
    <% MvcMenu.ShowMenu(Writer); %> to show the menu.
    As mentioned before, the HTML output is nested <UL> <LI> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical menus to dynamic drop-downs.
    This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder, because attributes have restricted parameter types. Once I settled on that implementation, the rest falls into place quite easily.
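    One loose end: the ItemFilter delegate the renderer checks is not declared anywhere in this excerpt. Assuming it is exposed as something like a public static Func<MvcMenuItemAttribute, bool> on MvcMenu (purely an assumption on my part), a role-based filter could be wired up in Application_Start roughly like this:

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterRoutes(RouteTable.Routes);

        MvcMenu.RegisterMenuItems(typeof(MvcApplication));

        // Hypothetical: hide a "Site Admin" entry from anyone outside the Admin role.
        // Assumes MvcMenu.ItemFilter is a public static Func<MvcMenuItemAttribute, bool>,
        // evaluated per render so HttpContext.Current reflects the current request.
        MvcMenu.ItemFilter = item =>
            item.MenuText != "Site Admin"
            || (HttpContext.Current != null && HttpContext.Current.User.IsInRole("Admin"));
    }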

    Read the article

  • Retro Video Game Collection

    - by Matt Christian
    Recently I've decided, in true nerd fashion, to collect either comic books or video games.  Considering I'm much more versed in the technological arts and not in ACTUAL art, I thought collecting old video games would be an interesting venture.  After all, I am a self-described compulsive shopper (my bank statement at the end of the month has a purchase every few days).  (Don't worry, I'm not in debt and still pay my bills on time!)
    I went to a local video game store in Stevens Point called Gaming Generations which is a neat little shop with loads of old games for great prices.  For example, any NES cartridge on the shelf (not behind glass) is, at most, $4.99 with the cheaper ones around $1.99.  During my first round at GG, I picked up the following:
    NES:
    - Fester's Quest
    - Adventures of Link (Zelda 2, grey cart)
    - Little Nemo
    - Total Recall
    - The Goonies 2
    PSX:
    - Galerians
    N64:
    - Mission: Impossible
    - Hybrid Heaven
    I was a little cautious, would I even like collecting old games?  As soon as I popped a few of those games in I knew right away the answer was an astounding YES!  Not only is it fun to bring back memories of all these old games, but searching for them in stores is also a blast and saying 'I have that one, I need the second one.'
    After finding such joy in buying these games, I decided to go search through 4-5 stores in Wausau for old games as well.  While the prices were a bit higher and selection smaller, the search was still fun.  I found the following:
    NES:
    - Maniac Mansion
    - T&C Surf
    - Chip N Dale: Rescue Rangers
    - TMNT (the first one)
    - Mission: Impossible
    N64:
    - Turok
    - Turok 2
    Genesis:
    - Sonic the Hedgehog
    Dreamcast:
    - Shenmue
    And I found a Gamegear for $5!  Now I just need to find games for it...
    Tonight I will go on one more small expedition into the used, once again stopping at GG and another second hand store to see if I can find any items for my collection.

    Read the article

  • Add Social Elements to Your Gmail Contacts with Rapportive

    - by Matthew Guay
    Would you like to discover more about your contacts?  Xobni is a great tool for this in Outlook, and thanks to a small plugin for Gmail, you can get similar functionality right from your favorite webmail app. Setup Rapportive on Your Gmail Browse to the Rapportive site (link below), and click install to add it to your browser.  Rapportive currently only supports Firefox and Google Chrome.  In this test, we installed it on Google Chrome.  Notice that Chrome warns Rapportive may access your private data from Gmail, though Rapportive says that they only use this data securely on your computer or their servers. Next time you log into Gmail, open a message to see the new Rapportive sidebar.  Click Log in to get started. Choose if you want to let Rapportive access your data. Finally, choose whether to stay logged into Rapportive or to log out when you log out of Gmail.   Using Rapportive Now, when you open an email, you should see more information about your contact on the right side of the message where you usually see Google AdSense ads. You may see an avatar, short bio, and links to their social networks.  You can add notes about a contact also, which lets you use Rapportive as a CRM. You may see more information on some contacts.  Here we see a contact that shows recent Tweets and links to several social networks. Take Rapportive Further You can add more features to Rapportive with Raplets, which are small extensions that add more information or CRM functionality.  To add these, click the Rapportive button on the top of Gmail, and select Add Raplets to Rapportive. Find a Raplet you want, and click Add This. A popup will open to give you more information about the Raplet; click the Add button at the bottom if you still want it. And, if you wish to close Rapportive without logging out of Gmail, click the Rapportive link in Gmail and select Log out. Conclusion Whether you want to find out more about your contacts or keep track of notes about them, Rapportive is a great way to do this from Gmail.  With tools like this, Gmail gets a bit more powerful and feels more like a desktop application. If you would like this type of functionality in Outlook, check out our article on how to power up Outlook’s search and contacts with Xobni. Add Rapportive to Gmail

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >