Search Results

Search found 17924 results on 717 pages for 'z order'.


  • How to repair an external hard drive?

    - by dodohjk
    I would like to reformat my hard disk and, if possible, recover its (somewhat unimportant) contents. I have a Western Digital 1TB hard drive which had an NTFS partition. I unplugged the drive without safely removing it first. At first a pop-up asked me to use a Windows OS to run the chkdsk /f command; however, in an effort to keep using a Linux OS, I ran the ntfsfix command in the Ubuntu terminal. Now, when I try to access the hard drive, it doesn't show up in Nautilus anymore. I tried reformatting it using Disk Utility, but it gives me an error message, and GParted hangs indefinitely on the "Scanning devices" step. Please comment with any output that you would like to see and I will add it to my question.

    EDIT: Disk Utility tells me the drive is on /dev/sdb. The command sudo fdisk -l gives:

        dodohjk@DodosPC:~$ sudo fdisk -l
        [sudo] password for dodohjk:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0006fa8c

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        4094   482344959   241170433    5  Extended
        /dev/sda2       482344960   488396799     3025920   82  Linux swap / Solaris
        /dev/sda5            4096    31461127    15728516   83  Linux
        /dev/sda6        31463424    52434943    10485760   83  Linux
        /dev/sda7        52436992    62923320     5243164+  83  Linux
        /dev/sda8        62924800   482344959   209710080   83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x6e697373

        This doesn't look like a partition table
        Probably you selected the wrong device.

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   ?    1936269394  3772285809   918008208  4f  QNX4.x 3rd part
        /dev/sdb2   ?    1917848077  2462285169  272218546+  73  Unknown
        /dev/sdb3   ?    1818575915  2362751050   272087568  2b  Unknown
        /dev/sdb4   ?    2844524554  2844579527       27487  61  SpeedStor

        Partition table entries are not in disk order

    I wrote something wrong here; anyway, here is the output of sudo fsck /dev/sdb:

        dodohjk@DodosPC:~$ sudo fsck /dev/sdb
        fsck from util-linux 2.20.1
        e2fsck 1.42.5 (29-Jul-2012)
        ext2fs_open2: Bad magic number in super-block
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sdb

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>
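    A reasonable first move in this situation, sketched below on the assumption that the drive is /dev/sdb (as in the fdisk output) and that another disk has about 1 TB free, is to image the failing drive with GNU ddrescue and then run recovery tools against the image instead of the disk:

        # Install GNU ddrescue and TestDisk (package names as on Ubuntu)
        sudo apt-get install gddrescue testdisk

        # First pass: grab the easy blocks, keeping a log so runs can resume
        sudo ddrescue -n /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.log

        # Second pass: retry the bad areas a few times
        sudo ddrescue -r3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.log

        # Search the image (not the disk) for the lost NTFS partition
        sudo testdisk /mnt/backup/sdb.img

    The /mnt/backup paths are placeholders; any filesystem with enough free space will do.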

    Read the article

  • SQL SERVER – CSVExpress and Quick Data Load

    - by pinaldave
    One of the newest ETL tools is CSVexpress.com. This is a program that can quickly load any CSV file into an ODBC-compliant database using data integration. For those of you familiar with databases and how they operate, the question that comes to mind might be what use this program will have in your life. I have written an earlier article on this subject: SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress.

    You might know that RDBMSs have built-in support for loading CSV files into tables, but it is not quite as easy as one click of a button. First of all, most databases have a command line interface, and you need the file and a configuration script in order to load up. You also need to know enough to write the script, which for novices can be extremely daunting. On top of all this, if you work with more than one type of RDBMS, you need to know the ins and outs of uploading and writing scripts for more than one program.

    So you might begin to see how useful CSVexpress.com might be! There are many other tools that enable uploading files to a database. They can be very fancy: some can generate configuration files automatically, others load the data directly. Again, novices will be able to tell you why these aren't the most useful programs in the world. You see, these programs were created with SQL in mind, not for uploading data. If you don't have large amounts of data to upload, getting the configurations right can be a long process, and you will have to check the generated code yourself. Not exactly "easy to use" for novices.

    That makes CSVexpress.com one of the best new tools available for everyone, but especially for people who don't want to learn a lot of new material all at once. CSVexpress has an easy-to-navigate graphical user interface, and no scripting or coding is required. There are built-in constraints and data validations, and you can configure transforms and reject records right there on the screen. But the best thing of all: it's free! That's right, you can download CSVexpress for free from www.csvexpress.com and start easily uploading and configuring files almost immediately. If you're currently happy with your method of data configuration, keep up the good work. For the rest of us, there's CSVexpress.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
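    To make the "command line and configuration script" point concrete: on SQL Server alone, the manual route looks roughly like this. A hedged sketch; the table and file names are hypothetical:

        -- Hypothetical example of the script-based approach CSVexpress avoids.
        -- Assumes a dbo.Customers table already exists and the file is
        -- reachable from the server.
        BULK INSERT dbo.Customers
        FROM 'C:\data\customers.csv'
        WITH (
            FIELDTERMINATOR = ',',   -- column delimiter
            ROWTERMINATOR   = '\n',  -- row delimiter
            FIRSTROW        = 2      -- skip the header row
        );

    Every RDBMS has its own variant of this, which is exactly the learning burden the article describes.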

    Read the article

  • Create a Self-Signed Certificate on WLS 10.3.5 Supporting the SHA-256 Algorithm

    - by adejuanc
    1) Set the domain environment so you can call keytool:

        $ . setDomainEnv.sh

    2) Generate the key:

        $ keytool -genkey -alias selfsignedcert -keyalg RSA -sigalg SHA256withRSA -keypass privatepassword -keystore identity.jks -storepass password -validity 365
        What is your first and last name?
          [Unknown]:  adejuan-desktop.cl.oracle.com
        What is the name of your organizational unit?
          [Unknown]:  a
        What is the name of your organization?
          [Unknown]:  e
        What is the name of your City or Locality?
          [Unknown]:  i
        What is the name of your State or Province?
          [Unknown]:  o
        What is the two-letter country code for this unit?
          [Unknown]:  U
        Is CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U correct?
          [no]:  yes

    3) Export the root certificate:

        $ keytool -export -alias selfsignedcert -sigalg SHA256withRSA -file root.cer -keystore identity.jks
        Enter keystore password:
        Certificate stored in file <root.cer>

    4) Import the root certificate to the trust store:

        $ keytool -import -alias selfsignedcert -sigalg SHA256withRSA -trustcacerts -file root.cer -keystore trust.jks
        Enter keystore password:
        Re-enter new password:
        Owner: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
        Issuer: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
        Serial number: 4f17459a
        Valid from: Wed Jan 16 15:33:22 CLST 2012 until: Thu Jan 15 15:33:22 CLST 2013
        Certificate fingerprints:
          MD5:  7F:08:FA:DE:CD:D5:C3:D3:83:ED:B8:4F:F2:DA:4E:A1
          SHA1: 87:E4:7C:B8:D7:1A:90:53:FE:1B:70:B6:32:22:5B:83:29:81:53:4B
          Signature algorithm name: SHA256withRSA
          Version: 3
        Trust this certificate? [no]:  yes
        Certificate was added to keystore

    5) To check the contents of the keystore:

        $ keytool -v -list -keystore identity.jks
        Enter keystore password:

        ***************** WARNING WARNING WARNING *****************
        * The integrity of the information stored in your keystore *
        * has NOT been verified! In order to verify its integrity, *
        * you must provide your keystore password.                 *
        ***************** WARNING WARNING WARNING *****************

        Keystore type: JKS
        Keystore provider: SUN

        Your keystore contains 1 entry

        Alias name: selfsignedcert
        Creation date: Jan 18, 2012
        Entry type: PrivateKeyEntry
        Certificate chain length: 1
        Certificate[1]:
        Owner: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
        Issuer: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
        Serial number: 4f17459a
        Valid from: Wed Jan 16 15:42:16 CLST 2012 until: Thu Jan 15 15:42:16 CLST 2013
        Certificate fingerprints:
          MD5:  7F:08:FA:DE:CD:D5:C3:D3:83:ED:B8:4F:F2:DA:4E:A1
          SHA1: 87:E4:7C:B8:D7:1A:90:53:FE:1B:70:B6:32:22:5B:83:29:81:53:4B
          Signature algorithm name: SHA256withRSA
          Version: 3
        *******************************************
        *******************************************

    6) In some cases, this parameter is needed in the server start-up parameters:

        -Dweblogic.ssl.JSSEEnabled=true

    Otherwise, enable it from the server configuration: SSL -> Use JSSE checkbox.

    Read the article

  • Frustrated where I am, but not sure where to go with my career [closed]

    - by Tom Pickles
    I work (3 years now) as a lead developer for a team developing internal tools and websites for a customer account within a large outsourcing company. I'm a self-taught programmer, and my previous incarnation was a 3rd-line support guy, so I have solid infrastructure knowledge. We use VB.Net/MSSQL/SSIS/SSRS ASP.NET (n-tier) in house, and I have about 8 years of coding experience.

    Without going into too much detail, my boss is very ambitious and uses our team as his footing to get up the ladder. I've been on the team from the start, and the only new devs we have brought in have been people with a bit of VBA/VBScript experience, much to my chagrin, to bolster his empire. It's been a lot of hard work to bring them up to standard, but there's still a lot for them to learn. This makes my life stressful, as I always get the high-profile/complex project work, since others simply cannot do it, or it'd take them two or three times longer.

    My boss is always seeking stuff for us to build for people who haven't asked for it, which usually gets thrown to me as I have the most experience and can pick up new APIs (etc.) quicker. He doesn't give us proper requirements, we don't get time to design properly before we code, and he wants us to throw something ("quick and dirty", as he calls it) together so we can get it out ASAP. I take pride in my work, so I like to do it properly, keeping my code clean and maintainable, and I train the other guys in the team to do the same. But we always fall on our faces. The customers we drop the apps on say they don't do what they need (due to the few requirements), or my boss doesn't like it or changes the spec, so we have to rework it; it gets drawn out, and it makes us, and me, look and feel like fools. We then get accused by the boss of not being reactive enough to change.

    I've had enough. In order to fill my skills and knowledge gaps, I've been reading Code Complete 2nd Ed (McConnell) and the Head First Design Patterns books. I'm forcing myself to move from VB to C# at home to broaden my horizons. I'm not sure where to go from here. I don't want to code all my life, as I'd like to move into a higher-level design/architect role at some point (I'm 35). Where do I/can I go from here?

    Read the article

  • Iterative and Incremental Principle Series 3: The Implementation Plan (a.k.a The Fitness Plan)

    - by llowitz
    Welcome back to the Iterative and Incremental blog series. Yesterday, I demonstrated how shorter interval sets allowed me to focus on my fitness goals and achieve success. Likewise, in a project setting, shorter milestones allow the project team to maintain focus and experience a sense of accomplishment throughout the project lifecycle. Today, I will discuss project planning and how to effectively plan your iterations.

    Admittedly, there is more to applying the iterative and incremental principle than breaking long durations into multiple, shorter ones. In order to effectively apply the iterative and incremental approach, one should start by creating an implementation plan. In a project setting, the Implementation Plan is a high-level plan that focuses on milestones, objectives, and the number of iterations. It is the plan that is typically developed at the start of an engagement, identifying the project phases and milestones. When the iterative and incremental principle is applied, the Implementation Plan also identifies the number of iterations planned for each phase. The Implementation Plan does not include the detailed plan for the iterations, as this detail is determined prior to each iteration's start, during Iteration Planning. An individual iteration plan is created for each project iteration.

    For my fitness regime, I also created an "Implementation Plan" for my weekly exercise. My high-level plan included exercising 6 days a week and, since I cross-train, trying not to repeat the same exercise two days in a row. Because running on the hills outside is the most difficult and, consequently, the most effective exercise, my Implementation Plan includes running outside at least 2 times a week. Regardless of the exercise selected, I always apply a series of 6-minute interval sets. I never plan what I will do each day in advance, because there are too many changing factors that need to be considered before that level of detail is determined. If my Implementation Plan included details on the exercise I was to perform each day of the week, it is quite certain that I would be unable to follow my plan to that level. It is unrealistic to plan each day of the week without considering the unique circumstances at that time. For example, what is the weather? Are there conflicting schedule commitments? Are there injuries that need to be considered? Likewise, in a project setting, it is best to plan the iteration details prior to its start.

    Join me for tomorrow's blog, where I will discuss when and how to plan the details of your iterations.

    Read the article

  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse itself will get another blog post later, due to its complexity. The server has 60GB of memory (of which 48 is dedicated to the SQL Server service), so not all data fit in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway.

    This is the code I use to compress all tables with PAGE compression:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curTables CURSOR FOR
            SELECT  'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                    + '.' + QUOTENAME(OBJECT_NAME(object_id))
                    + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
            FROM    sys.tables

        OPEN    curTables

        FETCH   NEXT
        FROM    curTables
        INTO    @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH   NEXT
                FROM    curTables
                INTO    @SQL
            END

        CLOSE       curTables
        DEALLOCATE  curTables

    Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same size as the table being rebuilt. If the database cannot grow by this size, the operation fails. For me, I first ended up with an orphaned connection. Not good.

    And this is the code I use to create the index compression statements:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curIndexes CURSOR FOR
            SELECT      'ALTER INDEX ' + QUOTENAME(name)
                        + ' ON '
                        + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                        + '.'
                        + QUOTENAME(OBJECT_NAME(object_id))
                        + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
            FROM        sys.indexes
            WHERE       OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                        AND OBJECTPROPERTY(object_id, 'IsTable') = 1
            ORDER BY    CASE type_desc
                            WHEN 'CLUSTERED' THEN 1
                            ELSE 2
                        END

        OPEN    curIndexes

        FETCH   NEXT
        FROM    curIndexes
        INTO    @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH   NEXT
                FROM    curIndexes
                INTO    @SQL
            END

        CLOSE       curIndexes
        DEALLOCATE  curIndexes

    When this was done, I noticed that the 90GB database was now only 17GB. And most important, the complete database could now reside in memory!

    After this I took care of the administrative tasks: backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the compression option):

        BACKUP DATABASE [Yoda]
        TO      DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH    NOFORMAT,
                INIT,
                NAME = N'Yoda - Full Database Backup',
                SKIP,
                NOREWIND,
                NOUNLOAD,
                COMPRESSION,
                STATS = 10,
                CHECKSUM
        GO

        DECLARE @BackupSetID INT

        SELECT  @BackupSetID = Position
        FROM    msdb..backupset
        WHERE   database_name = N'Yoda'
                AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset WHERE database_name = N'Yoda')

        IF @BackupSetID IS NULL
            RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

        RESTORE VERIFYONLY
        FROM    DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH    FILE = @BackupSetID,
                NOUNLOAD,
                NOREWIND
        GO

    After running the backup, the file size was reduced even further, due to the zip-like compression algorithm used in SQL Server 2008. The file size? Only 9 GB. //Peso
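    Before rebuilding everything, it can be worth estimating the gain table by table first. SQL Server 2008 ships a stored procedure for exactly this; a minimal sketch, using a hypothetical dbo.FactSales table:

        -- Estimate PAGE-compression savings for one table before committing
        -- to a full rebuild. 'dbo.FactSales' is an invented example name.
        EXEC sp_estimate_data_compression_savings
            @schema_name      = 'dbo',
            @object_name      = 'FactSales',
            @index_id         = NULL,   -- all indexes
            @partition_number = NULL,   -- all partitions
            @data_compression = 'PAGE';

    The result set compares current size with the estimated compressed size, which helps decide whether a table is worth the rebuild (and the temporary growth it causes).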

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer together than it appears. Both concepts intersect at various points:

    BigData and Return on Investment of Marketing Campaigns

    In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how "big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand". Regression analysis applied to big data volumes coming from social media will replace the failed attempts to justify marketing investments in social media in terms of followers and likes. He continues: "the measurement models applied by marketers on TV campaigns don't work on social". We need to study the data with fresh eyes, and maybe then we will start understanding and measuring brand engagement.

    Social CRM and BigData

    The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that allow us to classify new customer niches and communities and to define appropriate strategies for contacting potential customers. Gartner says that the market for Social CRM is on pace to surpass $1 billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, analyst at Forrester Research, "social customer relationship management is hard" (The Social CRM Arms Race Heats). To succeed, brands need three things: investing in new social tools, investing in consultancy, and investing in infrastructure for massive data storage and analysis. Neither CeX nor BigData is an easy or cheap win. But what are the customer benefits of such investments?

    Big Data and Brand Engagement

    Time is the most valuable asset of today's consumers: tired of information overload, exhausted by the terabytes of offerings, anxious because they do not get the same fast multichannel experience from their services' marketers or preferred goods providers that they find on their social media. Yes, I know you have read this before; me too. But it is real. The motto of the Customer Experience philosophy, providing a consistent experience through multiple touchpoints that makes the customer/brand relationship easier and more valuable, finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative.

    In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving efficiency and marketing cost savings, growing the customer base, and increasing customer conversion and retention. In a word: the direction of future marketing.

    Read the article

  • CodePlex Daily Summary for Tuesday, November 22, 2011

    Popular Releases

    - Developer Team Article System Management: DTASM v1.3: (Persian release notes garbled in source)
    - VideoLan DotNet for WinForm, WPF & Silverlight 5: VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.22: The new version contains the Silverlight 5 library Vlc.DotNet.Silverlight; a sample can be tested here. The new version adds and corrects many features: reinitializes some variables; deprecates the Logging API, since VLC 1.2 (08/20/2011); adds subitem in LocationMedia (for Youtube videos, ...); updates the Wpf sample to use Youtube videos; many other corrections.
    - SharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323). FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue #447). The FBA Pack will now work in a zone that does not have FBA enabled (another zone must have FBA enabled, and the zone must contain the me...
    - SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1.
    - SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 6: 1. Improved support for schema; 2. Added find-reference on right-click in the object list; 3. Added object rename support.
    - BugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework, which is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...
    - Anno 2070 Assistant: v0.1.0 (STABLE): Version 0.1.0 features: Production Chains - Eco (complete), Tycoon (disabled, incomplete), Tech (disabled, incomplete); Supply (disabled, incomplete); Calculator (disabled, incomplete); Building Layouts - Eco (complete), Tycoon (disabled, incomplete), Tech (disabled, incomplete); Credits (complete).
    - Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: Here is the list of site templates to be downloaded.
    - VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars; these get removed every time you shut down Visual Studio (#7940). Build 30 (beta). New: Support for TortoiseSVN 1.7 added (the download contains both setups, for TortoiseSVN 1.6 and 1.7). New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows grouping items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: performance optimization; back-in-stock notifications; product special price support; catalog mode (based on customer role). To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    - WPF Converters: WPF Converters V1.2.0.0: Support for enumerations, value types, and reference types in the expression converter's equality operators. The expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062). StyleCop conformance (more or less).
    - Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default. Change - KeyValuePairConverter no longer cares about the order of the key and value properties. Change - Time zone conversions now use the new TimeZoneInfo instead of TimeZone. Fix - Fixed boolean values sometimes being capitalized when converting to XML. Fix - Fixed error when deserializing ConcurrentDictionary. Fix - Fixed serializing some Uris returning the incorrect value. Fix - Fixed occasional error when...
    - Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). Replaced 'Rebuild' with 'Refresh' throughout entire code; Rebuild will now be known as Refresh. mc_com.exe has been fully updated. TV Show Resolutions: resolved issue #206 (having to hit save twice when updating runtime manually); shrunk cache size and lowered loading times f...
    - Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.
    - SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17. For most applications the AnyCPU release is recommended, but an x86 build is included in case you need it. For some data providers (GDAL/OGR, SqLite, PostGis) you also need to reference the SharpMap.Extensions assembly. For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assembly.
    - AJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release, Version 51116. November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary: AJAX Control Toolkit for .NET 4 and sample site (recommended). AJAX Control Toolkit .NET 3.5 - Binary: AJAX Control Toolkit for .NET 3.5 and sample site (recommended). Notes: the current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...
    - MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: the DateRangeAttribute now accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu and MenuFor helpers capable of handling a "currently selected element"; the developer can choose between a standard nested menu based on the standard SimpleMenuItem class or an item template based on a custom class. Also added helpers to build the tree structure containing all the data items the menu takes its infos from. Improved the pager. Now the developer ...
    - SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked API to be more consistent (see Supported formats table). Added some more helper methods, e.g. OpenEntryStream (RarArchive/RarReader does not support this). Fixed up tests.
    - Silverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new: ListPicker once again works in a ScrollViewer. LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored into the public type LongListSelectorItem to give users better access to the values in selection-changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others). There is no longer a GestureListener dependency with the C...

    New Projects

    - Andrecorder: (Chinese description garbled in source; an Android recorder)
    - Android Tree Bulletin: Android bulletin reader in tree format.
    - Bài tập lớp môn HCI (HCI class assignment): tuition-fee management software for Hanoi University of Industry.
    - Basic Grid Collision sample in XNA: This project shows how to implement basic grid collision in XNA. It uses the XNA 4.0 framework and C#.
    - Club Manager: Club Manager is a web site for managing sport clubs / teams.
    - Create email with encrypt text implement TEA encryption and Web Service: RahaTEA Mail is an application to send messages in secret. It implements TEA encryption and a web service.
    - CRM 2011 Layers: Several .NET layers to customize CRM 2011.
    - CTEF: China Tomorrow Education Foundation website.
    - dns?????: (Chinese description garbled in source; appears to be a C# DNS tool)
    - EAF: Extensibility Application Framework.
    - Energy SBA: In order to compete with large companies for Federal contracts, small businesses need information. This application seeks to show standard methods of using remote APIs to integrate information into a Metro interface using services provided by the Small Business Administration (SBA).
    - EPiOptimiser - Scan your EPiServer configuration to optimise start up times: EPiScanner scans your EPiServer configuration to optimise start-ups by generating a recommended exclude list of assemblies to include in EPiServer framework config. It can be used on the command line, as a custom build task, or integrated into Visual Studio as an external tool.
    - FreeIDS - Free Intrusion Detection System: Don't want someone to use your computer? Don't want to use a system password? Want to see when someone accessed your computer? Time/Date? FreeIDS is it!
    - FtpServerAdministrator: FtpServerAdministrator makes it easier to administer some FTP servers through code, although it can only be used with FileZilla Server for now. It's developed in C#.
    - GreenPoint Online: Tools and components that help you customize an Office 365 / SharePoint Online environment.
    - HCC C# Workshop: This project contains the code for the exercises of the HCC C# Workshop.
    - KsigDo - Real time view model syncing across user screens: KsigDo shows real-time view model syncing across user screens using ASP.NET, Knockout and SignalR. Real-time data syncing across user views *was* hard, especially in web applications. Most of the time, the second user needs to refresh the screen to see the changes made by the first user, or we need to implement some long polling that fetches the data and does the update manually. Now, with SignalR and Knockout, ASP.NET developers can take advantage of view model syncing across users, that...
    - lineseven: (Chinese description garbled in source)
    - Mail Size Labeler for GMail: A small utility that labels large e-mails in your Gmail account. This utility scans your Gmail account and adds labels to large e-mails so you can clean your mailbox and free space. The labels this utility adds are: Size 1M-2M, Size 2M-5M, Size 5M-10M, Size 10M-15M, Size 15M plus. Note: a single e-mail thread may get multiple labels if different e-mails of the thread fit different filters.
    - MathService: Complex digits, standard class extensions, etc.
    - MyGameProject: games.
    - MySQL Connect 2 ASP.NET: Example project showing how to connect a MySQL database to an ASP.NET web project. IDE: Visual Studio 2010 Pro. Programming language: C#. Detailed information in the article here: http://epavlov.net/blog/2011/11/13/connect-to-mysql-in-visual-studio/
    - nl: Nutri Leaf Dev
    - omr.event.js: Simple JS event injector.
    - Pastebin4DotNet: This project is an example of how to consume an API; in this case I consumed the Pastebin API.
    - Pomelo: Pomelo is a website example.
    - QuickDevFrameWork: (Chinese description garbled in source; mentions IoC with PostSharp and AOP)
    - Readable Passphrase Generator: Generates passphrases which are (mostly) grammatically correct but nonsensical. These are easy to remember but difficult to guess (for humans or computers). Developed in C# with a KeePass plugin, console app and public API.
    - Rosyama.ru for Windows Phone 7: (Russian description garbled in source; a Windows Phone 7 client for rosyama.ru)
    - SimpleBatch: As the name suggests, this is a simple batch framework allowing you to define batch jobs in XML format. Thus far, it contains a basic selection of processors: File, Email, SQL (SQL Server client), SharePoint Document Library, and Custom Processor.
    - Site de Notícias: A college project consisting of the creation of a news site.
    - SPWikiProvisioning: Create, update and delete SharePoint wiki pages using feature activation and deactivation handlers.
    - SVN Automated Control With C#: I created this library because I needed to control Tortoise SVN automatically, without an interface, for my own build server, and could not find any results on Google for achieving this task, so I went about creating this library, which does most of the tasks that I needed. I found that you could control SVN by command line, so using that as my basic idea I went about coding the most common commands for SVN. Most of the commands are done, but not all. If you like this library then please use it; we...
    - TremplinCMS: TremplinCMS is a CMS framework for ASP.NET 4.
    - vlu0206sms: SMSMaker, being developed by team0206.
    - WCF DataService RequestStream Access on webInvoke HTTP POST: This library provides access to the message body request stream of a WCF Data Service (formerly ADO.NET Data Service), which is not possible with the original WCF Data Service class. You are enabled to pass data (e.g. Json, files) via HTTP POST in the request body. It uses the operation context (DbContext) provided by the DataService<T> class to get access to the request stream.
    - WebOS: Come join us to build our OS project.
    - Wp7StarterDantas: Getting started with WP7.
    - WpfCollaborative3D: WpfCollaborative3D.
    - XNA Content Preprocessor: The XNA Content Preprocessor allows you to compile all of your XNA assets outside of your normal XNA project. This means more time building your game or app instead of your content.

    Read the article

  • Getting Internet Explorer to Open Different Sets of Tabs Based on the Day of the Week

    - by Akemi Iwaya
    If you have to use Internet Explorer for work and need to open a different set of work-specific tabs every day, is there a quick and easy way to do it instead of opening each one individually? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader bobSmith1432 is looking for a quick and easy way to open different daily sets of tabs in Internet Explorer for his work: When I open Internet Explorer on different days of the week, I want different tabs to be opened automatically. I have to run different reports for work each day of the week, and it takes a lot of time to open the 5-10 tabs I use to run the reports. It would be a lot faster if, when I open Internet Explorer, the tabs I needed would automatically load and be ready. Is there a way to open 5-10 different tabs in Internet Explorer depending on the day of the week? Example: Monday – 6 Accounting Pages; Tuesday – 7 Billing Pages; Wednesday – 5 HR Pages; Thursday – 10 Schedule Pages; Friday – 8 Work Summary/Order Pages.

    Is there an easier way for Bob to get all those tabs to load and be ready to go each day instead of opening them individually every time?

    The Answer

    SuperUser contributor Julian Knight has a simple, non-script solution for us: Rather than trying the brute force method, how about a workaround? Open up each set of tabs, either in different windows or one set at a time, and save all tabs to bookmark folders. Put the folders on the bookmark toolbar for ease of access. Each day, right-click on the appropriate folder and click on 'Open in tab group' to open all the tabs. You could put all the day folders into a top-level folder to save space if you want, but at the expense of an extra click to get to them. If you really must go further, you need to write a program or script to drive Internet Explorer. The easiest way is probably writing a PowerShell script.

    Special Note: There are various scripts shared on the discussion page as well, so the solution shown above is just one possibility out of many. If you love the idea of using scripts for a function like this, then make sure to browse on over to the discussion page to see the various ones SuperUser members have shared!

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
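    The "program or script" route Julian mentions could look roughly like the following PowerShell sketch. The URLs are hypothetical placeholders; depending on IE's tab settings, subsequent launches open as tabs or as new windows:

        # Hedged sketch: open a different set of IE pages per weekday.
        # Replace the placeholder URLs with the real report pages.
        $sets = @{
            'Monday'    = @('http://intranet/accounting1', 'http://intranet/accounting2')
            'Tuesday'   = @('http://intranet/billing1',    'http://intranet/billing2')
            'Wednesday' = @('http://intranet/hr1')
        }

        $today = (Get-Date).DayOfWeek.ToString()
        if ($sets.ContainsKey($today)) {
            # First URL opens the window; the rest follow it.
            Start-Process 'iexplore.exe' $sets[$today][0]
            Start-Sleep -Seconds 2
            $sets[$today] | Select-Object -Skip 1 | ForEach-Object {
                Start-Process 'iexplore.exe' $_
            }
        }

    Saved as a .ps1 file and launched from a shortcut, this replaces the manual opening; the bookmark-folder approach above remains the zero-maintenance option.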

    Read the article

  • GParted can't create a partition table

    - by William
    Here's what the problem is. About a day or so ago I used the GParted live CD to create 3 NTFS primary partitions on my external 500 GB GoFlex, and one extended partition with 2 logical partitions. I had planned to install Windows 8 on the first partition, then Ubuntu and Kubuntu on the other 2. After I finished partitioning my drive with GParted, I booted into Windows Vista to make the bootable Windows 8 USB to install it with, and I also decided to check that all my partitions were working properly. I found they were, and they weren't: my 50 GB first partition, where I had planned to install Windows, showed up normally, and so did the 300 GB of space left in the extended partition; the rest showed up as RAW.

    So I figured something went awry while making the partitions, and I booted up GParted once again. To my surprise, GParted showed the entire drive as unallocated, and when I refreshed the list, it showed all the partitions I had made earlier, but with an exclamation mark by each of them. So I figured it might be a problem with the partition table, as I'd seen a similar problem in the past on a drive that was not partitioned at all, and I decided to create a new partition table and take the time out again to sit and wait. Then I got a message saying GParted could not create the partition table, followed by it showing the entire drive as formatted NTFS.

    So I took a break and came back an hour later, booted up Windows, and plugged the drive in to see if by some miracle Windows could access it. In Disk Management, when I plugged the drive in, it would freeze attempting to read the drive, as I'd seen in the past with RAW disks; yet when I unplugged it, I got a glimpse of Disk Management showing a perfectly fine NTFS file system on the drive, followed by a "you must format disk K in order to use it". That assured me the disk was RAW, as that is what had happened in the past, to be fixed by a new partition table through GParted and a 10-hour format in Windows.

    So I once again booted up GParted, only to get the message "Error fsyncing/closing /dev/sdg: input/output error", followed by "Error opening /dev/sdg: No such file or directory" after I refreshed; somehow I saw the disk show up as perfectly fine NTFS once, and then I tried to create a new partition table to wipe out all my problems and start over again. Now GParted only shows the drive in about 1 out of 10 refreshes; the rest of the time I get the "No such file or directory" error. If anybody can assist me in any way, shape or form, I will be thankful.
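    Offered as a hedged sketch of where to start rather than a guaranteed fix, assuming the drive really does enumerate as /dev/sdg and the data on it is expendable:

        # Watch the kernel log while plugging the drive in; repeated USB
        # resets or I/O errors here usually point at a failing drive or
        # enclosure rather than a partition-table problem.
        dmesg | tail -n 30

        # Check the drive's own health report (package: smartmontools).
        # USB enclosures sometimes need '-d sat' to pass the query through.
        sudo smartctl -a /dev/sdg

        # If the hardware looks sound and the data is expendable, clear the
        # stale partition signatures before creating a new table in GParted:
        sudo wipefs -a /dev/sdg
        # ...or zero just the MBR and its partition table:
        sudo dd if=/dev/zero of=/dev/sdg bs=512 count=1

    Intermittent "No such file or directory" on a device node is more consistent with the drive dropping off the bus than with a bad table, which is why the dmesg/smartctl checks come first.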

    Read the article

  • Should I be looking for an alternative to Zen Cart as my business grows?

    - by MarkS
    I created a business website for a family business which is growing. It's my family, and I'm a software developer, but I don't want to reinvent the wheel or be a shopping cart programmer. For this business, I need the web store to "just work", but... it gets complicated...

    There are two parts to this business website. One of them is driven by WordPress, and I use the awesome Thesis theme. This is modern and flexible, and it saves me a lot of time on custom coding and styling. I couldn't be more pleased with this arrangement. The other part of the site is a Zen Cart store. Its administration and flexibility are frustrating, archaic Web 1.0. For the past few years, I keep hearing that the developers are working on a 2.0 version of Zen Cart, but they haven't communicated anything significant other than to say, "When it's ready, we'll let you know." For what I'm looking for in a cart, I would need to install 6-10 additional mods and do a lot of custom coding. I'm now willing to pay for a top-notch e-commerce solution for a small business that we can grow into a larger business over time.

    Requirements:
    - Extremely flexible shipping that lets us set up rules per product/category, tables of rates, calculated rates, max package weights, etc. (flexibility like that available with the CEON Advance Shipping Module for Zen Cart)
    - Coupons and gift certificates
    - Manual order entry for phone orders
    - Multi-channel support (we also sell on Amazon and eBay, use Google Base, and want to maintain one set of inventory and keep it current)
    - Decent SEO features
    - Reviews and star ratings on products
    - Easy social networking features for sharing, following, liking, etc.
    - Easy integration with AdWords and analytics tracking
    - Modern and very usable product and store administration (like I was saying, I'm spoiled by WordPress and Thesis)

    At the end of the day, I don't care if it's a hosted solution or if I have to host it myself. I just want something that is going to stay up to date and be regularly maintained and improved, and if I have to update it, it has to have something like the one-click update present in WordPress.

    Professional webmasters, if you had to run a store / website, but you had to spend your time focusing on your sales and marketing efforts rather than diffing PHP files and copying and tweaking them to change even the slightest details of your site, what would you choose?

    Read the article

  • Documentation in Oracle Retail Merchandising System (RMS) and Oracle Retail Fiscal Management System (ORFM), Release 13.2.4

    - by Oracle Retail Documentation Team
    Patch Release 13.2.4 of the Oracle Retail Merchandising System (RMS) and its module, Oracle Retail Fiscal Management (ORFM), is now available from My Oracle Support.

    End User Documentation Enhancements

    The following summarizes the highlights of changes made to the documentation in conjunction with the new Brazil-related functionality:

    - Foundation chapter in the Oracle Retail Merchandising System (RMS)/Sales Audit (ReSA) Brazil Localization User Guide: This chapter was updated with a non-base Localization Flexible Attribute Solution (LFAS) section that addresses the addition of several new custom attributes to Items and Suppliers through non-base LFAS for Brazil; it also addresses the extension of the Retail Tax Integration Layer (RTIL) through the Oracle Retail Merchandising System (RMS) and Oracle Retail Fiscal Management System (ORFM).
    - ORFM User Guide: The Purchase Order chapter was updated to include schedule-related updates for a Nota Fiscal. The Fiscal Documents chapter was updated to include information on creating a new NF and searching for details using Vendor Product Number.
    - Oracle Retail Fiscal Management/RMS Brazil Localization Implementation Guide: The Implementation Checklist chapter was updated with a note on multi-currency functionality. The Batch Processes chapter was updated with information on the NF EDI batch.

    The following summarizes the highlights of changes made to the documentation in conjunction with the new technical certifications (see the RMS 13.2.4 Release Notes for more information):

    - Installation Guides for RMS and for ORFM/RMS Brazil: These installation guides were updated extensively to account for the multiple technical certification enhancements in 13.2.4.
    - White Paper: How to Upgrade from WebLogic11g 10.3.3 to WebLogic11g 10.3.4 (Doc ID: 1432575.1): See the previous blog entry regarding this new White Paper.

    New Documents on My Oracle Support for Brazil Localization

    - Overview and Interfaces Tax Vendor Integration (Doc ID: 1424048.1): Oracle chose to integrate with a third-party tax expert to deliver the Brazilian solution. Oracle has built the Retail Tax Integration Layer (RTIL) as the key integration component to support the integration of the Oracle suite of products with external tax vendors. This paper addresses the RTIL integration interfaces with TaxWeb, providing guidance on the typical integration interfaces and operations that must be supported by other tax solutions in the Brazilian market.
    - Oracle Retail Fiscal Management/RMS Brazil Localization: Localization Flexible Attribute Solution (LFAS) (Doc ID: 1418509.1): This white paper covers the definition of custom attributes in the Localization Flexible Attribute Solution (LFAS) and enables retailers to perform data conversion changes. Retailers can add several new custom attributes to Items and Suppliers through non-base LFAS for Brazil and extend the Retail Tax Integration Layer (RTIL) through the Oracle Retail Merchandising System (RMS) and Oracle Retail Fiscal Management System (RFM).

    Documents Published in RMS and ORFM Release 13.2.4

    - Oracle Retail Merchandising System Release Notes
    - Oracle Retail Merchandising System Installation Guide
    - Oracle Retail Merchandising System User Guide and Online Help
    - Oracle Retail Sales Audit (ReSA) User Guide and Online Help
    - Oracle Retail Merchandising System Operations Guide
    - Oracle Retail Merchandising System Data Model
    - Oracle Retail Merchandising Batch Schedule
    - Oracle Retail Merchandising Implementation Guide
    - Oracle Retail POS Suite 13.4.1 / Merchandising Operations Management 13.2.4 Implementation Guide
    - Oracle Retail Fiscal Management Data Model
    - Oracle Retail Fiscal Management/RMS Brazil Localization Installation Guide
    - Oracle Retail Fiscal Management/RMS Brazil Localization Implementation Guide
    - Oracle Retail Fiscal Management User Guide and Online Help

    Read the article

  • Bluetooth Dial-Up Networking using Blueman

    - by leemes
    I want to configure a dial-up network connection via Bluetooth to my phone in order to access the internet. I use Lubuntu 12.04 (Ubuntu with LXDE), which has the Network Manager applet and the Blueman applet installed. I guess these are the same tools as on an Ubuntu installation, hence I ask my question on this site. My phone is a Sony Ericsson W810i, my laptop is a Lenovo S10-2, and my mobile phone provider is o2 Germany.

    I scanned for my mobile phone using the Blueman applet. I connected the dial-up network via the context menu - Serial Ports - Dial-up Networking. A notification bubble says that the connection is available on the interface named ppp0. ifconfig tells a different story: there is no ppp0 or anything similar. I only see my eth0 (wired ethernet), eth1 (wifi) and lo interfaces. Of course, I can't ping google.com, as the interface really seems not to be present at all.

    When the dial-up network is being connected, my mobile phone says that it is connecting to the internet. Afterwards, I see the active connection on the phone's screen. When successfully connecting with the phone using another computer, it behaves exactly the same, so I guess the phone isn't the problem.

    I don't know if I configured the dial-up connection correctly. I use the phone number *99#, which is very common among mobile ISPs, and I use the APN my ISP tells me to use. (I can't find the number on their support page, so I just use the default value *99#.) My mobile ISP is o2 Germany. There are How-Tos out there which use the Network Manager applet to set up a Bluetooth dial-up connection, but I can't see any Bluetooth devices in the context menu as on the screenshots in those How-Tos. Do you have any suggestions what might be wrong / what I should try?

    EDIT: When choosing "Network Access Point" in the device's context menu instead of Serial Ports - Dial-Up Networking, an interface bnep0 appears. However, no IPv4 address is assigned to that interface (only IPv6), and the phone does not connect to the internet. Am I missing something? Can I connect to the internet after setting up this network connection?
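    A few checks that might narrow this down; a sketch that assumes the Blueman serial link is supposed to create a ppp0 interface, and that the phone's DUN channel can be bound to /dev/rfcomm0 (an assumption, not something Blueman guarantees):

        # Is there a pppd process at all, and did it log anything?
        ps aux | grep [p]ppd
        grep pppd /var/log/syslog | tail -n 20

        # List every interface, including ones that are down
        ifconfig -a

        # If nothing shows up, try bringing the link up manually over the
        # Bluetooth serial port; *99# matches the number used above.
        sudo pppd /dev/rfcomm0 115200 defaultroute usepeerdns noauth \
            connect "chat -v '' AT OK ATD*99# CONNECT ''"

    If the manual pppd attempt works, the problem is in how Blueman hands the connection to Network Manager rather than in the phone or the ISP settings.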

    Read the article

  • Copying & Pasting Rows Between Grids in SQL Developer

    - by thatjeffsmith
    Apologies for slacking on the blogging front here lately. Still mentally hung over from Open World, and lots of things going on behind the scenes here in Oracle-land. Whilst (love that word) blogging is part of my job, it's not the ONLY part of my job. So, a super-quick and dirty 'trick' this morning.

    Copying a Query Result Record as a New Row in a Table

    Copy and paste is something everyone 'gets.' I don't know who we have to thank for that, whether it's Microsoft or Xerox, but it's been ingrained in our way of dealing with all things computers. Almost to the detriment of some of our users: they'll use copy and paste when perhaps our Export feature is superior, but I digress. Where it does work just fine is when you want to create a new row in your table that matches a row you have retrieved from an executed query.

    Just click in the gutter or row number to get the entire row selected. Once you have your data selected, do your thing, i.e. Ctrl+C or Command/Apple+C or whatever. Now open your view or table editor, go to the Data page, and ask for a new row. New record, no data. Paste in the data from the clipboard. It's smart enough to paste the separate values out to the separate columns. The clipboard saves the day, again.

    If your column orders are different, just change the order in the grids. If you have extra information, don't copy the entire row.

    I know, I know: "Jeff, this is too simple, why are you wasting our time here?" It seems intuitive, but how many of you actually tried this before reading it just now? I seem to get more positive feedback from the very basic user interface 101 tips than from the esoteric click-click-click-ctrl-shift-click tricks I prefer to post. Lots of interesting stuff on tap, so stay tuned!

    Read the article

  • Create Custom Playlists in Windows Media Player 12

    - by DigitalGeekery
    A playlist is a group of songs or media files that are grouped together based on a theme. Today we'll look at how to create your own custom playlists in Windows Media Player 12.

    Create Custom Playlists

    Open Windows Media Player and switch to the Library view. Click on the Play tab at the top right to reveal the List pane. If you currently have songs listed in the List pane, you can remove them by clicking Clear list. To add songs to your playlist, right-click on the song title, select Add to, and then click Play list. You can also drag and drop the song title right onto the playlist area. Hold down the Control [Ctrl] key while clicking to select more than one track at a time.

    Changing the Playlist Order

    You can click and drag each item in your playlist to move it up or down. You can also right-click on the title and select Move up or Move down, or completely remove a track from your playlist. You have the option to shuffle your list by clicking the Options list icon and selecting Shuffle list from the dropdown list. By selecting Sort list by, you can sort by Title, Artist, Album, Release date, and more.

    Saving and Naming Your Playlist

    To save your playlist, click on the Save list button. You'll be prompted to enter a name for your playlist in the text box. Click away when you are finished. Windows Media Player will display your most recent playlists in the Navigation panel. Simply select the playlist anytime you want to listen to it.

    Conclusion

    Custom playlists are a great way to group your music by themes such as mood, genre, activity, season, and more. If you are new to Windows Media Player 12, check out our post on managing your music in Windows Media Player.

    Read the article

  • Coding different states in Adventure Games

    - by Cardin
    I'm planning out an adventure game and can't figure out the right way to implement the behaviour of a level depending on the state of story progression. My single-player game features a huge world where the player has to interact with people in a town at various points in the game. However, depending on story progression, different things are presented to the player; for example, the Guild Leader changes location from the town square to various spots around the city; doors only unlock at certain times of the day, after finishing a particular routine; and different cut-scene/trigger events happen only after a particular milestone has been reached.

    I naively thought of using a switch{} statement initially to decide what the NPC should say or where he could be found, and making quest objectives interactable only after checking a global game_state variable's condition. But I realised I would quickly run into a lot of different game states and switch cases in order to change the behaviour of an object. That switch statement would also be massively hard to debug, and I guess it might also be hard to use in a level editor.

    So I thought, instead of having a single object with multiple states, maybe I should have multiple instances of the same object, each with a single state. That way, if I use something like a level editor, I can put an instance of the NPC at every location he could ever appear at, and also an instance for each conversation state he has. But that means there'll be a lot of inactive, invisible game objects floating around the level, which might be trouble for memory, or simply hard to see in a level editor, I don't know.

    Or, simply, make an identical but separate level for each game state. This feels like the cleanest and most bug-free way to do things, but it feels like massive manual work making sure each version of the level really is identical to the others.

    All my methods feel so inefficient, so to recap my question: is there a better or standardised way to implement the behaviour of a level depending on the state of story progression?

    PS: I don't have a level editor yet - thinking of using something like the JME SDK or making my own.
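    One common alternative to both giant switch statements and duplicated levels is to make placement data-driven: each object carries a condition over the global story state, and the level activates only the instances whose condition currently holds. A hedged Java sketch (JME is mentioned above, hence Java); every class and field name here is invented for illustration:

        import java.util.List;
        import java.util.function.Predicate;

        // Hypothetical story state: milestones reached, time of day, etc.
        record StoryState(int milestone, int hourOfDay) {}

        // One possible placement of an NPC, guarded by a condition.
        class Placement {
            final String location;
            final String dialogueId;
            final Predicate<StoryState> condition;

            Placement(String location, String dialogueId,
                      Predicate<StoryState> condition) {
                this.location = location;
                this.dialogueId = dialogueId;
                this.condition = condition;
            }
        }

        class GuildLeader {
            // Ordered list: the first placement whose condition holds wins.
            final List<Placement> placements = List.of(
                new Placement("town_square", "intro",
                        s -> s.milestone() < 2),
                new Placement("guild_hall", "quest1",
                        s -> s.milestone() >= 2 && s.hourOfDay() >= 9),
                new Placement("docks", "finale",
                        s -> s.milestone() >= 5)
            );

            Placement current(StoryState state) {
                return placements.stream()
                        .filter(p -> p.condition.test(state))
                        .findFirst()
                        .orElse(null); // not in the world right now
            }
        }

    Because the conditions are plain data attached to each placement, a level editor can list and edit them directly, and debugging reduces to asking "which conditions are true in this state?" instead of tracing a switch.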

    Read the article

  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand.

    I'm currently building a music notation and tablature editor (in JavaScript). I've come to a point where the core parts of the program are more or less there; all functionality I plan to add at this point will really build off the foundation that I've created. As a result, I want to refactor to really solidify my code. I'm using an API called VexFlow to render notation. Basically, I pass the parts of the editor's state to VexFlow to build the graphical representation of the score. Here is a rough and stripped-down UML diagram showing the outline of my program:

    In essence, a Part has many Measures, which have many Notes, which have many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional.

    There are a few problems with my design, because my Measure class contains the majority of the entire application's view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with the actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and the order they are supposed to be created in, for both the VexFlowStaff and the VexFlowNotes.

    I'm not looking for a specific answer, as you'd need a much deeper understanding of my code; just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class, in addition to any extraneous logic for their creation. But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything, so that ScoreView.render() cascades down to each PartView, calls render for each one, and cascades down into each MeasureView, etc.?

    Again, I just have no idea what direction to go in. The more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear.
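    For what it's worth, the parallel view tree sketched in the question might look roughly like this; a hedged outline only, with invented names and the VexFlow-specific work left as comments:

        // Hedged sketch of a parallel view hierarchy. Names are invented;
        // the VexFlow-specific drawing would live inside each render().
        class MeasureView {
          constructor(measure) { this.measure = measure; }
          render(context) {
            // build the VexFlow stave and notes for this.measure here
          }
        }

        class PartView {
          constructor(part) {
            this.part = part;
            this.measureViews = part.measures.map(m => new MeasureView(m));
          }
          render(context) {
            this.measureViews.forEach(mv => mv.render(context));
          }
        }

        class ScoreView {
          constructor(score) {
            this.partViews = score.parts.map(p => new PartView(p));
          }
          render(context) {
            // cascade: ScoreView -> PartView -> MeasureView
            this.partViews.forEach(pv => pv.render(context));
          }
        }

    The model classes (Part, Measure, Note) then stay free of drawing code, and the view tree is rebuilt or updated whenever the model changes, which is exactly the responsibility the current Measure class is carrying on its own.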

    Read the article

  • Transfer websites and domains to new server

    - by Albert
    We currently have around 40 websites and 80+ domains/sub-domains in a shared 1&1 hosting package, and we just acquired a managed dedicated server with 1&1 as well. Now it's time to start transferring everything over to the new server. Transferring just the websites and databases wouldn't be a problem; it would take time, but it's pretty straightforward. The problem comes when transferring the domains. Let me explain why. Many of the websites we have are accessible via sub-domains of a parent domain. Ideally, we would like to transfer the sites one by one, in order to check for each one that everything works fine on the new server. However, since we also need to transfer the domain so it's managed on the new server, all the websites using that domain would need to be on the new server already before transferring that domain, which rules out the "one by one" philosophy. Another issue is the downtime when transferring the domain, from the moment it stops working in the hosting package until it becomes active on the new server. I believe there's nothing we can do here. So my question is whether there's any way we can do the "one by one" transfer of the websites (and their corresponding sub-domains) in the circumstances described above. One idea I had would be:

    1. Let's say we have website A, which is accessible using subdomain.mydomain.com (and there are many other websites accessible via other sub-domains of mydomain.com)
    2. Transfer the files of website A to the new server
    3. Point a test domain on the new server to website A's folder (the new server comes with a "test" domain)
    4. Test whether website A works with that "test" domain
    5. In the old hosting, somehow point the real sub-domain (subdomain.mydomain.com) to the new location of website A, in a way that users always see the same URL as before
    6. Repeat 2-5 for every website belonging to the same domain
    7. Once all are working on the new server, do the actual transfer of the domain to the new server, and then re-create all the sub-domains and point them to their corresponding websites

    That way, users wouldn't notice that there's been a change (except for a small downtime of the websites when doing the domain transfer). The part I'm not sure about is point 5 above. Is there any way to do that? I mean, do it in a way that users see the original domain all the time in their browser, even for internal pages - so not only for the "home page", which would be subdomain.mydomain.com, but also, for example, for the contact page, which would be subdomain.mydomain.com/contact.php. Is there any way to do this? Or are we SOL and going to have to transfer everything at the same time?
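
    Regarding point 5, one common mechanism is a reverse proxy on the old host, so the browser keeps showing subdomain.mydomain.com (including internal pages like /contact.php) while every request is actually served by the new machine. This is only a sketch, assuming the old host runs Apache with mod_proxy/mod_proxy_http enabled and allows vhost edits - which a shared 1&1 package may well not - and the IP below is a placeholder:

    <VirtualHost *:80>
        ServerName subdomain.mydomain.com

        # Forward every request (home page and internal pages alike) to the
        # new server; visitors keep seeing the old URL in their browser.
        ProxyPreserveHost On
        ProxyPass        / http://203.0.113.10/
        ProxyPassReverse / http://203.0.113.10/
    </VirtualHost>

    ProxyPreserveHost keeps the original Host header, so the new server can still route requests by sub-domain. If the shared host doesn't permit proxying, the usual fallback is lowering the DNS TTL and accepting a short cut-over window per sub-domain.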

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow easy publishing of content and databases to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express
    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (things like PHP or classic ASP, for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0
    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new; it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
    Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine
    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework; it is just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared toward minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like, along with some comment annotations:

    <!DOCTYPE html>
    <html>
        <head>
            <title></title>
        </head>
        <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />

            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- conditional code -->
            @if(true) {
                <!-- literal text embedding in code -->
                <text>
                true
                </text>;
            }
            else
            {
                <!-- literal text embedding in code -->
                <text>
                false
                </text>;
            }
        </body>
    </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or the GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>); it is also easier to see at a glance what’s happening in the markup code. Razor is also a better fit for Microsoft’s vision of ASP.NET MVC: it’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well; it makes it much easier to create script- or metadata-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data-merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4, die!) on this engine. Razor also better fits the “view-based” approach, where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor becomes the new standard view engine for MVC in the future; in fact, there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment
    To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites, in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access
    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

    @{
        var db = Database.OpenFile("FirstApp.sdf");
        string sql = "select * from customers where Id > @0";
    }
    <ul>
    @foreach(var row in db.Query(sql, 1)){
        <li>@row.FirstName @row.LastName</li>
    }
    </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters as simple values. The result is returned as a collection of rows, each of which exposes dynamic properties for its columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes. Note that these queries are string-based rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this wasn’t included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data-object methods to perform simple data tasks in one line of code, is common in scripting languages and is a good match for folks coming from PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models created outside of WebMatrix in separate assemblies, as required.

    The good, the bad, the obnoxious: it’s still .NET
    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
    In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a set of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy; there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who needs WebMatrix? Uhm… good question
    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks, and other hardcore PHP developers, back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff beyond the handful of simple helpers, you are still going to have to write some C#, VB, or other .NET code. If the target is a hobbyist/amateur/non-programmer, the learning curve isn’t made any easier by WebMatrix; it’s just shifted a bit further along in your development endeavor, to the point where you run out of canned components supplied by Microsoft or the community. The database helpers are interesting, and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these, or to pick them up from countless available libraries), but there it is. It also shows that there are plenty of developers out there who are more interested in getting stuff done easily than in following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development, rather than the bread-and-butter scenarios that many developers care about in getting their work accomplished.
    And that, in the end, may be WebMatrix’s main raison d'être: to bring some focus back at Microsoft on the fact that simpler, more high-level solutions are needed to appeal to non-high-end developers, while still providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components), which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, missing required functionality: debugging and IntelliSense especially, but also general editor support. It’s not clear whether these omissions are because the product is still in an early alpha release or whether it’s simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, it’s WebMatrix’s technology pieces (which are really independent of the WebMatrix product) that are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had, and have complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn’t made inconsistent by the baggage of non-implemented Web Forms features that don’t work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. For an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and offer nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later, there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we’ve been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come.
    WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET, IIS7

    Read the article

  • Should I be put off a junior role that uses an online development test?

    - by Ninefingers
    I've applied for a junior development role - or rather, been found by a recruiter looking for a developer. In order to get to the telephone interview stage I've been asked to sit one of those online coding assessments. This wasn't quite what I expected. I consider myself a fairly good developer for my age and experience, but I've no illusions about being Don Knuth or anything. The test was a series of incredibly abstruse questions asking about the results of various obscure evaluations. About 30 minutes in, I was thinking to myself that I hadn't intended to enter an obfuscated-code contest/code-golf exercise. After my last telephone interview I was asked to build something. I did. That seemed fair. "Go away and work this out" is much closer to my in-office experience of programming than "please evaluate this combination of lambdas, filters, maps, lists, tuples, etc." So I'm a little put off, to be honest. I never claimed to know the language inside out or all the little corner cases. My questions, then: Should I be put off? Why, or why not? Are these kinds of tests what I should be expecting for junior roles? Should I learn stuff exam-style? That seems to be the objective of these tests, for which you are timed and not supposed to use references or books. Normally, in the course of development I have a fairly good idea of basic types, rules, flow control and whatever. Occasionally I'll come up on something I need to use a regex for and have to go and remind myself of the exact piece of syntax I need, if trying what I think should work doesn't. Or I'll come up against a module I've not used before and go and look it up. For example, if I wanted to write a server using sockets in C right now, I'd probably check the last piece of code I wrote doing that (and/or the various books I have) and work from there. Chances are I probably couldn't do it exactly from scratch and from memory, although I can tell you you'd need a socket(), bind(), listen() and accept() call, and you might also want select() depending on whether you intend to pthread_create or not. So I know what the calls are, but not their specific parameter lists. What are your experiences, if you are a recruiting manager? Are you after programmers who can quote you the API, or do you not mind if your programmers have a few books on their desk and google function calls every so often?
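
    For what it's worth, the server outline described above translates to very little code once the references come out. Here is a minimal sketch in Java rather than C (Java's ServerSocket rolls the socket()/bind()/listen() calls into its constructor, leaving accept() explicit), offered only to illustrate that the call-level details really are look-up material:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal echo server: the ServerSocket constructor covers
    // socket()/bind()/listen(); accept() blocks until a client connects.
    public class EchoServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(4000)) {
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()));
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        String line = in.readLine();
                        if (line != null) {
                            out.println(line); // echo one line back, then hang up
                        }
                    }
                }
            }
        }
    }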

    Read the article

  • Twitter API for Java - Hello Twitter Servlet (TOTD #178)

    - by arungupta
    There are a few Twitter APIs for Java that allow you to integrate Twitter functionality into a Java application. This is yet another API, built using the JAX-RS and Jersey stack. I started this effort earlier this year and kept delaying sharing it because I wanted to provide a more comprehensive API. But I've delayed enough, and I'm releasing it as a work-in-progress. I'm happy to take contributions in order to evolve this API and make it complete, useful, and robust. Drop a comment on the blog if you are interested or ping me at @arungupta. How do you get started? Just add the following to your "pom.xml":

    <dependency>
        <groupId>org.glassfish.samples</groupId>
        <artifactId>twitter-api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>

    The implementation of this API uses the Jersey OAuth filters for authentication with Twitter, so the following dependencies are required for any API that requires authentication - which is pretty much all the APIs ;-)

    <dependency>
        <groupId>com.sun.jersey.contribs.jersey-oauth</groupId>
        <artifactId>oauth-client</artifactId>
        <version>${jersey.version}</version>
    </dependency>
    <dependency>
        <groupId>com.sun.jersey.contribs.jersey-oauth</groupId>
        <artifactId>oauth-signature</artifactId>
        <version>${jersey.version}</version>
    </dependency>

    Once the dependencies are added to your project, inject the Twitter API in your Servlet (or any other Java EE component) as:

    @Inject Twitter twitter;

    Here is a simple non-secure invocation of the API to get you started:

    SearchResults result = twitter.search("glassfish", SearchResults.class);
    for (SearchResultsTweet t : result.getResults()) {
        out.println(t.getText() + "<br/>");
    }

    This code returns the tweets that match the query "glassfish". The source code for the complete project can be downloaded here. Download it, unzip, and mvn package will build the .war file. Then deploy it on GlassFish or any other Java EE 6 compliant application server! The source code for the API also acts as the javadocs and can be checked out from here. A more detailed sample using security and several other APIs from this library is coming soon!
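
    Putting the pieces together, a "Hello Twitter" servlet along the lines the post describes might look like the sketch below. The servlet API usage is standard Java EE 6; the twitter-api package name in the imports is a guess, since the post doesn't spell it out, so adjust it to the actual artifact:

    import java.io.IOException;
    import java.io.PrintWriter;

    import javax.inject.Inject;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical package name for the sample API; check the actual artifact.
    import org.glassfish.samples.twitter.SearchResults;
    import org.glassfish.samples.twitter.SearchResultsTweet;
    import org.glassfish.samples.twitter.Twitter;

    @WebServlet("/hello-twitter")
    public class HelloTwitterServlet extends HttpServlet {

        @Inject
        Twitter twitter; // injected by CDI, as described in the post

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();

            // Non-secure search invocation, mirroring the snippet above.
            SearchResults result = twitter.search("glassfish", SearchResults.class);
            for (SearchResultsTweet t : result.getResults()) {
                out.println(t.getText() + "<br/>");
            }
        }
    }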

    Read the article

  • A Complete Customer Experience Solution (3 of 3 in 'No Customer Left Behind' Series)

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development

    In my previous post, I talked about taking three concrete steps to improve your customers' overall experiences: 1) understand your customer, 2) empower your ecosystem, and 3) adapt your business. To do these effectively and efficiently, it's important to find the right technology that can bridge the gaps across your channels, interactions, departments, and repositories. Oracle has spent the past three years and more than six billion dollars acquiring and developing some of the world's best-of-breed applications. The result is the most comprehensive customer experience (CX) portfolio offering in the world, bar none:

    ATG - Best in Class Selling Experiences
    FatWire - Best in Class Marketing Experiences
    InQuira - Best in Class Support Experiences
    Endeca - Best in Class Search Experiences
    RightNow - Best in Class Service Experiences
    Vitrue & Involver - Best in Class Social Marketing
    Collective Intellect - Best in Class Social Listening

    We don't expect organizations to eat the CX elephant in one bite, nor should they try to. There are key strategic initiatives within each of the four main pillars of our customer experience offering for which we deliver solutions:

    1. Customer Experience for Marketing: Social Listening and Engagement; Social Marketing; Marketing Websites; Demand Generation and Lead Management; Marketing and Loyalty Management
    2. Customer Experience for Commerce: Search, Navigation & Content Delivery; Cross-Channel Commerce; Targeting & Product Recommendations; Social Commerce; Order Management & Fulfillment; Retail Store Operations
    3. Customer Experience for Sales: Sales Force Automation; Social Selling; Territory & Quota Management; Revenue Forecasting; Partner Relationship Management; Quote to Cash; Incentive Compensation
    4. Customer Experience for Service: Cross-Channel Customer Service; Knowledge Management; Social Customer Service; Eligibility Management; Contracts, Assets, and Entitlements; Industry-Specific Solutions; eBilling

    Oracle's customer experience portfolio is socially infused at each layer of our pillars rather than simply bolted on as a side process. This combines with the power of the Cloud to run the parts of the solution that need the access, efficiency, and agility of a managed infrastructure, while you get compliance control from the on-premise backbone infrastructure systems that run your business and don't change that often. Please take advantage of our teams of Oracle customer experience professionals and our key agency and technology partner ecosystem. They can help you develop strategic solution roadmaps that build and deliver customer experience and that are tailored to your business needs and objectives. No one has built a better customer experience portfolio to manage the entire customer journey than Oracle. It is backed by CX thought leadership programs, a commitment from our executives, and a worldview that your technology decisions must be driven by your customer experiences to succeed. If you'd like to follow up on this conversation, please leave a comment or contact me at [email protected]. You can get more information on Oracle's complete customer experience solution here.

    Read the article

  • Second Edition of Regular Expressions Cookbook Has Been Published

    - by Jan Goyvaerts
    The first edition of Regular Expressions Cookbook was published in May of 2009. It quickly became a bestseller, briefly holding the #1 spot in computer books on Amazon.com. It also had staying power: the ebook version was O'Reilly's top seller during the whole year of 2010. So it's no surprise that our editor at O'Reilly soon contacted us about a second edition. With Steven and me both always very busy, those plans were delayed until finally both of us found the time to update the book. Work started in January. Today you can buy your own copy of the second edition of Regular Expressions Cookbook. O'Reilly's online shop sells the eBook in DRM-free ePub, Mobi, and PDF formats for $39.99 and the print version for $49.99. These are the list prices for the eBook and the print book. If you're looking for a discount and free shipping of the print book, you can pre-order on one of the various Amazon sites. Deliveries should start soon. The discount rates differ and are subject to change. Amazon will also pay me an affiliate commission if you use one of these links, which pretty much doubles the income I get from the book.

    Amazon.com: free shipping to the USA.
    Amazon.co.uk: free shipping to the UK and Ireland.
    Amazon.fr: free shipping to France, Monaco, Luxembourg, and Belgium.
    Amazon.de: free shipping to Germany, Austria, Switzerland, Luxembourg, Liechtenstein, Belgium, and The Netherlands.

    If you don't want to wait for the print book to arrive, the Kindle edition is already available for instant delivery. The Kindle edition works on Amazon's Kindle hardware, and on PCs via Amazon's Kindle software (free download): Amazon.com, Amazon.co.uk, Amazon.fr, Amazon.de. I'll blog more about the book in the coming days and weeks with details about what's new in the second edition.

    Read the article

  • Au revoir, Python?

    - by GuySmiley
    I'm an ex-C++ programmer who's recently discovered (and fallen head-over-heels for) Python. I've taken some time to become reasonably fluent in Python, but I've encountered some troubling realities that may lead me to drop it as my language of choice, at least for the time being. I'm writing this in the hope that someone out there can talk me out of it by convincing me that my concerns are easily circumvented within the bounds of the Python universe. I picked up Python while looking for a single flexible language that will allow me to build end-to-end working systems quickly on a variety of platforms. These include:

    - web services
    - mobile apps
    - cross-platform client apps for PC

    Development speed is more of a priority for the time being than execution speed. However, in order to improve performance over time without requiring major rewrites or architectural changes, I think it's imperative to be able to interface easily with Java. That way, I can use Java to optimize specific components as the application scales, without throwing away any code. As far as I can tell, my requirement for an enterprise-capable, platform-independent, fast language with a large developer base means it would have to be Java; .NET or C++ would not cut it due to their respective limitations. Also, Java is clearly de rigueur for most mobile platforms. Unfortunately, tragically, there doesn't seem to be a good way to meet all these demands. Jython seems to be what I'm looking for in principle, except that it appears to be practically dead, with no one developing, supporting, or using it to any great degree. And Jython also seems too married to the Java libraries, as you can't use many of the CPython standard libraries with it, which has a major impact on the code you end up writing. The only other option I can see is to use JPype wrapped in marshalling classes, which may work, although it seems like a pain and I wonder if it would be worth it in the long run. On the other hand, everything I'm looking for seems to be readily available by using JRuby, which seems to be much better supported. As things stand, I think this is my best option. I'm sad about this because I absolutely love everything about Python, including the syntax. The Perl-like constructs in Ruby just feel like such a step backwards to me in terms of readability, but at the end of the day most of the benefits of Python are available in Ruby as well. So I ask you - am I missing something here? Much of what I've said is based on what I've read, so is this summary of the current landscape accurate, or is there some magical solution to the Python-Java divide that will snuff out these concerns and allow me to stay comfortably in my happy Python place?

    Read the article

  • Diagnosing the problem when Canon printer fails to print under Ubuntu

    - by MMA
    I understand that the issue of Canon printers under Linux has been covered in a number of posts; actually, one of them was started by me. After inputs from others I successfully printed with a Canon LBP6000 from my Ubuntu machine for around a year. If it failed to print, restarting the daemon using this homemade script coaxed the printer into printing:

    #!/bin/bash
    pkill -9 -x ccpd
    pkill -9 -x captmoncnabc
    /etc/init.d/ccpd start
    /etc/init.d/ccpd status

    Recently, I have not been successful any more, or only on a limited and sporadic basis. Sometimes it prints when turned on after logging in, sometimes when the driver is reinstalled. I keep trying random steps (call them abracadabras) until I get success. Again, success does not always come. I fret for hours only to get a single page printed, and I lose precious time on the issue of printing. I have read and read all the documents available over the Internet. However, if you please notice, none of the guides, articles, or tutorials (these are too many to list here) seem to deal with diagnosing the problem when the printer fails to print. They tell you where to find the drivers, how to install them, or which script to run to make the installation process automatic. Yes, some of the articles or comments suggest a step to try, without any systematic order. But these mostly fail to suggest steps based on symptoms. This morning, my Canon LBP6000 failed to print. After some time, a pop-up reported a system error, the details of which were c3pldrv crashed with SIGSEGV in write(). When I search for this error, I find a number of articles, including this one. None of these are actually helpful; mostly, these are "me too" and "tell me if you find anything". Running captstatusui -P LBP6000 produced this (screenshot omitted). Yes, the printer is connected and actually turned on. I believe that there are a number of frustrated Canon printer users out there like me, but there is no definitive step-by-step guide to systematically diagnosing a non-printing printer. Do you think you could provide your diagnostic inputs so that a systematic document can be built? Maybe we will want to tell Ubuntu users to stay away from Canon printers. But as I believe, as a Linux user of more than fifteen years, such a scenario is not acceptable any more. Maybe this was acceptable in the infancy days of Linux, but not today. I am using Ubuntu 12.04; by the way, I prefer LTS versions.

    Read the article

< Previous Page | 492 493 494 495 496 497 498 499 500 501 502 503  | Next Page >