Search Results

Search found 24879 results on 996 pages for 'prime number'.


  • Do game-theoretic considerations stand in the way of this market-based game-mechanic achieving its goals?

    - by BerndBrot
    Mechanic
    The mechanic is called "market manipulation" and is supposed to work like this:
    Players can enter the London Stock Exchange (LSE).
    The LSE displays the stock prices of 8 to 10 companies and derivatives. This number is kept relatively small to ensure that players will collide in their efforts to manipulate the market in their favor.
    The prices are calculated based on: the real-world prices of these companies and derivatives (in real time), any market manipulations conducted by the players, and any market corrections applied by the system.
    Players can buy and sell shares with cash, a resource in the game, at the current in-game market value.
    Players can manipulate the market, i.e. make the price of a share rise or fall by some amount over a certain period of time. Manipulating the market requires spending certain in-game resources and is therefore limited.
    The system continuously corrects market manipulations by letting the in-game prices converge towards their real-world counterparts at a rate of 2% of the difference between the two per hour. Because of this market correction mechanism, pushing prices up (or screwing them down) becomes increasingly difficult the higher (or lower) the price already is.
    Goals
    Players are supposed to collide (and have incentives to collide) in their efforts to manipulate the market in their favor, especially when it comes to manipulation efforts by different groups.
    Prices should not revolve around any equilibrium points; the more variance the better.
    Band-wagoning should always involve risk: recognizing that prices have started rising should not be a sure sign that they will keep rising, so that everybody could make easy profits even without manipulating the market themselves.
    Question
    Are there any game-theoretic considerations that prevent the mechanic from achieving these goals?
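    The correction rule above is just an exponential pull toward the real-world price. A minimal sketch of one share's hourly tick, assuming only what the mechanic states (the 2%-per-hour rate); the Share type and all names here are hypothetical:

        // One hourly market-correction tick: the in-game price moves toward the
        // real-world price by 2% of the current gap, so manipulated prices decay
        // back exponentially over time.
        using System;

        class Share
        {
            public decimal InGamePrice;     // price players see and trade at
            public decimal RealWorldPrice;  // live reference price

            const decimal HourlyCorrectionRate = 0.02m; // 2% of the difference per hour

            public void ApplyHourlyCorrection()
            {
                InGamePrice += (RealWorldPrice - InGamePrice) * HourlyCorrectionRate;
            }
        }

        class Demo
        {
            static void Main()
            {
                // A share manipulated up to 150 while the real-world price sits at 100.
                var share = new Share { InGamePrice = 150m, RealWorldPrice = 100m };
                for (int hour = 1; hour <= 24; hour++)
                {
                    share.ApplyHourlyCorrection();
                }
                // After 24 hours the 50-point premium has decayed to 50 * 0.98^24, about 30.8.
                Console.WriteLine(share.InGamePrice); // roughly 130.8
            }
        }

    One consequence worth noting for the band-wagoning goal: the further a price has been pushed from its real-world anchor, the larger the absolute pull back each hour, which is exactly the "increasingly difficult" effect the mechanic describes.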

    Read the article

  • Zipcodes in CSV Generation

    - by BRADINO
    When exporting to CSV format and then opening the file in a spreadsheet program like Excel, zip codes that start with one or more zeroes have the leading zeros stripped off. This happens because the spreadsheet treats that column as integers, and leading zeros in integers are meaningless. A quick and dirty trick to force Excel (hopefully you are using OpenOffice) to display the full zip code is to wrap it in double quotes and put an equals sign in front of it, forcing it to be treated as a string:

        $zipcode = '00123';
        $data = '="' . $zipcode . '"';

    So if you are doing a straight query-to-CSV export using the fputcsv function, it would look something like this. Basically, just overwrite the value in the row and then continue along:

        while ($row = mysql_fetch_assoc($query)) {
            $row['zipcode'] = '="' . $row['zipcode'] . '"';
            fputcsv($output, $row);
        }

    The field arrives in the CSV as ="00123", which the spreadsheet evaluates as the text string 00123, leading zeros intact.

    Read the article

  • DBA Command line options

    - by Anthony Shorten
    There are a number of database utilities supplied with the installation of Oracle Utilities Application Framework based products. These are typically run in interactive mode, where the utility prompts you for values and then executes the required functionality. Did you know that the utilities also have command line options that allow you to run them in silent mode as well? You can access the command line options by specifying the -h option on the command line. Here is an example of the oragensec command line options:

        oragensec -d <Owner,OwnerPswd,DbName> -u <Database Users> -r <ReadRole,UserRole> -l <logfile> -h

    where:
    -d <Owner,OwnerPswd,DbName>  Database connection information for the target database, e.g. spladm,spladm,DB200ODB.
    -u <Database Users>  A comma-separated list of database users for which synonyms need to be created, e.g. spluser,splread.
    -r <ReadRole,UserRole>  Optional. Names of the database roles with read and read-write privileges. Default roles are SPL_READ and SPL_USER, e.g. spl_read,spl_user.
    -l <logfile>  Optional. Name of the log file.
    -h  Help.

    The command line options allow the DBA to automate the execution, either via a script or from some utility that can execute other utilities. This option applies to the majority of DBA utilities supplied with the product. Take a look at the others.
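    For example, a fully silent run built from the sample values given in the help text above (the log file name here is just an illustrative assumption) would look like this:

        oragensec -d spladm,spladm,DB200ODB -u spluser,splread -r spl_read,spl_user -l oragensec.log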

    Read the article

  • Repairing yum's repositories on RHEL5

    - by The Rook
    I am using RHEL5, and yum is missing many packages, such as apache, php, and all the php libraries. I have added the rpmforge repository, but I am still missing these packages. This is an i686 machine and there might not be many i686 packages available; I think that if I force i386 packages I'll have serious problems. How do I make sure I have a large number of compatible packages on a RHEL5 system? I didn't install this system; is it normal for RHEL5 to have virtually no useful packages in yum? How do RHEL5 administrators use yum without introducing conflicts with currently installed packages? Should I ditch yum and use apt? Thanks!

    Read the article

  • Oracle HRMS API –Update Employee Fed Tax Rule

    - by PRajkumar
    API -- pay_federal_tax_rule_api.update_fed_tax_rule

    Example --

        DECLARE
           lb_correction                BOOLEAN;
           lb_update                    BOOLEAN;
           lb_update_override           BOOLEAN;
           lb_update_change_insert      BOOLEAN;
           ld_effective_start_date      DATE;
           ld_effective_end_date        DATE;
           ln_assignment_id             NUMBER        := 33561;
           lc_dt_ud_mode                VARCHAR2(100) := NULL;
           ln_object_version_number     NUMBER        := 0;
           ln_supp_tax_override_rate    PAY_US_EMP_FED_TAX_RULES_F.SUPP_TAX_OVERRIDE_RATE%TYPE;
           ln_emp_fed_tax_rule_id       PAY_US_EMP_FED_TAX_RULES_F.EMP_FED_TAX_RULE_ID%TYPE;
        BEGIN
           -- Find Date Track Mode
           dt_api.find_dt_upd_modes
           (   -- Input data elements
               p_effective_date        => TO_DATE('12-JUN-2011'),
               p_base_table_name       => 'PER_ALL_ASSIGNMENTS_F',
               p_base_key_column       => 'ASSIGNMENT_ID',
               p_base_key_value        => ln_assignment_id,
               -- Output data elements
               p_correction            => lb_correction,
               p_update                => lb_update,
               p_update_override       => lb_update_override,
               p_update_change_insert  => lb_update_change_insert
           );

           IF ( lb_update_override = TRUE OR lb_update_change_insert = TRUE ) THEN
              -- UPDATE_OVERRIDE
              lc_dt_ud_mode := 'UPDATE_OVERRIDE';
           END IF;

           IF ( lb_correction = TRUE ) THEN
              -- CORRECTION
              lc_dt_ud_mode := 'CORRECTION';
           END IF;

           IF ( lb_update = TRUE ) THEN
              -- UPDATE
              lc_dt_ud_mode := 'UPDATE';
           END IF;

           -- Update Employee Fed Tax Rule
           pay_federal_tax_rule_api.update_fed_tax_rule
           (   -- Input data elements
               p_effective_date           => TO_DATE('20-JUN-2011'),
               p_datetrack_update_mode    => lc_dt_ud_mode,
               p_emp_fed_tax_rule_id      => 7417,
               p_withholding_allowances   => 100,
               p_fit_additional_tax       => 10,
               p_fit_exempt               => 'N',
               p_supp_tax_override_rate   => 5,
               -- Output data elements
               p_object_version_number    => ln_object_version_number,
               p_effective_start_date     => ld_effective_start_date,
               p_effective_end_date       => ld_effective_end_date
           );

           COMMIT;
        EXCEPTION
           WHEN OTHERS THEN
              ROLLBACK;
              dbms_output.put_line(SQLERRM);
        END;
        /
        SHOW ERR;

    Read the article

  • BizTalk 2009 - The Scope of the Table Looping Functoid

    - by StuartBrierley
    When mapping in BizTalk you will find there are times when you need to map from flat and dispersed elements in your source schema to a repeated record with child elements in your destination schema. Below is an example of how you can make use of the Table Looping Functoid to bring together these flat elements and create your repeated group. Although this example is purposely simple, I have previously encountered this issue on a much more complex scale when mapping the response from a credit scoring agency, where all the applicant details were supplied in separate parts of a very flat schema. Consider the source and destination schemas as follows: Although the Table Looping Functoid states that the first input must be a scoping element linked from a repeating group, you can actually also make use of a constant value. In this case I know that the source schema always contains two people, so I set this to two. Then you need to set the number of columns in your table, in this case 2 (name and sex), and link all the required fields from the source schema. Following this you can configure the table. You can then add the Table Extractor functoids and complete the map. If you now validate this map you will see that BizTalk will warn you about the scoping link for the Table Looping Functoid, but this can be safely ignored:

        C:\Code\Developer Folders\Stuart Brierley\Test Mapping\TableLooping.btm: warning btm1071: A first input of the Table-Looping functoid must be a link from a Source Tree Node which acts as the scoping parameter.

    Testing the map will produce the following output:

    Read the article

  • CodePlex Daily Summary for Wednesday, January 12, 2011

    Popular Releases

    - Google URL Shortener API for .NET: Google URL Shortener API v1: According to the following specification: http://code.google.com/apis/urlshortener/v1/reference.html
    - jGestures: a jQuery plugin for gesture events: 0.81: Added event substitution for IE; updated index.html.
    - StyleCop for ReSharper: StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release. Features: huge focus on performance around the violation scanning subsystem; caching added to reduce IO operations around reading and merging of settings files; caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug fixes: StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ...
    - SQL Monitor - tracking sql server activities: SQL Monitor 3.1 beta 1: 1. Support for alert message templates. 2. Dynamic toolbar commands depending on functionality. 3. Fixed some bugs. 4. Refactored part of the code; now more stable and cleaner.
    - Facebook C# SDK: 4.2.1: Authentication bug fixes; updated Json.Net to version 4.0.0; BREAKING CHANGE: removed the cookieSupport config setting, now automatic. This download is also available on NuGet: Facebook, FacebookWeb, FacebookWebMvc.
    - Umbraco CMS: Umbraco 4.6: The Umbraco 4.6 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and nearly 200 bug fixes since the 4.5.2 release. Improved installer experience; updated Starter Kits (Simple, Blog, Personal, Business); beautiful, free, customizable skins included; skinning engine and skin customization (see the Skinning Documentation Kit); default dashboards on install with hide option; updated login timeout ...
    - ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2: This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: multi-part geometries are now supported; homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry; mixed relations and super relations are maintained and tracked in a stand-alone relation table. The underlying editing logic has changed: as opposed to tracking the editing changes upon "Save edit" or "Stop edit", the changes a...
    - Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In case you are running x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5), as it appears Hawkeye is broken on x86 OSes. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (see issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...
    - Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with the latest PHP open-source applications, and several issue fixes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger. Changes made within this release include the following: new features available only in Phalanger; full support of Multi-Script-Assemblies was implemented, so you can build your application into several DLLs now and deploy them separately t...
    - EnhSim: EnhSim 2.3.0: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84. To use the GUI you must have the .NET 4.0 Framework installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992. Changed how flame shoc...
    - AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available. Shows a list of recent mastery files in the Editor Tab (requested by quite a few people). Updater: update information is now scrollable; added a button to launch AutoLoL after updating is finished; updated the UI to match that of AutoLoL. Fix: detects and resolves the 'Read Only' state on Version.xml.
    - TweetSharp: TweetSharp v2.0.0.0 - Preview 7: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: this code is currently preview quality. Preview 7 changes: fixes the regression issue in OAuth from Preview 6. Preview 6 changes: maintenance release with user-reported fixes. Preview 5 changes: maintenance release with user-reported fixes. Third-party library versions: Hammock v1.0.6: http://hammock.codeplex.com; Json.NET 3.5 Release 8: http://json.codeplex.com.
    - Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 release? BusyIndicator; ButtonSpinner; ChildWindow; ColorPicker - updated (breaking changes); DateTimeUpDown - new control; Magnifier - new control; MaskedTextBox - new control; MessageBox; NumericUpDown; RichTextBox; RichTextBoxFormatBar - updated; .NET 3.5 binaries and source. Please note: the Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...
    - sNPCedit: sNPCedit v0.9d: Added an elementclient coordinate catcher: to catch coordinates, select a target (in game), i.e. your char, an NPC or a monster, then click the button and the coordinates + direction will be transferred to the selected row in the table. Corrected labels from Rot to Direction (because it is a vector).
    - Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release; there are no new features. Fixed work items: 28629, 29172, 28722, 27626, 28074, 29164, 27659, 27900. Many documentation updates and fixes; proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.
    - VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.
    - UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5: Here is a version many have been eagerly waiting for: the PL3 KAKAROTO version integrates his latest modifications and now supports firmware 2.43!!! Conclusion: UltimateJB203PSXXXDEFAULTKAKAROTO => no spoof, but available for the following PS3 firmwares: 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43; UltimateJB203PS341_HERMES => no spoof, but the Hermes 4b version; UltimateJB203PS341HERMESSPOOF35X => Hermes 4b + spoofing of firmwares 3.50 and 3.55 ...
    - .NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lots of new extensions and new projects for MVC and Entity Framework: object.FindTypeByRecursion, Int32.InRange, String.RemoveAllSpecialCharacters, String.IsEmptyOrWhiteSpace, String.IsNotEmptyOrWhiteSpace, String.IfEmptyOrWhiteSpace, String.ToUpperFirstLetter, String.GetBytes, String.ToTitleCase, String.ToPlural, DateTime.GetDaysInYear, DateTime.GetPeriodOfDay, IEnumberable.RemoveAll, IEnumberable.Distinct, ICollection.RemoveAll, IList.Join, IList.Match, IList.Cast, Array.IsNullOrEmpty, Array.W...
    - EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5 - ASP.NET MVC 3 and EF Code First: Demo web app with ASP.NET MVC 3, Razor and EF Code First.
    - VidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (thanks, Statick). Allowing higher bitrates for 6-ch...

    New Projects

    - ASP.NET MVC Scaffolding: Scaffolding package for ASP.NET
    - Astor: OData Explorer: OData Explorer
    - Basic Users Community: A simple user community with threads and posts.
    - Bukkit Server Manager: BSM makes server managing easy; we have multiple type and database support, including MySql and SQLite; types: VPS, Dedicated, Home PC
    - Ch4CP: Chamber 4 control program
    - DotNetNuke Telerik Library: A set of Telerik wrappers for DotNetNuke module developers to utilize which aren't yet included as of 5.6.1. Eventually this will be offloaded to the core.
    - Enjoy Life: our fyp
    - FolderSizeChecker: It is supposed to check the size of big folders in a specific partition and help the user find the location with the most disk usage. (It's a simple project, so please don't expect big and complex algorithms.)
    - HomeTeamOnline: This is the project of HomeTeamOnline
    - ICSWorld: This is the project of ICSWorld
    - IMAP Client for .NET 4.0 using LumiSoft: Develop an IMAP client using this sample project, based on the LumiSoft .NET open source project. This project compiles in .NET 4.0 and demonstrates how to pull email using IMAP. The purpose of the project is email auto-processing.
    - MUIExt (Multilingual User Interface Extender): MUIExt makes it easier for SharePoint 2010 users to create multilingual sites. You'll no longer have to live with the MUI limitations or have to manage variations. It's developed in csharp.
    - Phoenix Service Bus: The goal of pServiceBus is to provide an API and service components that make implementing an ESB infrastructure in your environment easier. It's developed in C#, and also has an API written for JavaScript clients.
    - PhotoSnapper: Home project just to rename photos or .mov files in a folder, starting from a user-defined number.
    - redditfier: A Windows application to notify redditors of new posts.
    - SharePoint Field Updater: Automatically updates sub-fields according to a lookup field. For example: updating the field "Contact" will automatically put "Contact Email" and "Address" in the appropriate text fields.
    - TXLCMS: empty
    - Umbraco Spark engine: Spark macro engine for Umbraco
    - Urdu Translation: Urdu Translation Project
    - WFTestDesign: BizUnit WF is based on the BizUnit solution and allows the user to define a test using a WorkFlow UI, custom activities designed in this extension, and general Workflow activities. It also enables the use of breakpoints in tests. It's developed in C#.
    - WPF Date Range Slider: A WPF Date Range Slider user control written in C# to allow your users to choose a range of dates using a double-thumbed slider control.
    - WPMind Framework for WP7: This project provides some Windows Phone 7 controls for Windows Phone 7 Silverlight developers. Please join us if you are interested in this project.

    Read the article

  • Wpf vs WinForms for a vb programmer? [closed]

    - by Jeroen
    I have been asked by a client to develop an application that is basically a screen on which the user can choose several items to pass the time (used in holding cells in mental hospitals, for example). The basic idea is as follows:
    TV (choosing this will provide the user with a number of TV streams from the interweb)
    Radio (...)
    Games (several flash games, also from the interweb)
    Music (play local music or streams)
    Draw something (not the game)
    Create an email
    Choose lighting settings for the room
    etc. etc.
    I am torn between WinForms and WPF for this project. It seems that WPF is the way to go since there is quite a bit of rich media involved, but I have a 15-year VB background. The project obviously has a deadline and a certain budget that I cannot cross, and if I can avoid starting from scratch with something new, that would be nice. Is WPF worth it in this particular case, or can I use WinForms with the incorporation of WPF controls? I would very much like to hear your thoughts/comments/suggestions!
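    For reference, mixing the two is supported: a WinForms application can host WPF content through the ElementHost control in the WindowsFormsIntegration assembly (System.Windows.Forms.Integration namespace). A minimal sketch, assuming references to System.Windows.Forms, WindowsFormsIntegration, PresentationFramework, PresentationCore and WindowsBase; the TextBlock stands in for whatever WPF media control you would actually build:

        using System;
        using System.Windows.Controls;
        using System.Windows.Forms;
        using System.Windows.Forms.Integration;

        public class MainForm : Form
        {
            public MainForm()
            {
                Text = "WinForms shell, WPF content";
                // ElementHost bridges the two frameworks: any WPF UIElement can be its Child.
                var host = new ElementHost { Dock = DockStyle.Fill };
                host.Child = new TextBlock { Text = "A WPF control hosted in a WinForms form" };
                Controls.Add(host);
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new MainForm());
            }
        }

    This keeps the familiar WinForms shell while letting individual screens (video, games, drawing) be built as WPF controls one at a time.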

    Read the article

  • Unity not Working 14.04

    - by Back.Slash
    I am using Ubuntu 14.04 LTS x64. I did a sudo apt-get upgrade yesterday and restarted my PC. Now my taskbar and panel are missing. When I try to restart Unity using

        unity --replace

    I get this error output:

        unity-panel-service stop/waiting
        compiz (core) - Info: Loading plugin: core
        compiz (core) - Info: Starting plugin: core
        unity-panel-service start/running, process 3906
        compiz (core) - Info: Loading plugin: ccp
        compiz (core) - Info: Starting plugin: ccp
        compizconfig - Info: Backend : gsettings
        compizconfig - Info: Integration : true
        compizconfig - Info: Profile : unity
        compiz (core) - Info: Loading plugin: composite
        compiz (core) - Info: Starting plugin: composite
        compiz (core) - Info: Loading plugin: opengl
        compiz (core) - Info: Unity is fully supported by your hardware.
        compiz (core) - Info: Unity is fully supported by your hardware.
        compiz (core) - Info: Starting plugin: opengl
        libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/i965_dri.so failed (/usr/lib/x86_64-linux-gnu/dri/i965_dri.so: undefined symbol: _glapi_tls_Dispatch)
        libGL error: dlopen ${ORIGIN}/dri/i965_dri.so failed (${ORIGIN}/dri/i965_dri.so: cannot open shared object file: No such file or directory)
        libGL error: dlopen /usr/lib/dri/i965_dri.so failed (/usr/lib/dri/i965_dri.so: cannot open shared object file: No such file or directory)
        libGL error: unable to load driver: i965_dri.so
        libGL error: driver pointer missing
        libGL error: failed to load driver: i965
        libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so failed (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so: undefined symbol: _glapi_tls_Dispatch)
        libGL error: dlopen ${ORIGIN}/dri/swrast_dri.so failed (${ORIGIN}/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
        libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
        libGL error: unable to load driver: swrast_dri.so
        libGL error: failed to load driver: swrast
        compiz (core) - Info: Loading plugin: compiztoolbox
        compiz (core) - Info: Starting plugin: compiztoolbox
        compiz (core) - Info: Loading plugin: decor
        compiz (core) - Info: Starting plugin: decor
        compiz (core) - Info: Loading plugin: vpswitch
        compiz (core) - Info: Starting plugin: vpswitch
        compiz (core) - Info: Loading plugin: snap
        compiz (core) - Info: Starting plugin: snap
        compiz (core) - Info: Loading plugin: mousepoll
        compiz (core) - Info: Starting plugin: mousepoll
        compiz (core) - Info: Loading plugin: resize
        compiz (core) - Info: Starting plugin: resize
        compiz (core) - Info: Loading plugin: place
        compiz (core) - Info: Starting plugin: place
        compiz (core) - Info: Loading plugin: move
        compiz (core) - Info: Starting plugin: move
        compiz (core) - Info: Loading plugin: wall
        compiz (core) - Info: Starting plugin: wall
        compiz (core) - Info: Loading plugin: grid
        compiz (core) - Info: Starting plugin: grid
        compiz (core) - Info: Loading plugin: regex
        compiz (core) - Info: Starting plugin: regex
        compiz (core) - Info: Loading plugin: imgpng
        compiz (core) - Info: Starting plugin: imgpng
        compiz (core) - Info: Loading plugin: session
        compiz (core) - Info: Starting plugin: session
        I/O warning : failed to load external entity "/home/sumeet/.compiz/session/10de541a813cc1a8fc140170575114755000000020350005"
        compiz (core) - Info: Loading plugin: gnomecompat
        compiz (core) - Info: Starting plugin: gnomecompat
        compiz (core) - Info: Loading plugin: animation
        compiz (core) - Info: Starting plugin: animation
        compiz (core) - Info: Loading plugin: fade
        compiz (core) - Info: Starting plugin: fade
        compiz (core) - Info: Loading plugin: unitymtgrabhandles
        compiz (core) - Info: Starting plugin: unitymtgrabhandles
        compiz (core) - Info: Loading plugin: workarounds
        compiz (core) - Info: Starting plugin: workarounds
        compiz (core) - Info: Loading plugin: scale
        compiz (core) - Info: Starting plugin: scale
        compiz (core) - Info: Loading plugin: expo
        compiz (core) - Info: Starting plugin: expo
        compiz (core) - Info: Loading plugin: ezoom
        compiz (core) - Info: Starting plugin: ezoom
        compiz (core) - Info: Loading plugin: unityshell
        compiz (core) - Info: Starting plugin: unityshell
        WARN 2014-06-02 18:46:23 unity.glib.dbus.server GLibDBusServer.cpp:579 Can't register object 'org.gnome.Shell' yet as we don't have a connection, waiting for it...
        ERROR 2014-06-02 18:46:23 unity.debug.interface DebugDBusInterface.cpp:216 Unable to load entry point in libxpathselect: libxpathselect.so.1.4: cannot open shared object file: No such file or directory
        compiz (unityshell) - Error: GL_ARB_vertex_buffer_object not supported
        ERROR 2014-06-02 18:46:23 unity.shell.compiz unityshell.cpp:3850 Impossible to delete the unity locked stamp file
        compiz (core) - Error: Plugin initScreen failed: unityshell
        compiz (core) - Error: Failed to start plugin: unityshell
        compiz (core) - Info: Unloading plugin: unityshell
        X Error of failed request: BadWindow (invalid Window parameter)
          Major opcode of failed request: 3 (X_GetWindowAttributes)
          Resource id in failed request: 0x3e000c9
          Serial number of failed request: 10115
          Current serial number in output stream: 10116

    Any help would be highly appreciated.

    EDIT: My PC configuration:

        description: Portable Computer product: Dell System XPS L502X (System SKUNumber) vendor: Dell Inc. version: 0.1 serial: 1006ZP1 width: 64 bits capabilities: smbios-2.6 dmi-2.6 vsyscall32 configuration: administrator_password=unknown boot=normal chassis=portable family=HuronRiver System frontpanel_password=unknown keyboard_password=unknown power-on_password=unknown sku=System SKUNumber uuid=44454C4C-3000-1030-8036-B1C04F5A5031
        *-core description: Motherboard product: 0YR8NN vendor: Dell Inc. physical id: 0 version: A00 serial: .1006ZP1.CN4864314C0560. slot: Part Component
        *-firmware description: BIOS vendor: Dell Inc. physical id: 0 version: A11 date: 05/29/2012 size: 128KiB capacity: 2496KiB capabilities: pci pnp upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy360 int13floppy1200 int13floppy720 int5printscreen int9keyboard int14serial int17printer int10video acpi usb ls120boot smartbattery biosbootspecification netboot
        *-cpu description: CPU product: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz vendor: Intel Corp. physical id: 19 bus info: cpu@0 version: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz serial: Not Supported by CPU slot: CPU size: 800MHz capacity: 800MHz width: 64 bits clock: 100MHz capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: cores=4 enabledcores=4 threads=8
        *-cache:0 description: L1 cache physical id: 1a slot: L1-Cache size: 64KiB capacity: 64KiB capabilities: synchronous internal write-through data
        *-cache:1 description: L2 cache physical id: 1b slot: L2-Cache size: 256KiB capacity: 256KiB capabilities: synchronous internal write-through data
        *-cache:2 description: L3 cache physical id: 1c slot: L3-Cache size: 6MiB capacity: 6MiB capabilities: synchronous internal write-back unified
        *-memory description: System Memory physical id: 1d slot: System board or motherboard size: 6GiB
        *-bank:0 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: M471B5273DH0-CH9 vendor: Samsung physical id: 0 serial: 450F1160 slot: ChannelA-DIMM0 size: 4GiB width: 64 bits clock: 1333MHz (0.8ns)
        *-bank:1 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: HMT325S6BFR8C-H9 vendor: Hynix/Hyundai physical id: 1 serial: 0CA0E8E2 slot: ChannelB-DIMM0 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns)
        *-pci description: Host bridge product: 2nd Generation Core Processor Family DRAM Controller vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 09 width: 32 bits clock: 33MHz
        *-pci:0 description: PCI bridge product: Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port vendor: Intel Corporation physical id: 1 bus info: pci@0000:00:01.0 version: 09 width: 32 bits clock: 33MHz capabilities: pci pm msi pciexpress normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:3000(size=4096) memory:f0000000-f10fffff ioport:c0000000(size=301989888)
        *-generic UNCLAIMED description: Unassigned class product: Illegal Vendor ID vendor: Illegal Vendor ID physical id: 0 bus info: pci@0000:01:00.0 version: ff width: 32 bits clock: 66MHz capabilities: bus_master vga_palette cap_list configuration: latency=255 maxlatency=255 mingnt=255 resources: memory:f0000000-f0ffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:3000(size=128) memory:f1000000-f107ffff
        *-display description: VGA compatible controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:52 memory:f1400000-f17fffff memory:e0000000-efffffff ioport:4000(size=64)
        *-communication description: Communication controller product: 6 Series/C200 Series Chipset Family MEI Controller #1 vendor: Intel Corporation physical id: 16 bus info: pci@0000:00:16.0 version: 04 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: driver=mei_me latency=0 resources: irq:50 memory:f1c05000-f1c0500f
        *-usb:0 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 vendor: Intel Corporation physical id: 1a bus info: pci@0000:00:1a.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci-pci latency=0 resources: irq:16 memory:f1c09000-f1c093ff
        *-multimedia description: Audio device product: 6 Series/C200 Series Chipset Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 05 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:53 memory:f1c00000-f1c03fff
        *-pci:1 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode cap_list configuration: driver=pcieport resources: irq:16
        *-pci:2 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 2 vendor: Intel Corporation physical id: 1c.1 bus info: pci@0000:00:1c.1 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:17 memory:f1b00000-f1bfffff
        *-network description: Wireless interface product: Centrino Wireless-N 1030 [Rainbow Peak] vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: mon.wlan0 version: 34 serial: bc:77:37:14:47:e5 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical configuration: broadcast=yes driver=iwlwifi driverversion=3.13.0-27-generic firmware=18.168.6.1 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:51 memory:f1b00000-f1b01fff
        *-pci:3 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 4 vendor: Intel Corporation physical id: 1c.3 bus info: pci@0000:00:1c.3 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:19 memory:f1a00000-f1afffff
        *-usb description: USB controller product: uPD720200 USB 3.0 Host Controller vendor: NEC Corporation physical id: 0 bus info: pci@0000:04:00.0 version: 04 width: 64 bits clock: 33MHz capabilities: pm msi msix pciexpress xhci bus_master cap_list configuration: driver=xhci_hcd latency=0 resources: irq:19 memory:f1a00000-f1a01fff
        *-pci:4 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 5 vendor: Intel Corporation physical id: 1c.4 bus info: pci@0000:00:1c.4 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:16 memory:f1900000-f19fffff
        *-pci:5 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 6 vendor: Intel Corporation physical id: 1c.5 bus info: pci@0000:00:1c.5 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:17 ioport:2000(size=4096) ioport:f1800000(size=1048576)
        *-network description: Ethernet interface product: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: 06 serial: 14:fe:b5:a3:ac:40 size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=172.19.167.151 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s resources: irq:49 ioport:2000(size=256) memory:f1804000-f1804fff memory:f1800000-f1803fff
        *-usb:1 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci-pci latency=0 resources: irq:23 memory:f1c08000-f1c083ff
        *-isa description: ISA bridge product: HM67 Express Chipset Family LPC Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 05 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: driver=lpc_ich latency=0 resources: irq:0
        *-ide:0 description: IDE interface product: 6 Series/C200 Series Chipset Family 4 port SATA IDE Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 05 width: 32 bits clock: 66MHz capabilities: ide pm bus_master cap_list configuration: driver=ata_piix latency=0 resources: irq:19 ioport:40b8(size=8) ioport:40cc(size=4) ioport:40b0(size=8) ioport:40c8(size=4) ioport:4090(size=16) ioport:4080(size=16)
        *-serial UNCLAIMED description: SMBus product: 6 Series/C200 Series Chipset Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 05 width: 64 bits clock: 33MHz configuration: latency=0 resources: memory:f1c04000-f1c040ff ioport:efa0(size=32)
        *-ide:1 description: IDE interface product: 6 Series/C200 Series Chipset Family 2 port SATA IDE Controller vendor: Intel Corporation physical id: 1f.5 bus info: pci@0000:00:1f.5 version: 05 width: 32 bits clock: 66MHz capabilities: ide pm bus_master cap_list configuration: driver=ata_piix latency=0 resources: irq:19 ioport:40a8(size=8) ioport:40c4(size=4) ioport:40a0(size=8) ioport:40c0(size=4) ioport:4070(size=16) ioport:4060(size=16)
        *-scsi:0 physical id: 1 logical name: scsi0 capabilities: emulated
        *-disk description: ATA Disk product: SAMSUNG HN-M640M physical id: 0.0.0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 2AR1 serial: S2T3J1KBC00006 size: 596GiB (640GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 sectorsize=512 signature=6b746d91
        *-volume:0 description: Windows NTFS volume physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 version: 3.1 serial: 0272-3e7f size: 348MiB capacity: 350MiB capabilities: primary bootable ntfs initialized configuration: clustersize=4096 created=2013-09-18 12:20:45 filesystem=ntfs label=System Reserved modified_by_chkdsk=true mounted_on_nt4=true resize_log_file=true state=dirty upgrade_on_mount=true
        *-volume:1 description: Extended partition physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 size: 116GiB capacity: 116GiB capabilities: primary extended partitioned partitioned:extended
        *-logicalvolume:0 description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 6037MiB capabilities: nofs
        *-logicalvolume:1 description: Linux filesystem partition physical id: 6 logical name: /dev/sda6 logical name: / capacity: 110GiB configuration: mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,data=ordered state=mounted
        *-volume:2 description: Windows NTFS volume physical id: 3 bus info: scsi@0:0.0.0,3 logical name: /dev/sda3 logical name: /media/os version: 3.1 serial: 4e7853ec-5555-a74d-82e0-9f49798d3772 size: 156GiB capacity: 156GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2013-09-19 09:19:00 filesystem=ntfs label=OS mount.fstype=fuseblk mount.options=ro,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 state=mounted
        *-volume:3 description: Windows NTFS volume physical id: 4 bus info: scsi@0:0.0.0,4 logical name: /dev/sda4 logical name: /media/data version: 3.1 serial: 7666d55f-e1bf-e645-9791-2a1a31b24b9a size: 322GiB capacity: 322GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2013-09-17 23:27:01 filesystem=ntfs label=Data modified_by_chkdsk=true mount.fstype=fuseblk mount.options=rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 mounted_on_nt4=true resize_log_file=true state=mounted upgrade_on_mount=true
        *-scsi:1 physical id: 2 logical name: scsi1 capabilities: emulated
        *-cdrom description: DVD-RAM writer product: DVD+-RW GT32N vendor: HL-DT-ST physical id: 0.0.0 bus info: scsi@1:0.0.0 logical name: /dev/cdrom logical name: /dev/sr0 version: A201 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=nodisc
        *-battery product: DELL vendor: SANYO physical id: 1 version: 2008 serial: 1.0 slot: Rear capacity: 57720mWh configuration: voltage=11.1V

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this.

    Unit Testing Entity Framework Dependent Code using TypeMock Isolator
    by Muhammad Mosa

    Introduction
    Unit testing data access code is, in my opinion, a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies, such as a physical database connection to insert, update, delete or retrieve your data. However, when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward, but the version of Entity Framework released with .Net 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult.

    Faking Entity Framework
    As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this, but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include:
    Not being able to fake calls to non-virtual methods
    Not being able to fake sealed classes
    Not being able to fake LINQ to Entities queries (replacing database calls with in-memory collection calls)
    There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I'm going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator.

    Mocking Entity Framework with MoQ
    One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing any connection string. You have to pass a correct Entity Framework connection string that specifies CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we'll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (an instance of a Blog entity) to the Blogs entity set in an ObjectContext.

        public override void Add(Blog blog)
        {
            if (BlogContext.Blogs.Any(b => b.Name == blog.Name))
            {
                throw new InvalidOperationException("Blog with same name already exists!");
            }
            BlogContext.AddToBlogs(blog);
        }

    The method does a very simple check that the name of the new Blog entity instance doesn't already exist. This is done through the simple LINQ query above. If the blog doesn't already exist, it simply adds it to the current context, to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an exception (InvalidOperationException) will be thrown. Let us now create a unit test for the Add method using MoQ.

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists()
        {
            //(1) We shouldn't depend on configuration when doing unit tests! But,
            //it's a workaround to fake the ObjectContext
            string connectionString = ConfigurationManager
                                        .ConnectionStrings["MyBlogConnString"]
                                        .ConnectionString;

            //(2) Arrange: Fake ObjectContext
            var fakeContext = new Mock<MyBlogContext>(connectionString);

            //(3) Next line will pass, as ObjectContext can now be faked with a proper connection string
            var repo = new BlogRepository(fakeContext.Object);

            //(4) Create fake ObjectQuery<Blog>. Will be used to substitute MyBlogContext.Blogs property
            var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object);

            //(5) Arrange: Set Expectations
            //Next line will throw an exception from MoQ:
            //System.ArgumentException: Invalid setup on a non-overridable member
            fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);
            fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

            //Act
            repo.Add(new Blog { Name = "NewBlog" });
        }

    This test method checks that the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that already exists. On (1) a connection string is initialized from the configuration file, to retrieve the full connection string. On (2) a fake ObjectContext is created. The ObjectContext here is MyBlogContext, and it is created using:

        var fakeContext = new Mock<MyBlogContext>(connectionString);

    This way a fake context is created using MoQ. On (3) a BlogRepository instance is created. BlogRepository has a dependency on the generated Entity Framework ObjectContext, MyBlogContext, and so the fake context is passed to the constructor:

        var repo = new BlogRepository(fakeContext.Object);

    On (4) a fake instance of ObjectQuery<Blog> is created to use as a substitute for the MyBlogContext.Blogs property, as we will see in (5). On (5) we set up an expectation for calling the Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created on (4). When you run this test it will fail, with MoQ throwing an exception because of this line:

        fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);

    This happens because the generated property MyBlogContext.Blogs is not virtual/overridable. And even assuming it is virtual, or you managed to make it virtual, it will fail at the following line, throwing the same exception:

        fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

    This time the test will fail because the Any extension method is not virtual/overridable. You won't be able to replace ObjectQuery<Blog> with a fake in-memory collection to test your LINQ to Entities queries. Now let's see how replacing MoQ with TypeMock Isolator can help.

    Mocking Entity Framework with TypeMock Isolator
    The following is the same test method we had above for MoQ, but this time implemented using TypeMock Isolator:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException()
        {
            //(1) Create fake in-memory collection of blogs
            var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "FakeBlog" } };

            //(2) Create fake context
            var fakeContext = Isolate.Fake.Instance<MyBlogContext>();

            //(3) Setup expected call to MyBlogContext.Blogs property through the fake context
            Isolate.WhenCalled(() => fakeContext.Blogs)
                   .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());

            //(4) Create new blog with a name that already exists in the fake in-memory collection in (1)
            var blog = new Blog { Name = "FakeBlog" };

            //(5) Instantiate instance of BlogRepository (class under test)
            var repo = new BlogRepository(fakeContext);

            //(6) Act by adding the newly created blog
            repo.Add(blog);
        }

    When running the above test method it will pass, as the Add method of BlogRepository is going to throw an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string. On (3) Isolator sets up a fake result for MyBlogContext.Blogs when it is called through the fake instance fakeContext created on (2). The fake result is just an in-memory collection declared and initialized on (1). Finally at (6) we act by calling the Add method of BlogRepository, passing a new Blog instance with a name that already exists in the fake in-memory collection which we set up at (1). As expected, the test will pass because it will throw the expected exception defined on top of the test method - InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease.

    Conclusion
    We explored how to write a simple unit test using TypeMock Isolator for code which is using Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mocks; however, the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks, take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post.

    Muhammad Mosa
    http://mosesofegypt.net/
    http://twitter.com/mosessaur

    Screencast of unit testing Entity Framework

    Related Links
    GuestPost: Introduction to Mocking
    GuestPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for the protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'd be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance
    If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type, you'll see that only those databases that are in Full Recovery will be listed. This makes it very easy to be sure you have a log backup set up for all the databases that need one, and for none of the databases where you won't be able to take one. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks; you can see two in the screen above and more in each of the screens in other topics below this one. Clicking on these will open a window with additional information about the topic in question, which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs.

    Backup Copies
    As part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup to the network location you define after the backup completes, so it doesn't cause any additional blocking or resource use within the backup process. Creating a copy acts as a mechanism of protection for your backups. You can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get done within the native Microsoft backup engine.

    Offsite Storage
    Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard, and even into the command line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but it's built right into the product to get this done. You'll be directed to the web site for our hosted storage, where you can set up an account.

    Compression
    If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater with a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can sacrifice some CPU time and get even more compression. You decide. For a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53 MB. Our file was 33 MB. That's a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system. So for this test, if you wanted maximum compression with minimum CPU use, you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but will use fewer resources. And that compression is still better than the native one by 10%.

    Restore Testing
    Backups are vital. But a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't do a complete test of the backup. The only complete test is a restore. So what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule (all done through our wizard, which gives you as much guidance as you get when running backups), you get the option of creating a reminder to create a job to test your restores. You can enable or disable this as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide if you're going to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice), you can just overwrite the original database if it's there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run: which error messages to include, and whether to limit or add to the checks being run. With this you could offload some DBCC checks from your production system, so that you only run the physical checks on your production box but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of whether the tests pass. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don't just have backups, but that you have tested backups.

    Single Point of Management
    If you have more than one server to maintain, getting backups set up could be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases and all your servers' backups from a single location. You'll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard
    If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary
    You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • Confusion for mime files: magic, magic.mgc, magic.mime

    - by Florence Foo
    I'm using Ubuntu. I'm trying to use the ruby gem 'shared-mime-info' for an application I'm writing. I understand that magic.mgc is a compiled version of the magic file, which holds the magic-number definitions for the different file types. BUT I don't understand why /usr/share/mime/magic is in a binary format instead of a normal text file with parameters separated by white space, like everywhere else I've seen this file referenced on the internet. /usr/share/mime/magic has the word 'MIME-Magic' at the beginning of the file and then prioritized entries like:

    [100:application/vnd.scribus]
    >1=^@^KSCRIBUSUTF8
    [90:application/vnd.stardivision.writer]
    >2089=^@

    So it doesn't look like magic.mgc at all. shared-mime-info seems to want a magic file in this binary, non-compiled format, and I wanted to add a definition for DOCX. But how does one update or generate this file without using a hex editor? There is a reference to the magic file format at: http://standards.freedesktop.org/shared-mime-info-spec/shared-mime-info-spec-latest.html It mentions this file is updated with update-mime-database, but what if I just want to add a new entry to it? A hex editor? Anyway, I ended up using hexer to make a new magic file in ~/.local/share/mime/ with only the entry I wanted to add and the MIME-Magic header. It seems to work (assuming all I ever deal with is docx, for now).

    00000000: 4d 49 4d 45 2d 4d 61 67 69 63 00 0a 5b 36 30 3a  MIME-Magic..[60:
    00000010: 61 70 70 6c 69 63 61 74 69 6f 6e 2f 76 6e 64 2e  application/vnd.
    00000020: 6f 70 65 6e 78 6d 6c 66 6f 72 6d 61 74 73 2d 6f  openxmlformats-o
    00000030: 66 66 69 63 65 64 6f 63 75 6d 65 6e 74 2e 77 6f  fficedocument.wo
    00000040: 72 64 70 72 6f 63 65 73 73 69 6e 67 6d 6c 2e 64  rdprocessingml.d
    00000050: 6f 63 75 6d 65 6e 74 5d 0a 3e 30 3d 00 08 50 4b  ocument].>0=..PK
    00000060: 03 04 14 00 06 00 0a -- -- -- -- -- -- -- -- --  .......---------
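    Since the binary layout turns out to be simple (it is exactly what the hex dump above shows), a short script can build the file instead of a hex editor. A hedged Python sketch that reproduces that docx entry byte for byte; the priority of 60 and the eight-byte PK signature are taken from the dump:

        import struct
        from pathlib import Path

        mime = b"application/vnd.openxmlformats-officedocument.wordprocessingml.document"
        signature = b"PK\x03\x04\x14\x00\x06\x00"  # docx is a zip; this is its header

        data = b"MIME-Magic\0\n"                    # file header
        data += b"[60:" + mime + b"]\n"             # priority 60, target MIME type
        # ">0=" means match at offset 0; the two bytes after "=" are the
        # big-endian length of the value that follows.
        data += b">0=" + struct.pack(">H", len(signature)) + signature + b"\n"

        out = Path.home() / ".local/share/mime/magic"
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_bytes(data)

    The spec's intended route is still to drop an XML definition under ~/.local/share/mime/packages/ and run update-mime-database ~/.local/share/mime, which regenerates this magic file for you.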

    Read the article

  • How do I align my partition table properly?

    - by Jorge Castro
    I am in the process of building my first RAID5 array. I've used mdadm to create the following set up:

    root@bondigas:~# mdadm --detail /dev/md1
    /dev/md1:
            Version : 00.90
      Creation Time : Wed Oct 20 20:00:41 2010
         Raid Level : raid5
         Array Size : 5860543488 (5589.05 GiB 6001.20 GB)
      Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 1
        Persistence : Superblock is persistent
        Update Time : Wed Oct 20 20:13:48 2010
              State : clean, degraded, recovering
     Active Devices : 3
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 1
             Layout : left-symmetric
         Chunk Size : 64K
     Rebuild Status : 1% complete
               UUID : f6dc829e:aa29b476:edd1ef19:85032322 (local to host bondigas)
             Events : 0.12

        Number   Major   Minor   RaidDevice   State
           0       8       16        0        active sync   /dev/sdb
           1       8       32        1        active sync   /dev/sdc
           2       8       48        2        active sync   /dev/sdd
           4       8       64        3        spare rebuilding   /dev/sde

    While that's going I decided to format the beast with the following command:

    root@bondigas:~# mkfs.ext4 /dev/md1p1
    mke2fs 1.41.11 (14-Mar-2010)
    /dev/md1p1 alignment is offset by 63488 bytes.
    This may result in very poor performance, (re)-partitioning suggested.
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=16 blocks, Stripe width=48 blocks
    97853440 inodes, 391394047 blocks
    19569702 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=0
    11945 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
            2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
            78675968, 102400000, 214990848
    Writing inode tables: ^C 27/11945
    root@bondigas:~# ^C

    I am unsure what to do about "/dev/md1p1 alignment is offset by 63488 bytes." and how to properly partition the disks to match, so I can format the array properly.
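    As an aside, the Stride and Stripe width figures in that mkfs output fall straight out of the array geometry. A quick sketch of the arithmetic, as an illustration only, using the numbers from the mdadm output above:

        # Derive ext4 stride/stripe-width from the RAID5 geometry reported above.
        chunk_kib = 64                  # "Chunk Size : 64K" from mdadm --detail
        block_kib = 4                   # ext4 block size of 4096 bytes
        raid_devices = 4
        data_disks = raid_devices - 1   # RAID5 spends one disk's worth on parity

        stride = chunk_kib // block_kib      # 16 blocks -- matches "Stride=16"
        stripe_width = stride * data_disks   # 48 blocks -- matches "Stripe width=48"

        print("mkfs.ext4 -E stride=%d,stripe-width=%d /dev/md1"
              % (stride, stripe_width))

    As for the alignment warning itself, the usual advice is either to skip the partition table entirely and run mkfs.ext4 against /dev/md1 itself, or to recreate the partition so it starts on a multiple of the 64K chunk size.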

    Read the article

  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity, but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in a <table> for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically added content make it harder both to style the Web controls using CSS and to work with the controls from client-side script. One of the aims of ASP.NET version 4.0 is to give Web Forms developers greater control over the markup rendered by Web controls. Last week's article, Take Control Of Web Control ClientID Values in ASP.NET 4.0, highlighted how new properties in ASP.NET 4.0 give the developer more say over how a Web control's ID property is translated into a client-side id attribute. In addition to these ClientID-related properties, many Web controls in ASP.NET 4.0 include properties that allow the page developer to instruct the control not to emit extraneous markup, or to use an HTML element other than <table>. This article explores a number of enhancements made to the data Web controls in ASP.NET 4.0. As you'll see, most of these enhancements give the developer greater control over the rendered markup. Read on to learn more! Read More >

    Read the article

  • Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E

    - by Sebastian Bugiu
    I had Ubuntu 11.10, and in the last few weeks I experienced an obscure problem: after the computer had been running for a few days, I could no longer connect to google.com or anything related to Google. All sites worked in all browsers (Firefox, Chrome, Opera) except Google. It remained in the connecting phase for a few minutes and either timed out or finally connected with this huge delay. Even if I entered other sites, such as this one, if the page had anything to do with Google, such as AdSense or gstatic or whatever with a g in it, that site took a long time to load, waiting on connecting to gstatic.com. Anything Google-related took minutes to work, but everything else worked instantly! I tried rebooting, and tried another machine (with Windows on it), which worked, so it's not network related. So I upgraded to Precise Pangolin, hoping this behavior would go away. It didn't! After a few days I get the same behavior as in 11.10. What am I supposed to do? Reboot every other day? I didn't have this problem with either 10.10 or 11.04. I found the Realtek RTL8168/8111E issue with the r8169 driver, but this is not exactly the same card, so trying r8168 probably won't help.

    Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
            Subsystem: Toshiba America Info Systems Device ff1c
            Flags: bus master, fast devsel, latency 0, IRQ 44
            I/O ports at 4000 [size=256]
            Memory at d0010000 (64-bit, prefetchable) [size=4K]
            Memory at d0000000 (64-bit, prefetchable) [size=64K]
            Capabilities: [40] Power Management version 7
            Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Endpoint, MSI 01
            Capabilities: [ac] MSI-X: Enable- Count=2 Masked-
            Capabilities: [cc] Vital Product Data
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Virtual Channel
            Capabilities: [160] Device Serial Number 09-00-00-00-ff-ff-00-00
            Kernel driver in use: r8169
            Kernel modules: r8169

    Read the article

  • Using mod_rewrite for a RESTful api

    - by razass
    Say the user is making a request to the following URL: "http://api.example.com/houses/123/abc". That request needs to map to "/webroot/index.php", and 'houses', '123', 'abc' need to be parsed out of the URL in that index.php. The rewrite also can't alter the HTTP headers or body. There can be any number of segments after the domain, e.g. "http://api.example.com/houses/1234/abc/zxy/987". I think I already have all requests being sent to webroot using:

    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond $0 !^webroot/
        RewriteRule .* webroot/$0 [L]
    </IfModule>

    This appears to be working, but I am not sure if it is correct. Now I am at a loss as to how to take the next step, as mentioned above. Thanks in advance!
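    For what it's worth, a common front-controller setup routes everything that is not an existing file or directory to index.php and leaves headers and body untouched, since the rewrite is internal. A hedged sketch, not tested against this exact layout:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            # Let real files and directories through; everything else
            # goes to the front controller.
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^ webroot/index.php [L]
        </IfModule>

    Inside index.php, the original path survives the internal rewrite in $_SERVER['REQUEST_URI'], so 'houses', '123', 'abc' can be recovered by trimming the leading slash and splitting on '/'.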

    Read the article

  • CRM@Oracle Series: Sales Pipeline Visibility

    - by tony.berk
    Can you see it? Should you see it all? Who else should see it? Can their manager see it? What is it? Today's slidecast discusses the popular topic of sales pipeline visibility within a CRM application and how Oracle implemented visibility in our global implementation of Siebel CRM. This post is the next in the CRM@Oracle series, which discusses Oracle's internal use of Oracle CRM products such as Siebel CRM and Oracle CRM On Demand. Oracle's requirements include a variety of different organizations, roles, and responsibilities. Oracle's Applications IT CRM Systems team, responsible for deploying Siebel CRM within Oracle, implemented a number of creative solutions to address the requirements, and they are shared in the slidecast. CRM@Oracle - Sales Pipeline Visibility Click here to learn more about Oracle CRM products and here to learn about other customers using Oracle CRM products. We want to hear from you! If there's a particular CRM area or function you'd like to hear how Oracle implemented internally, let us know and we'll get it on our list. In the meantime, enjoy today's update in the CRM@Oracle series.

    Read the article

  • Incrementing Assembly Version in TFS Builds and its Effect on Other Build Definitions

    - by ssmantha
    A very common scenario when performing TFS builds is to increment the version number of the assemblies. There are quite a few approaches, of which I would like to share two links:

    Ewald Hofman's approach: http://www.ewaldhofman.nl/post/2010/05/13/Customize-Team-Build-2010-e28093-Part-5-Increase-AssemblyVersion.aspx#id_02e7b082-ce95-49a9-92e9-7dc88887b377
    Richard Banks' approach: http://www.richard-banks.org/2010/07/how-to-versioning-builds-with-tfs-2010.html

    Both these approaches work well; however, there are scenarios where editing and checking in the assembly version information can create problems for build definitions meant for continuous integration or gated check-ins. You can suppress continuous integration builds while checking in the assembly info file by putting the comment "***NO_CI***" in the check-in comment, as specified by Ewald in his blog. However, if you have gated check-in in place, this can be difficult to suppress. I tried to suppress the build trigger during the check-in process myself, but things didn't turn out well. That's where Richard's solution comes in handy. Both solutions have their own pros and cons, which I believe can only be experienced over a period of time. With Richard's solution, I believe we keep no history of the assembly version info file, and when you take the latest version of the solution, that information is lost. If you look closely, suppressing the continuous integration builds (the NO_CI approach in check-in comments) is a workaround provided by Microsoft; however, I haven't found anything to suppress gated check-ins so far. Suggestions or findings are most welcome.
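    The core of the build-time stamping idea fits in a few lines. This is a hedged sketch, not Richard's actual script: it rewrites AssemblyFileVersion in the build agent's working copy only, so nothing is checked in and neither CI nor gated check-in builds are ever triggered. The file name and version format are assumptions:

        import re
        import sys
        from pathlib import Path

        def stamp_version(assembly_info, build_number):
            # Rewrite AssemblyFileVersion in place on the build agent's local
            # copy. Because the change is never checked in, there is no
            # ***NO_CI*** comment to add and no gated check-in to fight.
            path = Path(assembly_info)
            text = path.read_text()
            text = re.sub(r'AssemblyFileVersion\("[\d.]+"\)',
                          'AssemblyFileVersion("%s")' % build_number,
                          text)
            path.write_text(text)

        if __name__ == "__main__":
            # e.g. python stamp_version.py Properties/AssemblyInfo.cs 1.0.123.0
            stamp_version(sys.argv[1], sys.argv[2])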

    Read the article

  • Windows 7 KSOD On Login

    - by Brandon Bertelsen
    For those who are unaware, KSOD means blacK Screen Of Death. Essentially, when Windows starts, my computer shows only the cursor and a black screen. It seems like any and all shell elements are disabled (or perhaps not started). I have seen a number of these questions asked, none of which have matched my situation.

    1. CTRL + ALT + ... does not respond
    2. Restarting in safe mode results in the same KSOD
    3. sfc /scannow seems to have no effect when typed at the command prompt that is accessed using the recovery tools via the install disk

    Update to item 3: sfc /scannow reports: "There is a system repair pending which requires reboot to complete. Restart Windows and run sfc again." However, Windows does not restart past the KSOD.

    Update to item 3, as per Soandos' comment re: /offbootdir:

    sfc /scannow /offbootdir=e:\ /offwindir=e:\windows

    "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.log..."

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full-blown help desks can be crafted up in SharePoint just by customizing lists. No programming necessary. However, there are a few tricks I've painfully learned over the years that you can use for your own solutions.

    DO

    What's In A Name?

    When you create a new list, column, or view, you'll commonly name it something like "Expense Reports". However, this has the ugly effect of creating a URL to the list as "Expense%20Reports". Or worse, an internal field name of "Expense_x0020_Reports", which is not only cryptic but hard to remember when you're trying to find the column by internal name. While "Expense Reports 2011" is user friendly, "ExpenseReports2011" is not (unless you're a programmer). So that's not the solution. Well, not entirely. Instead, when you create your column or list or view, use the scrunched-up name (I can't think of the technical term for it right now) of "ExpenseReports2011", "WomenAtTheOfficeThatAreMen" or "KoalaMeatIsGoodWhenBroiled". After you've created it, go back and change the name to the more friendly "Silly Expense Reports That Nobody Reads". The original internal name will be the URL- and code-friendly one without spaces, while the one used on data entry forms and view headers will be the human version.

    Smart Columns

    When building a view, include columns that make sense. By default, when you add a column the "Add to default view" box is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense for what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what's important to them and why. If they're doing something repetitively as part of their job, try to make their life easier by including what's most important to them. Do they really need to see the Created *and* Modified date of a document, or do they just need the title and author? You'll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don't ask).

    Gotta Keep It Separated

    Hey, views are there for a reason. Use them. While "All Items" is a fine way to present a list of, well, all items, it's hardly sufficient to present a list of servers built before the Y2K bug hit. You'll be scrolling the list for hours, finally arriving at page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They're the way you can slice and dice the data up so that you're not trying to consume 3,000 years of human evolution on a single web page. Remember, views can be filtered, so feel free to create a view for each status, or one for each operating system, or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it'll also make the information that much more relevant. Also remember that each view is a separate page, so use that in navigation by creating a menu on the Quick Launch to each view. The Views menu isn't terribly discoverable, and if you violate the rule of columns (see Horizontal Scrolling below) the view menu doesn't even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, or a screaming, flashing "CLICK ME NOW" will help your users find their way.

    Sort It!

    Views are great, so we're building nice, rich views for the user. Awesomesauce. However, sort order is not very discoverable by the user. For example, when you're looking at a view, how do you know if it's ascending or descending, and what it's sorted on? Maybe it's sorted using two fields, so what's that all about? Help your users by letting them know the information they're looking at is sorted. Maybe you name the view something appropriate like "Bogus Expense Claims Sorted By Deadbeats". If you use the naming strategy, just make sure you keep the name consistent with the description. In the previous example there had better be a Deadbeat column so I can see the sort in action. Having a "Loser" column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don't use acronyms, and even if they knew how to, it's not immediately obvious to them that's what you're trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly what view they're looking at. Each view is its own page, so one CEWP won't be used across the board. Be descriptive in what the user is seeing, but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list.

    DO NOT

    Useless Attachments

    The attachments column is, in a word, useless. For the most part. Sure, it indicates there's an attachment on the list item, but in the grand scheme of things that's not overly informative. Maybe it is, and by all means, if it makes sense to you, include it. Colour it. Make it shine and stand out like the Return of Clippy on every SharePoint list. Without being functional, though, it's just boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful, so feel free to head over and check out this blog post by Paul Grenier on the task (warning: code ahead! Danger, Will Robinson!). In any case, I would suggest you remove it from your views. Again, if it's important then include it, but consider the jQuery solution above to make it functional. It's added to views by default and is one of the things people forget to clean up.

    Horizontal Scrolling

    Screen real estate is at a premium, so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn't the most user-friendly experience. Most users can't figure out how to scroll vertically, let alone horizontally, so don't make it even more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again, views are your friend. Consider splitting up the data into views where one view contains 10 columns and another view contains the other 10. Okay, maybe your information doesn't work that way, but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don't want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user, "So what do you do with all this information?" The response is usually "With this data [the first 10 columns] I decide if I'm going to fire everyone, and with this data [the next 10 columns] I decide if I'm going to set the building on fire and collect the insurance". It's at that point I show them how to create two new views: "People Who Are About To Get The Axe" and "Beach Time For The Executives". Again, talk to your users and try to reason with them on cutting down the number of columns they see at once.

    Vertical Scrolling

    Another big faux pas I find is the use of multi-line comment fields in views. It's not so bad when you have a statement like this in your view: "I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now." It's fine if that's the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to click through to the item if they need to see the contents.

    Duplicate Information

    Duplication is never good. Views are great, as you can group data together. For example, create a view of project status reports grouped by author. Then you can see which project manager is being a dip and not submitting their report. However, if you group by author, do you really need the Created By field in the view as well? Or if the view is grouped by Project then Author, do you need both? Horizontal real estate is always at a premium, so try not to clutter up the view with duplicate data like this. Oh yeah, if you're scratching your head saying "But Bil, if I don't include the Project name in the view and I have a lot of items, then how do I know which one I'm looking at?", that's a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again, it's just managing the information you have.

    Redundant, See Redundant

    This partially relates to duplicate information and smart columns, but basically: remember not to include the obvious in a view. Remember, don't make me think. If you've gone to the trouble (and it was a lot of trouble, wasn't it?) to create separate views of your data by creating "September Zombie Brain Sales", "October Zombie Brain Sales", etc., then please, for the love of all that is holy, do not include the Month and Product columns in your view. Similarly, if you create a "My" view of anything ("My Favourite Brands of Spandex", "My Co-Workers I Find The Urge To Disinfect"), then again, do not include the owner or author field (or whatever field you use to identify "My"). That's just silly.

    Hope that helps! Happy customizing!

    Read the article

  • Unix : `nc` on Ubuntu vs Redhat (netcat)

    - by bguiz
    How do I achieve the equivalent of nc -q in Red Hat 5?

    Details:

    nc -q -1 localhost ${PORT} ${CMD}

    On Ubuntu, nc may be used as above, with the -q option. See the manpage:

    -q  after EOF on stdin, wait the specified number of seconds and then quit. If seconds is negative, wait forever.

    However, this option is not available in Red Hat 5. See its manpage. (Note: the Red Hat website is horrible to search; if someone finds their manpage for nc, please edit this post or comment with the new link.)
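    If patching or replacing nc isn't an option, a few lines of Python can approximate nc -q -1: send stdin, then keep reading from the socket until the server closes. A rough sketch, written in Python 2 syntax to match what RHEL 5 ships with:

        #!/usr/bin/env python
        # Approximates `nc -q -1 host port`: after EOF on stdin, stay
        # connected and keep printing whatever the server sends.
        import socket
        import sys

        host, port = sys.argv[1], int(sys.argv[2])
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((host, port))
        sock.sendall(sys.stdin.read())   # forward all of stdin to the server
        while True:
            data = sock.recv(4096)
            if not data:
                break                    # server closed the connection
            sys.stdout.write(data)
            sys.stdout.flush()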

    Read the article

  • Bump the Bill

    - by David Dorf
    I'm writing this from 3,400 feet in the air, somewhere between Chicago and Austin. GoGo In-flight strikes again. Is there anywhere I can't get a WiFi connection? While listening to Deacon Blues by Steely Dan and skimming the news, I just came across an interesting article on mobile payments. Remember when I wrote about the iPhone Bump application and its possible use in retail? Well, it looks like PayPal updated its mobile payments application to include the bump technology. It's now possible to transfer money between individuals by bumping iPhones. According to the WSJ, PayPal did 24 million transactions on mobile phones in 2008 and 140 million in 2009. As the technology gets easier to use, that number is bound to increase. Alternatives to PayPal include Google Checkout, Amazon Payments, wireless carriers ("put it on my phone bill"), smart cards (using your phone's SIM card), and iTunes. That last one comes courtesy of a story Joe Skorupa wrote on mobile payments. It looks like Apple allows iPhone apps to take micro-payments via iTunes accounts, so there may come a time when it's possible to use your iPhone to make a purchase in a retail store and have the charge go through your iTunes account. There are still some improvements in usability to be made before using a phone is easier than swiping a credit card, but it's already better than fussing with cash.

    Read the article

  • Pro BizTalk 2009

    - by Sean Feldman
    I have finished reading the Pro BizTalk 2009 book from Apress. This is a great book if you've never dealt with BizTalk before and want a quick on-ramp. Although the book is chiefly concerned with the right way of building traditional BizTalk applications, it also dedicates a chapter to the ESB Toolkit and does a good job of analyzing it. The fact that the authors were concerned with subjects such as coupling, hard-coding, automation, etc. makes it very interesting. One warning: if you are expecting a book that will guide you step by step through examples, forget it. This book doesn't do that, nor would it be easy to follow if it did, given the errata found in it. I really was disappointed by the number of typos, inaccuracies, and technical mistakes. These kinds of things turn readers away, especially when they have had no experience with BizTalk in the past. But that is the only bad thing about it. As for the rest – great content. Next week I am taking a deep-dive course on BizTalk 2009. I really feel that this book has helped me a lot in getting my feet wet. The next target will be the ESB Toolkit. To get there, I will follow what the book's authors wrote: "you have to understand how BizTalk as an engine works".

    Read the article

  • More Mobile Payments

    - by David Dorf
    In the previous post I discussed Bump payments from PayPal, but that's not the only innovative way to make purchases using your phone. Verizon recently announced a partnership with Danal that allows shoppers to charge online purchases to their Verizon bill. For e-commerce sites that accept this type of payment, it's a two-step process. At checkout, the shopper enters their mobile number and billing zip code. Then an SMS message containing a one-time code is sent to the mobile phone, and that code must be entered on the e-commerce site. This two-factor authentication seems pretty secure, and no pre-registration or credit card is necessary. There's a $25-a-month maximum, but I bet the limit gets raised as Verizon gets more comfortable with the security. Merchants are charged a fee similar to credit card fees. Another example of mobile payments is offered by BlingNation. Customers attach a small NFC sticker to their phones that allows them to "tap" the POS device to make a payment. The NFC chip is connected to their checking account, so the transaction is treated as a debit payment. Text messages are sent to the mobile phone to confirm the payments, so shoppers can easily verify their purchases. BlingNation is working with banks like Adirondack Trust Company and The State Bank of La Junta in Colorado. Heck, you can even send money to inmates in the Arkansas prison system using your mobile phone, now that the state of Arkansas supports payments via its mobile website. Everyone is getting into the act now.

    Read the article

< Previous Page | 496 497 498 499 500 501 502 503 504 505 506 507  | Next Page >