Search Results

Search found 3349 results on 134 pages for 'geometry conversion'.


  • Implementing synchronous MediaTypeFormatters in ASP.NET Web API

    - by cibrax
    One of the main characteristics of MediaTypeFormatters in ASP.NET Web API is that they leverage the Task Parallel Library (TPL) for reading or writing a model into a stream. When you derive your class from the base class MediaTypeFormatter, you have to implement either the WriteToStreamAsync or ReadFromStreamAsync method, for writing or reading a model from a stream respectively. These two methods return a Task, which internally does all the serialization work, as illustrated below.

    public abstract class MediaTypeFormatter
    {
        public virtual Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext);
        public virtual Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger);
    }

    However, most of the time serialization is a safe operation that can be done synchronously. In fact, many of the serializer classes you will find in the .NET Framework only provide sync methods. So the question is, how can you transform that synchronous work into a Task? Creating a new task using the method Task.Factory.StartNew to do all the serialization work would probably be the typical answer. That would work, as a new task is going to be scheduled. However, it might involve some unnecessary context switches, which are out of our control and might affect performance, especially in server code.

    If you take a look at the source code of the MediaTypeFormatters shipped as part of the framework, you will notice that they are actually using another pattern, one based on the TaskCompletionSource class.

    public Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext)
    {
        var tsc = new TaskCompletionSource<AsyncVoid>();
        tsc.SetResult(default(AsyncVoid));

        // Do all the serialization work here synchronously

        return tsc.Task;
    }

    /// <summary>
    /// Used as the T in a "conversion" of a Task into a Task{T}
    /// </summary>
    private struct AsyncVoid
    {
    }

    They are basically doing all the serialization work synchronously and using a TaskCompletionSource to return a task that is already completed. To conclude this post, this is another approach you might want to consider when using serializers that are not compatible with an async model.

    Update: Henrik Nielsen from the ASP.NET team pointed out the existence of a built-in media type formatter for writing sync formatters: BufferedMediaTypeFormatter http://t.co/FxOfeI5x
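    For illustration, here is a minimal sketch of what a formatter built on that BufferedMediaTypeFormatter base class might look like. The CsvLikeFormatter name and its one-line serialization logic are invented for this example; the point is that you override the synchronous WriteToStream and let the base class handle the Task plumbing:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Formatting;
    using System.Net.Http.Headers;

    public class CsvLikeFormatter : BufferedMediaTypeFormatter
    {
        public CsvLikeFormatter()
        {
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/csv"));
        }

        public override bool CanReadType(Type type)
        {
            return false; // write-only formatter, just for this sketch
        }

        public override bool CanWriteType(Type type)
        {
            return type == typeof(string[]);
        }

        // Synchronous write: the base class wraps this in a completed Task,
        // so no TaskCompletionSource plumbing is needed in user code.
        public override void WriteToStream(Type type, object value, Stream writeStream, HttpContent content)
        {
            var writer = new StreamWriter(writeStream);
            writer.Write(string.Join(",", (string[])value));
            writer.Flush(); // don't dispose: the formatter does not own the stream
        }
    }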

    Read the article

  • fd partitions gone from 2 discs, md happy with it and resyncs. How to recover?

    - by d0nd
    Hey gurus, need some help badly with this one. I run a server with a 6Tb md raid5 volume built over 7*1Tb disks. I had to shut down the server recently, and when it came back up, 2 out of the 7 disks used for the raid volume had lost their partition configuration:

    dmesg:
    [ 10.184167] sda: sda1 sda2 sda3 // System disk
    [ 10.202072] sdb: sdb1
    [ 10.210073] sdc: sdc1
    [ 10.222073] sdd: sdd1
    [ 10.229330] sde: sde1
    [ 10.239449] sdf: sdf1
    [ 11.099896] sdg: unknown partition table
    [ 11.255641] sdh: unknown partition table

    All 7 disks have the same geometry and were configured alike:

    fdisk:
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x1e7481a5
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 121601 976760001 fd Linux raid autodetect

    All 7 partitions (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in an md raid5 xfs volume. When booting, md, which was (obviously) out of sync, kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; xfs tried to do some shenanigans as well:

    dmesg:
    [ 19.566941] md: md0 stopped.
    [ 19.817038] md: bind<sdc1>
    [ 19.817339] md: bind<sdd1>
    [ 19.817465] md: bind<sde1>
    [ 19.817739] md: bind<sdf1>
    [ 19.817917] md: bind<sdh>
    [ 19.818079] md: bind<sdg>
    [ 19.818198] md: bind<sdb1>
    [ 19.818248] md: md0: raid array is not clean -- starting background reconstruction
    [ 19.825259] raid5: device sdb1 operational as raid disk 0
    [ 19.825261] raid5: device sdg operational as raid disk 6
    [ 19.825262] raid5: device sdh operational as raid disk 5
    [ 19.825264] raid5: device sdf1 operational as raid disk 4
    [ 19.825265] raid5: device sde1 operational as raid disk 3
    [ 19.825267] raid5: device sdd1 operational as raid disk 2
    [ 19.825268] raid5: device sdc1 operational as raid disk 1
    [ 19.825665] raid5: allocated 7334kB for md0
    [ 19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2
    [ 19.825669] RAID5 conf printout:
    [ 19.825670] --- rd:7 wd:7
    [ 19.825671] disk 0, o:1, dev:sdb1
    [ 19.825672] disk 1, o:1, dev:sdc1
    [ 19.825673] disk 2, o:1, dev:sdd1
    [ 19.825675] disk 3, o:1, dev:sde1
    [ 19.825676] disk 4, o:1, dev:sdf1
    [ 19.825677] disk 5, o:1, dev:sdh
    [ 19.825679] disk 6, o:1, dev:sdg
    [ 19.899787] PM: Starting manual resume from disk
    [ 28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device
    [ 28.663228] XFS mounting filesystem md0
    [ 28.884433] md: resync of RAID array md0
    [ 28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
    [ 28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
    [ 28.884433] md: using 128k window, over a total of 976759936 blocks.
    [ 29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal)
    [ 32.680486] XFS: xlog_recover_process_data: bad clientid
    [ 32.680495] XFS: log mount/recovery failed: error 5
    [ 32.682773] XFS: log mount failed

    I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array but it didn't work: no matter what is in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 and sdh1, which explains why it won't use them. I just don't know why those partitions are gone from /dev and how to re-add them...
    blkid:
    /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2"
    /dev/sda2: TYPE="swap"
    /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3"
    /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"

    fdisk -l:

    Disk /dev/sda: 40.0 GB, 40020664320 bytes
    255 heads, 63 sectors/track, 4865 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x8c878c87
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 12 96358+ 83 Linux
    /dev/sda2 13 134 979965 82 Linux swap / Solaris
    /dev/sda3 135 4865 38001757+ 83 Linux

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x1e7481a5
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 121601 976760001 fd Linux raid autodetect

    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xc9bdc1e9
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 121601 976760001 fd Linux raid autodetect

    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xcc356c30
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 121601 976760001 fd Linux raid autodetect

    Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xe87f7a3d
    Device Boot Start End Blocks Id System
    /dev/sde1 1 121601 976760001 fd Linux raid autodetect

    Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xb17a2d22
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 121601 976760001 fd Linux raid autodetect

    Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x8f3bce61
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 121601 976760001 fd Linux raid autodetect

    Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xa98062ce
    Device Boot Start End Blocks Id System
    /dev/sdh1 1 121601 976760001 fd Linux raid autodetect

    I really don't know what happened or how to recover from this mess. Needless to say, the 5TB or so of data sitting on those disks is very valuable to me... Any ideas, anyone? Has anybody ever experienced a similar situation, or does anyone know how to recover from it? Can someone help me? I'm really desperate... :x

    Read the article

  • To Bit or Not To Bit

    - by Johnm
    'Twas a long day of troubleshooting and firefighting and now, with most of the office vacant, you face a blank scripting window to create a new table in your database. Many questions circle your mind like dirty water gurgling down the bathtub drain: "How normalized should this table be?", "Should I use an identity column?", "NVarchar or Varchar?", "Should this column be NULLABLE?", "I wonder what apple blue cheese bacon cheesecake tastes like?" Well, there are times when the mind goes its own direction.

    A Bit About Bit

    At some point during your table creation efforts you will encounter the decision of whether to use the bit data type for a column. The bit data type is an integer data type that recognizes only the values 1, 0 and NULL as valid. This data type is often utilized to store yes/no or true/false values. An example of its use would be a column called [IsGasoline], intended to contain the value 1 if the row's subject (a car) had a gasoline engine and 0 if it did not. The bit data type can even be found in some of the system tables of SQL Server. For example, the sysssispackages table in the msdb database, which contains SQL Server Integration Services package information for the packages stored in SQL Server, has a column called [IsEncrypted]. A value of 1 indicates that the package has been encrypted, while the value 0 indicates that it is not. I have learned that the most effective way to disperse the crowd that surrounds the office coffee machine is to engage in SQL Server debates. The bit data type has been one of the most recurring, as well as the most enjoyable, of these topics. It has a practical side and a philosophical side.

    Practical Consideration

    This data type certainly has its place and is a valuable option for database design, but it is often used in situations where the answer is really not a pure true/false response. In addition, true/false values are not very informative or scalable. Let's use the previously noted [IsGasoline] column for illustration. On the surface it appears to be a rather simple question when evaluating a car: "Does the car have a gasoline engine?" If the person entering data is entering a row for a Jeep Liberty, the response would be 1, since it has a gasoline engine. If that person is entering a row for a Chevrolet Volt, the response would be 0, since it has an electric engine. What happens when a person is entering a row for the gasoline/electric hybrid Toyota Prius? Would one person's conclusion be consistent with another person's conclusion? The argument could be made that the current intent for the database is to be used only for pure gasoline and pure electric engines; but this is where the scalability issue comes into play. With the use of a bit data type, a database modification and data conversion would be required if the business decided to take on hybrid engines. Whereas, alternatively, if the int data type were used as a foreign key to a reference table containing the engine type options, the change to include the hybrid option would only require an entry into the reference table.

    Philosophical Consideration

    Since the bit data type is often used for true/false or yes/no data (also called Boolean), it presents a philosophical conundrum of what to do about the allowance of the NULL value. The inclusion of NULL in a true/false or yes/no response simply violates the logical principle of bivalence, which states that "every proposition is either true or false". If NULL is not true, then it must be false. The mathematical laws of Boolean logic support this concept by stating that the only valid values in this scenario are 1 and 0. There is another way to look at this conundrum: NULL is also considered to be the absence of a response. In other words, it is the equivalent of "undecided". Anyone who watches the news can tell you that polls always include an "undecided" option. This could be considered a valid option in the world of yes/no/dunno. Throughout all of these considerations I have discovered one absolute certainty: When you have found a person, or group of persons, who are willing to entertain a philosophical debate of the bit data type, you have found some true friends.
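    A quick code-side sketch of the practical trade-off discussed above (the names are invented for illustration): a bool, like a bit column, models exactly two states, so a third option forces a type change everywhere it is used, whereas an enum (the in-code analogue of an int foreign key into a reference table) absorbs the new option as a one-line change:

    // With a bool, "hybrid" has no honest representation: both true and
    // false are wrong, and NULL-like workarounds start creeping in.
    public class CarWithBit
    {
        public bool IsGasoline { get; set; }
    }

    // With an enum (analogous to an int foreign key into a reference table),
    // adding a new engine type is a one-line, non-breaking change.
    public enum EngineType
    {
        Gasoline = 1,
        Electric = 2,
        Hybrid = 3   // new option: no type change required anywhere else
    }

    public class Car
    {
        public EngineType Engine { get; set; }
    }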

    Read the article

  • CodePlex Daily Summary for Monday, June 24, 2013

    CodePlex Daily Summary for Monday, June 24, 2013

    Popular Releases

    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.43: changes
      - NEW: Added Support for "ImageJumbo.com" links
      - NEW: Added Support for "ImgTiger.com" links
      - NEW: Added Support for "ImageDax.net" links
      - NEW: Added Support for "HosterBin.com" links
      - FIXED: "ImgServe.net" links
    - WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...
    - Windows.Forms.Controls Revisited (SSTA.WinForms): SSTA.WinForms 1.0.0: Latest stable release
    - Ascend 3D: Ascend 2.0: Release notes:
      - Implemented bone/armature animation
      - Refactored class hierarchy and naming
      - Addressed high CPU usage issue during animation
      - Updated the Blender exporter and Ascend model format (now XML)
      - Created AscendViewer, a tool for viewing Ascend models
    - Indent Guides for Visual Studio: Indent Guides v13: Important: This release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History. Changed in v13:
      - Added page width guide lines
      - Added guide highlighting options
      - Fixed guides appearing over collapsed blocks
      - Fixed guides not appearing in newly opened files
      - Fixed some potential crashes
      - Fixed lines going through pragma statements
      - Various updates for VS 2012 and VS 2013
      - Removed VS 2010 support
      Changed in v12.1: Fixed crash when unable to start...
    - Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d (supports .NET 3.5, 4.0 and 4.5). Includes:
      - Fluent.dll (with .pdb and .xml)
      - Showcase Application Samples (not for .NET 3.5)
      - Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage)
      - Resizing (ribbon reducing & enlarging principles)
      - Galleries (Gallery in ContextMenu, InRibbonGallery)
      - MVVM (shows how to use this library with Model-View-ViewModel pattern)
      - KeyTips, ScreenTips, Toolbars, ColorGallery
      - *Walkthrough (do...
    - Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes:
      - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary.
      - MagickGeometry is no longer IDisposable.
      - Renamed dll's so they include the platform name.
      - Image profiles can now only be accessed and modified with ImageProfile classes.
      - Renamed DrawableBase to Drawable.
      - Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...
    - Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???
    - Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 Beta Release
    - Outlook 2013 Add-In: Email appointments: This new version includes the following changes:
      - Ability to drag emails to the calendar to create appointments. Will gather all the recipients from all the emails and create an appointment on the day you drop the emails, with the text and subject of the last selected email (if more than one selected).
      - Increased maximum number of appointments to display to 30.
      You will have to uninstall the previous version (add/remove programs) if you had installed it before. Before unzipping the file...
    - Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.5.2: v1.5.2 - This is a service release. We've fixed a number of issues with Tasks and IoC. We've made some consistency improvements across platforms and fixed a number of minor bugs. See changes.txt for details. Packages available on NuGet:
      - Caliburn.Micro: The full framework compiled into an assembly.
      - Caliburn.Micro.Start: Includes Caliburn.Micro plus a starting bootstrapper, view model and view.
      - Caliburn.Micro.Container: The Caliburn.Micro inversion of control container (IoC); source code...
    - SQL Compact Query Analyzer: 1.0.1.1511: Beta build of SQL Compact Query Analyzer. Bug fixes:
      - Resolved issue where the application crashes when loading a database that contains tables without a primary key
      Features:
      - Displays database information (database version, filename, size, creation date)
      - Displays schema summary (number of tables, columns, primary keys, identity fields, nullable fields)
      - Displays the information schema views
      - Displays column information (database type, clr type, max length, allows null, etc)
      - Support...
    - CODE Framework: 4.0.30618.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hide all content.
    - Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.18): XrmToolbox improvements:
      - Use new connection controls (use of Microsoft.Xrm.Client.dll)
      - New display capabilities for tools (size, image and colors)
      - Added prerequisites check
      - Added Most Used Tools feature
      Tools improvements: New tool: Solution Transfer Tool (v1.0.0.0), developed by DamSim. Updated tool: View Layout Replicator (v1.2013.6.17), double click on source view to display its layoutXml. All tools list: Access Checker (v1.2013.6.17), Attribute Bulk Updater (v1.2013.6.18), FetchXml Tester (v1.2013.6.1...
    - Media Companion: Media Companion MC3.570b: New:
      * Movie - using XBMC TMDB - now renames movies if option selected.
      * Movie - using Xbmc Tmdb - Actor images saved from TMDb if option selected.
      Fixed:
      * Movie - Checks for poster.jpg against missing poster filter
      * Movie - Fixed continual scraping of vob movie file (not DVD structure)
      * Both - Correctly display audio channels
      * Both - Correctly populate audio info in nfo's if multiple audio tracks.
      * Both - added icons and checked for DTS ES and Dolby TrueHD audio tracks.
      * Both - Stream d...
    - Document.Editor: 2013.24: What's new for Document.Editor 2013.24:
      - Improved Video Editing support
      - Improved Link Editing support
      - Minor bug fixes, improvements and speed ups
    - ExtJS based ASP.NET Controls: FineUI v3.3.0: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI???? ExtJS ?????????,???? ExtJS ?。 ????? FineUI ? ExtJS ?:http://fineui.com/bbs/forum.php?mod=viewthrea...
    - BarbaTunnel: BarbaTunnel 8.0: Check Version History for more information about this release.
    - ExpressProfiler: ExpressProfiler v1.5:
      [+] added Start time, End time event columns
      [+] added SP:StmtStarting, SP:StmtCompleted events
      [*] fixed bug with Audit:Logout event
    - patterns & practices: Data Access Guidance: Data Access Guidance Drop4 2013.06.17: Drop 4

    New Projects

    - Activity Injector: Activity Injector is mainly used in unit testing workflow activities.
    - Application Starter Tremens: Application for starting Applications
    - AudiwikiREF: This project is created to finish my referral assignment because I faced many problems with the previous one, where I couldn't connect the project and commit.
    - AVTS GLUCK ANTIVIRUS SYSTEM: AVTS Gluck AntiVirus System - the best antivirus made by young developers from Ukraine. It was written in C#, C++, VB.NET, executable files, PHP DS, and more...
    - Base64 File Converter: File converting tool that converts any file into a Base64 TXT file and vice versa, just by drag-and-drop gesture.
    - Basket Builders Umbraco bootstrap packages: Our umbraco packages to speed up projects
    - Bugzy Bug Tracking and Work Management: Creating a Bug Tracking and Work Management system while helping the JSON-RPC standard become as strong as the bloated XML-RPC (SOAP)
    - Dalmatian Build Script: Dalmatian allows you to automate your build using C# or VB.NET
    - Découverte Majeure MISN Chrono-courses: Chronocourses project
    - DocToolTip Library: DocToolTip library allows creating screenshots of winform forms with the name and type of component controls.
    - DotaPick: Summary
    - HiUpdateTools - easy publish and update your app: HiUpdateTools is an easy tool to publish new versions of your application
    - Kinect Skeleton Recorder: Records and shows XBox 360 Kinect skeleton data using windows forms. Saves the data as XML that can be read by other applications for use in animation.
    - MARIT (BETA): MARIT is a simple drawing app for Android. More features are on the todo-list so stay tuned! Feedback and requests are appreciated.
    - MvcToolbox: MVC Toolbox is a library which contains lots of useful methods which will help developers to write less code and to be more productive.
    - PrototypeRef: list of testing tools
    - SmartPlay: SmartPlay
    - Solid Geometry Simulation: Implements a simple fur/hair dynamic engine using physBAM. This package includes a semi-implicit solver for hair simulation and plugins for maya/houdini.
    - Windows.Forms.Controls Revisited (SSTA.WinForms): SearchableListBox control built on the Windows.Forms.ListBox; renders list box items with multiple search terms highlighted.
    - YiLeSystem: This is a dining system.

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade school child could tell you that a computer "speaks" in '1's and '0's. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can't explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970s. Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because knowledge of the low-level concepts seems to be waning in recent times, I thought a review would be in order.

    CHARACTER: See Spot Run
    HEX:       53 65 65 20 53 70 6F 74 20 52 75 6E
    DECIMAL:   83 101 101 32 83 112 111 116 32 82 117 110
    BINARY:    01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110

    In this example, I have broken down the words "See Spot Run" to a level computers can understand: machine language.

    CHARACTER: The character level is what is rendered by the computer. A "Character Set" or "Code Page" contains 256 characters, both printable and unprintable. Each character represents 1 BYTE of data. For example, the character string "See Spot Run" is 12 bytes long, exclusive of the quotation marks. Remember, a SPACE is an unprintable character, but it still requires a byte. In the example I have used the default Windows character set, ASCII, which you can see here: http://www.asciitable.com/

    HEX: Hex is short for hexadecimal, or Base 16. Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it's not much of a stretch. Hex uses numeric values ranging from zero to fifteen, or 0 - F, so each place has a possible 16 values as opposed to a possible 10 values in base 10. Therefore, the number 10 in hex is equal to the number 16 in decimal. Because one hex digit corresponds exactly to four bits, hex is the conventional shorthand for the binary values the machine actually stores.

    DECIMAL: The decimal conversion is strictly for us humans to use for calculations and conversions. It is much easier for us humans to calculate that [30 - 10 = 20] in decimal than it is for us to calculate [1E - A = 14] in hex. In the old days, an error in a program could be found by determining the displacement from the entry point of a module. Since those values were dumped from the computer's head, they were in hex. A programmer needed to convert them to decimal, do the equation and convert back to hex. This gets into relative and absolute addressing, a topic for another day.

    BINARY: Binary, or machine code, is where any value can be expressed in 1s and 0s. It is really Base 2, because each place can have a possibility of only 2 characters, a 1 or a 0. In binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s? Very simply, computers are made up of lots and lots of transistors, which at any given moment can be ON (1) or OFF (0).

    Each transistor is a bit, and the order in which the transistors fire (or don't fire) is what distinguishes one value from another in the computer's head (or CPU). Consider 32-bit vs 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time. A 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.

    DECIMAL   HEX   BINARY
          0     0     0000
          1     1     0001
          2     2     0010
          3     3     0011
          4     4     0100
          5     5     0101
          6     6     0110
          7     7     0111
          8     8     1000
          9     9     1001
         10     A     1010
         11     B     1011
         12     C     1100
         13     D     1101
         14     E     1110
         15     F     1111

    Remember that each character is a BYTE, there are 2 hex characters in a byte (called nibbles) and 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
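    As a hands-on check of the breakdown above, here is a short sketch (not part of the original post) that prints the same character/hex/decimal/binary table using the standard .NET Encoding and Convert APIs:

    using System;
    using System.Text;

    class JoyOfHex
    {
        static void Main()
        {
            // Each ASCII character of the string occupies exactly one byte.
            byte[] bytes = Encoding.ASCII.GetBytes("See Spot Run");

            foreach (byte b in bytes)
            {
                // character, hex, decimal, and binary (padded to 8 bits)
                Console.WriteLine("{0}  HEX: {1:X2}  DEC: {2,3}  BIN: {3}",
                    (char)b, b, b, Convert.ToString(b, 2).PadLeft(8, '0'));
            }
        }
    }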

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know.

    Overall Impact

    Generally speaking, Twitter API v1.1 is semantically very much the same as its predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn't change much at all. The following sections describe what did change.

    Authentication

    In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that's all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter. You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support.

    The New Search

    One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results. Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn't change.

    Different Rate Limits

    The issue of rate limits itself is contentious, but this discussion is focused on the coding experience, and I'll leave the politics to those who prefer to engage in that activity. What's important here is that both headers and resources have changed. You should review Twitter's Rate Limit documentation to understand what the changes mean. A quick explanation is that rate limits are applied individually to each resource in 15-minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for. The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries, and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header names are of the form X-Rate-Limit…, which is different from the previous header names. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names. I haven't done anything with Feature rate limit properties yet, but they appear to no longer be available; this will require more follow-up.

    Error Handling

    Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There's also a new ErrorCode that's populated with the message error code.
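    To make two of these changes concrete, here is a short sketch (not from the original post) of reading the new-style Search results and catching the updated exception. It assumes an already-authenticated TwitterContext named twitterCtx and the query shape shown in the LINQ to Twitter documentation; Statuses, Message, and ErrorCode are the properties described above:

    // Sketch: assumes a configured, authenticated TwitterContext.
    static void ShowSearchResults(TwitterContext twitterCtx)
    {
        try
        {
            var search =
                (from s in twitterCtx.Search
                 where s.Type == SearchType.Search &&
                       s.Query == "LINQ to Twitter"
                 select s)
                .SingleOrDefault();

            // v1.1: tweets arrive in Search.Statuses (a List<Status>),
            // with meta-data in SearchMetaData.
            foreach (var status in search.Statuses)
            {
                Console.WriteLine(status.Text);
            }
        }
        catch (TwitterQueryException ex)
        {
            // Message carries Twitter's error text; ErrorCode is new in v1.1.
            Console.WriteLine("{0}: {1}", ex.ErrorCode, ex.Message);
        }
    }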
    Parameters

    Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified that you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status. If you were always setting IncludeEntities to true, then you won't see a change. However, be aware that you'll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not matter to you depending on the requirements of your application, but you should be aware of it.

    Everything Else

    There might be small changes here and there that I haven't mentioned, but these were the ones you should be most aware of. Streams didn't change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you'll be seeing me make that change some time in the future. Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly.

    Summary

    The big changes to the Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You'll need to change your code to read Search results differently, but the query is much the same as you use now. There's a new RateLimits API, one of the Help queries. Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect most others to be small or affect a smaller percentage of developers. You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.

    @JoeMayo

    Read the article

  • In Social Relationship Management, the Spirit is Willing, but Execution is Weak

    - by Mike Stiles
    In our final talk in this series with Aberdeen's Trip Kucera, we wanted to find out if enterprise organizations are actually doing anything about what they're learning around the importance of communicating via social and using social listening for a deeper understanding of customers and prospects. We found out that if your brand is lagging behind, you're not alone.

    Spotlight: How was Aberdeen able to find out if companies are putting their money where their mouth is when it comes to implementing social across the enterprise?

    Trip: One way to think about the relative challenges a business has in a given area is to look at the gap between "say" and "do." The first of those words reveals the brand's priorities, while the second reveals their ability to execute on those priorities. In Aberdeen's research, we capture this by asking firms to rank the value of a set of activities from one on the low end to five on the high end. We then ask them to rank their ability to execute those same activities, again on a one to five, not effective to highly effective scale.

    Spotlight: And once you get their self-assessments, what is it you're looking for?

    Trip: There are two things we're looking for in this analysis. The first is we want to be able to identify the widest gaps between perception of value and execution. This suggests impediments to adoption or simply a high level of challenge, be it technical or otherwise. It may also suggest areas where we can expect future investment and innovation.

    Spotlight: So the biggest potential pain points surface, places where they know something is critical but also know they aren't doing much about it. What's the second thing you look for?

    Trip: The second thing we want to do is look at specific areas in which high-performing companies, the Leaders, are out-executing the Followers. This points to the business impact of these activities since Leaders are defined by a set of business performance metrics. Put another way, we're correlating adoption of specific business competencies with performance, looking for what high-performers do differently.

    Spotlight: Ah ha, that tells us what steps the winners are taking that are making them winners. So what did you find out?

    Trip: Generally speaking, we see something of a glass curtain when it comes to the social relationship management execution gap. There isn't a single social media activity in which more than 50% of respondents indicated effectiveness, which would be a 4 or 5 on that 1-5 scale. This despite the fact that 70% of firms indicate that generating positive social media mentions is valuable or very valuable, a 4 or 5 on our 1-5 scale.

    Spotlight: Well at least they get points for being honest. The verdict they're giving themselves is that they just aren't cutting it in these highly critical social development areas.

    Trip: And the widest gap is around directly engaging with customers and/or prospects on social networks, which 69% of firms rated as valuable but only 34% of companies say they are executing well. Perhaps even more interesting is that these two are interdependent, since you're most likely to generate goodwill on social through happy, engaged customers. This data also suggests that social is largely being used as a broadcast channel rather than for one-to-one engagement. As we've discussed previously, social is an inherently personal media.
    Spotlight: And if they're still using it as a broadcast channel, that shows they still fail to understand the root of social and see it as just another outlet for their ads and push-messaging. That's depressing.

    Trip: A second way to evaluate this data is by using Aberdeen's performance benchmarking. The story is a bit different, but consistent in its own way. The first thing we notice is that Leaders are more effective in their execution of several key social relationship management capabilities, namely generating positive mentions and engaging with "influencers" and customers. Based on the fact that Aberdeen uses a broad set of performance metrics to rank the respondents as either "Leaders" (top 35% in weighted performance) or "Followers" (bottom 65% in weighted performance), from website conversion to annual revenue growth, we can then correlate high social effectiveness with company performance. We can also connect the specific social capabilities used by Leaders with effectiveness. We spoke about a few of those key capabilities last time and also discuss them in a new report: Social Powers Activate: Engineering Social Engagement to Win the Hidden Sales Cycle.

    Spotlight: What all that tells me is there are rewards for making the effort and getting it right. That's how you become a Leader.

    Trip: But there's another part of the story, which is that overall effectiveness, even among Leaders, is muted. There's just one activity in which more than a majority of Leaders cite high effectiveness, that activity being the generation of positive buzz. While 80% of Leaders indicate "directly engaging with customers" through social media channels is valuable, the highest rated activity among Leaders, only 42% say they're effective. This gap even among Leaders shows the challenges still involved in effective social relationship management.

    @mikestiles
    Photo: stock.xchng

    Read the article

  • Part 8: How to name EBS Customizations

    - by volker.eckardt(at)oracle.com
    You might wonder why I am discussing this here. The reason is simple: nearly every project has slightly different naming conventions, which always makes life a bit complicated (for developers, but also for those responsible for setup, and for consultants). Although we always create a document to describe the technical object naming conventions, I have rarely seen a dedicated document with functional naming conventions. To be precise, from my standpoint there should always be one global naming definition for an implementation! Let me discuss some related questions:

    - What is the best convention for the customization reference?
    - How to name database objects (tables, packages etc.)?
    - How to name functional objects like Value Sets, Concurrent Programs, etc.?
    - How to separate customizations from standard objects best?

    What is the best convention for the customization reference?

    The customization reference is the key you use to reference your customization from other lists, from the project plan etc. Usually it is something like XXHU_CONV_22 (HU=customer abbreviation, CONV=Conversion object #22) or XXFA_DEPRN_RPT_02 (FA=Fixed Assets, DEPRN=Short object group, here depreciation, RPT=Report, 02=2nd report in this area). As this is just a reference (not an object name yet), I would prefer the second option:

    - XX=Customization
    - FA=Main EBS module linked (you may sometimes have more, but FA is the main one)
    - DEPRN_RPT=Short name to specify the customization
    - 02=a unique number

    The important point here is that the HU isn't used, because XX is enough to mark a custom object, and the 3rd and 4th characters can be used for the EBS module short name.

    How to name database objects (tables, packages etc.)?

    I have led different developer teams, and I know that one common way is to take the customization reference and add more characters behind it to classify the object (like _V for a view and _T1 for triggers etc.). The only concern I have with this approach is reusability. If you name your view XXFA_DEPRN_RPT_02_V, no one will reuse this nice view by choice, as it seems to be specific to this CEMLI. My suggestion is rather to name the view XXFA_DEPRN_PERIODS_V and thereby allow reusability for other CEMLIs (although the view will be deployed primarily with CEMLI package XXFA_DEPRN_RPT_02). (Check also one of the following blogs, where I will talk about deployment.)

    How to name Value Sets, Concurrent Programs, etc.?

    For Value Sets I would go with the same convention as for database objects, starting with XX<Module>…. For Concurrent Programs the situation is a bit different. This "object" is seen and used by a lot of users, and they will search for it. In many projects it is common to start again with the company short name, or with XX. My proposal would differ. If you have created your own report and you name it "XX: Invoice Report", the user has to remember that this report does not start with "I", it starts with X. Would you like typing an X if you are looking for an Invoice report? No, you wouldn't! So my advice would be to name it "Invoice Report (XXAP)". We still know it is custom (because of the XXAP), but the end user will type the key "i" to get it (and will see similar reports also starting with "i"). I hope that the general schema behind this has now become obvious.

    How to separate customizations from standard objects best?

    I would not have this section here if naming did not play an important role. Unfortunately, we cannot always link a custom application to our own object, therefore the naming is really important. In the file system structure we use our $XXyy_TOP; in JAVA_TOP it is perhaps also "xx" in front. But in the database itself? Although there are different concepts in place, many implementations still use the standard "apps" approach, meaning custom objects are stored in the apps schema (which should not cause any trouble).

    Final advice: review the naming conventions regularly, once a month. You may have to add more! And, publish them!

    To summarize: Technical and functional customized objects should always follow a naming convention. This naming convention should be project-wide, and only one place shall be used to maintain it (like a Wiki). If the name is for the end user, rather put a customization identifier at the end; if it is an internal name, start with XX…

    Read the article

  • C# Open Source software that is useful for learning Design Patterns

    - by Fathom Savvy
    In college I took a class in Expert Systems. The language the book taught (CLIPS) was esoteric - Expert Systems: Principles and Programming, Fourth Edition. I remember having a tough time with it. So, after almost failing the class, I needed to create the most awesome Expert System for my final presentation. I chose to create an expert system that would calculate risk analysis for a person's retirement portfolio. In short, the system would provide the services normally performed by one's financial adviser. In other words, based on personality, age, state of the macro economy, and other factors, should one's portfolio be conservative, moderate, or aggressive? In the appendix of the book (or on the CD-ROM), there was this in-depth example program for something unrelated to my presentation. Over my break, I read and re-read every line of that program until I understood it to the letter. Even though it was unrelated, I learned more than I ever could by reading all of the chapters. My presentation turned out to be pretty damn good and I received praises from my professor and classmates. So, the moral of the story is..., by understanding other people's code, you can gain greater insight into a language/paradigm than by reading canonical examples. Still, to this day, I am having trouble with everyday design patterns such as the Factory Pattern. I would like to know if anyone could recommend open source software that would help me understand the Gang of Four design patterns, at the very least. I have read the books, but I'm having trouble writing code for the concepts in the real world. Perhaps, by studying code used in today's real world applications, it might just "click". I realize a piece of software may only implement one kind of design pattern. But, if the pattern is an implementation you think is good for learning, and you know what pattern to look for within the source, I'm hoping you can tell me about it. For example, the System.Linq.Expressions namespace has a good example of the Visitor Pattern. The client calls Expression.Accept(new ExpressionVisitor()), which calls ExpressionVisitor (VisitExtension), which calls back to Expression (VisitChildren), which then calls Expression (Accept) again - wooah, kinda convoluted. The point to note here is that VisitChildren is a virtual method. Both Expression and those classes derived from Expression can implement the VisitChildren method any way they want. This means that one type of Expression can run code that is completely different from another type of derived Expression, even though the ExpressionVisitor class is the same in the Accept method. (As a side note Expression.Accept is also virtual). In the end, the code provides a real world example that you won't get in any book because it's kinda confusing. To summarize, If you know of any open source software that uses a design pattern implementation you were impressed by, please list it here. I'm sure it will help many others besides just me. 
    public class VisitorPatternTest
    {
        public void Main()
        {
            Expression normalExpr = new Expression();
            normalExpr.Accept(new ExpressionVisitor());

            Expression binExpr = new BinaryExpression();
            binExpr.Accept(new ExpressionVisitor());
        }
    }

    public class Expression
    {
        protected internal virtual Expression Accept(ExpressionVisitor visitor)
        {
            return visitor.VisitExtension(this);
        }

        protected internal virtual Expression VisitChildren(ExpressionVisitor visitor)
        {
            if (!this.CanReduce)
            {
                throw Error.MustBeReducible();
            }
            return visitor.Visit(this.ReduceAndCheck());
        }

        public virtual Expression Visit(Expression node)
        {
            if (node != null)
            {
                return node.Accept(this);
            }
            return null;
        }

        public Expression ReduceAndCheck()
        {
            // CanReduce, Reduce, Type, Error and TypeUtils are framework
            // internals from the decompiled source, elided here.
            if (!this.CanReduce)
            {
                throw Error.MustBeReducible();
            }
            Expression expression = this.Reduce();
            if ((expression == null) || (expression == this))
            {
                throw Error.MustReduceToDifferent();
            }
            if (!TypeUtils.AreReferenceAssignable(this.Type, expression.Type))
            {
                throw Error.ReducedNotCompatible();
            }
            return expression;
        }
    }

    public class BinaryExpression : Expression
    {
        // Double dispatch: the derived type routes the visitor to the
        // overload that matches its concrete type.
        protected internal override Expression Accept(ExpressionVisitor visitor)
        {
            return visitor.VisitBinary(this);
        }

        protected internal override Expression VisitChildren(ExpressionVisitor visitor)
        {
            return CreateDummyExpression();
        }

        protected internal Expression CreateDummyExpression()
        {
            Expression dummy = new Expression();
            return dummy;
        }
    }

    public class ExpressionVisitor
    {
        public virtual Expression Visit(Expression node)
        {
            if (node != null)
            {
                return node.Accept(this);
            }
            return null;
        }

        protected internal virtual Expression VisitExtension(Expression node)
        {
            return node.VisitChildren(this);
        }

        protected internal virtual Expression VisitBinary(BinaryExpression node)
        {
            return ValidateBinary(node,
                node.Update(this.Visit(node.Left),
                            this.VisitAndConvert<LambdaExpression>(node.Conversion, "VisitBinary"),
                            this.Visit(node.Right)));
        }
    }
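    For contrast, here is a minimal, self-contained sketch of the same double-dispatch idea with the framework's reduction and validation plumbing stripped away; all of the names are invented for illustration:

    using System;

    // The element hierarchy: each node "accepts" a visitor and dispatches
    // to the overload matching its concrete type (double dispatch).
    abstract class Node
    {
        public abstract void Accept(NodeVisitor visitor);
    }

    class ConstantNode : Node
    {
        public int Value;
        public override void Accept(NodeVisitor visitor) { visitor.VisitConstant(this); }
    }

    class AddNode : Node
    {
        public Node Left, Right;
        public override void Accept(NodeVisitor visitor) { visitor.VisitAdd(this); }
    }

    // The visitor: one method per concrete node type. New operations are
    // added by writing new visitors, without touching the node classes.
    class NodeVisitor
    {
        public virtual void VisitConstant(ConstantNode node)
        {
            Console.Write(node.Value);
        }

        public virtual void VisitAdd(AddNode node)
        {
            Console.Write("(");
            node.Left.Accept(this);   // recursive dispatch into children
            Console.Write(" + ");
            node.Right.Accept(this);
            Console.Write(")");
        }
    }

    class Demo
    {
        static void Main()
        {
            Node expr = new AddNode
            {
                Left = new ConstantNode { Value = 1 },
                Right = new AddNode
                {
                    Left = new ConstantNode { Value = 2 },
                    Right = new ConstantNode { Value = 3 }
                }
            };
            expr.Accept(new NodeVisitor()); // prints (1 + (2 + 3))
            Console.WriteLine();
        }
    }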

    Read the article

  • XNA shield effect with a Primative sphere problem

    - by Sparky41
    I'm having an issue with a shield effect I'm trying to develop. I want a shield effect that surrounds part of a model, like this: http://i.imgur.com/jPvrf.png. I currently have this: http://i.imgur.com/Jdin7.png (the red lines are a simple texture, a black background with a red cross in it, for testing purposes: http://i.imgur.com/ODtzk.png, where the smaller cross in the middle shows the contact point). This sphere is drawn via a primitive (DrawIndexedPrimitives). This is how I calculate the pieces of the sphere, using a class I've called Sphere (this class is based off the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d):

        public class Sphere
        {
            // During the process of constructing a primitive model, vertex
            // and index data is stored on the CPU in these managed lists.
            List<VertexPositionNormal> vertices = new List<VertexPositionNormal>();
            List<ushort> indices = new List<ushort>();

            // Once all the geometry has been specified, the InitializePrimitive
            // method copies the vertex and index data into these buffers, which
            // store it on the GPU ready for efficient rendering.
            VertexBuffer vertexBuffer;
            IndexBuffer indexBuffer;
            BasicEffect basicEffect;

            public Vector3 position = Vector3.Zero;
            public Matrix RotationMatrix = Matrix.Identity;
            public Texture2D texture;

            /// <summary>
            /// Constructs a new sphere primitive,
            /// with the specified size and tessellation level.
            /// </summary>
            public Sphere(float diameter, int tessellation, Texture2D text, float up, float down, float portstar, float frontback)
            {
                texture = text;

                if (tessellation < 3)
                    throw new ArgumentOutOfRangeException("tessellation");

                int verticalSegments = tessellation;
                int horizontalSegments = tessellation * 2;
                float radius = diameter / 2;

                // Start with a single vertex at the bottom of the sphere.
                AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero); //bottom position5

                // Create rings of vertices at progressively higher latitudes.
                for (int i = 0; i < verticalSegments - 1; i++)
                {
                    float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2;

                    float dy = (float)Math.Sin(latitude / up); //(up)5
                    float dxz = (float)Math.Cos(latitude);

                    // Create a single ring of vertices at this latitude.
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        float longitude = j * MathHelper.TwoPi / horizontalSegments;

                        float dx = (float)(Math.Cos(longitude) * dxz) / portstar; //port and starboard (right)2
                        float dz = (float)(Math.Sin(longitude) * dxz) * frontback; //front and back1.4

                        Vector3 normal = new Vector3(dx, dy, dz);
                        AddVertex(normal * radius, normal, new Vector2(j, i));
                    }
                }

                // Finish with a single vertex at the top of the sphere.
                AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One); //top position5

                // Create a fan connecting the bottom vertex to the bottom latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(0);
                    AddIndex(1 + (i + 1) % horizontalSegments);
                    AddIndex(1 + i);
                }

                // Fill the sphere body with triangles joining each pair of latitude rings.
                for (int i = 0; i < verticalSegments - 2; i++)
                {
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        int nextI = i + 1;
                        int nextJ = (j + 1) % horizontalSegments;

                        AddIndex(1 + i * horizontalSegments + j);
                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);

                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);
                    }
                }

                // Create a fan connecting the top vertex to the top latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(CurrentVertex - 1);
                    AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments);
                    AddIndex(CurrentVertex - 2 - i);
                }

                //InitializePrimitive(graphicsDevice);
            }

            /// <summary>
            /// Adds a new vertex to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate)
            {
                vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate));
            }

            /// <summary>
            /// Adds a new index to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddIndex(int index)
            {
                if (index > ushort.MaxValue)
                    throw new ArgumentOutOfRangeException("index");

                indices.Add((ushort)index);
            }

            /// <summary>
            /// Queries the index of the current vertex. This starts at
            /// zero, and increments every time AddVertex is called.
            /// </summary>
            protected int CurrentVertex
            {
                get { return vertices.Count; }
            }

            public void InitializePrimitive(GraphicsDevice graphicsDevice)
            {
                // Create a vertex declaration, describing the format of our vertex data.
                // Create a vertex buffer, and copy our vertex data into it.
                vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal), vertices.Count, BufferUsage.None);
                vertexBuffer.SetData(vertices.ToArray());

                // Create an index buffer, and copy our index data into it.
                indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), indices.Count, BufferUsage.None);
                indexBuffer.SetData(indices.ToArray());

                // Create a BasicEffect, which will be used to render the primitive.
                basicEffect = new BasicEffect(graphicsDevice);
                //basicEffect.EnableDefaultLighting();
            }

            /// <summary>
            /// Draws the primitive model, using the specified effect. Unlike the other
            /// Draw overload where you just specify the world/view/projection matrices
            /// and color, this method does not set any renderstates, so you must make
            /// sure all states are set to sensible values before you call it.
            /// </summary>
            public void Draw(Effect effect)
            {
                GraphicsDevice graphicsDevice = effect.GraphicsDevice;

                // Set our vertex declaration, vertex buffer, and index buffer.
                graphicsDevice.SetVertexBuffer(vertexBuffer);
                graphicsDevice.Indices = indexBuffer;
                graphicsDevice.BlendState = BlendState.Additive;

                foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
                {
                    effectPass.Apply();

                    int primitiveCount = indices.Count / 3;
                    graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Count, 0, primitiveCount);
                }

                graphicsDevice.BlendState = BlendState.Opaque;
            }

            /// <summary>
            /// Draws the primitive model, using a BasicEffect shader with default
            /// lighting. Unlike the other Draw overload where you specify a custom
            /// effect, this method sets important renderstates to sensible values
            /// for 3D model rendering, so you do not need to set these states before
            /// you call it.
            /// </summary>
            public void Draw(Camera camera, Color color)
            {
                // Set BasicEffect parameters.
                basicEffect.World = GetWorld();
                basicEffect.View = camera.view;
                basicEffect.Projection = camera.projection;
                basicEffect.DiffuseColor = color.ToVector3();
                basicEffect.TextureEnabled = true;
                basicEffect.Texture = texture;

                GraphicsDevice device = basicEffect.GraphicsDevice;
                device.DepthStencilState = DepthStencilState.Default;

                if (color.A < 255)
                {
                    // Set renderstates for alpha blended rendering.
                    device.BlendState = BlendState.AlphaBlend;
                }
                else
                {
                    // Set renderstates for opaque rendering.
                    device.BlendState = BlendState.Opaque;
                }

                // Draw the model, using BasicEffect.
                Draw(basicEffect);
            }

            public virtual Matrix GetWorld()
            {
                return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position);
            }
        }

        public struct VertexPositionNormal : IVertexType
        {
            public Vector3 Position;
            public Vector3 Normal;
            public Vector2 TextureCoordinate;

            /// <summary>
            /// Constructor.
            /// </summary>
            public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor)
            {
                Position = position;
                Normal = normal;
                TextureCoordinate = textCoor;
            }

            /// <summary>
            /// A VertexDeclaration object, which contains information about the vertex
            /// elements contained within this struct.
            /// </summary>
            public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
            );

            VertexDeclaration IVertexType.VertexDeclaration
            {
                get { return VertexPositionNormal.VertexDeclaration; }
            }
        }

    A simple call to the class initialises it, and the Draw method is called from the master draw method in the GameComponent. My current thoughts on this are: the direction of the weapon hitting the ship is used to get the middle position for the texture, and the texture is then wrapped around the drawn sphere based on this point of contact (a rough sketch of one approach follows below). The problem is I'm not sure how to do this. Can anyone help, or if you have a better idea please tell me, I'm open to opinions. :-) Thanks.
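    One way to attack the texture-wrapping step, offered as a rough sketch rather than a tested solution: treat the contact direction as the centre of the texture, and shift every vertex's UV so that this direction maps to (0.5, 0.5). The method below is hypothetical; it assumes the impact point is available in world space, and after rebuilding the UVs you would need to call InitializePrimitive again so the modified vertices reach the GPU.

        // Hypothetical sketch: re-centre the spherical UV mapping on the impact point.
        public void CenterTextureOnImpact(Vector3 hitWorld)
        {
            // Bring the contact point into the sphere's local space and normalise it.
            Vector3 local = Vector3.Transform(hitWorld, Matrix.Invert(GetWorld()));
            Vector3 hitDir = Vector3.Normalize(local);

            // Longitude/latitude of the impact direction, both mapped into [0, 1].
            float hitU = (float)(Math.Atan2(hitDir.Z, hitDir.X) / MathHelper.TwoPi);
            float hitV = (float)(Math.Asin(MathHelper.Clamp(hitDir.Y, -1f, 1f)) / MathHelper.Pi + 0.5);

            for (int i = 0; i < vertices.Count; i++)
            {
                // Each vertex position is a scaled copy of its direction, so normalising
                // it recovers an approximate direction even with the squash factors.
                Vector3 n = Vector3.Normalize(vertices[i].Position);
                float u = (float)(Math.Atan2(n.Z, n.X) / MathHelper.TwoPi);
                float v = (float)(Math.Asin(MathHelper.Clamp(n.Y, -1f, 1f)) / MathHelper.Pi + 0.5);

                // Offset so the impact direction lands at the centre of the texture,
                // wrapping the U coordinate around the seam.
                float du = u - hitU + 0.5f;
                du -= (float)Math.Floor(du);
                float dv = v - hitV + 0.5f;

                vertices[i] = new VertexPositionNormal(vertices[i].Position, vertices[i].Normal, new Vector2(du, dv));
            }
        }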

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There's a growing gap between 20th-century marketing and the modern marketing way of doing business. I can't think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13: customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In the weeks prior, I heard about the mystique but didn't know what to expect. What I've come to understand with more clarity is that everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind:

    1. Targeting: Really Know Your Buyer
    2. Engagement: Create a 1:1 Relationship
    3. Conversion: Visualize Guided Thinking
    4. Analysis: Learn What's Working
    5. Marketing Technology: Enable and Extend the Cloud

    Product News from Eloqua Experience 2013. We made some announcements that John Stetic, VP of Products, Oracle Eloqua, covers in this brief 'Modern Marketing Minute' video recorded after Wednesday's keynote; they are summarized below, too:

    Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers' wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects they want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities.

    Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers can now take data that's not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect, extending views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua.

    Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar, and simplify resource management.

    Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions.

    Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts; they are easily accessible in the Topliners Community.

    Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile, and engaging Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks.

    Biggest and Best Eloqua Experience. There's a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission to deliver the most advanced and integrated modern marketing technology on the planet. It's not just a concept but a reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year's Eloqua Experience exceptional, including Steve Woods' presentation about the journey of modern marketers and Andrea Ward's conversation with Vince Gilligan, creator of the Breaking Bad television series.

    The 2013 Markie Awards. The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember, with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience.

    More on Erick: 20 years of experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly the cloud. Over the five action-packed days, I got to meet a large number of customers, and most of them had serious interest in all things cloud. Public cloud, particularly the Oracle Cloud, clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations. Well, I shouldn't really call them revelations, since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions. While interest in the enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call a true private cloud, with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization, and they are quite far off from creating the agile, self-service and transparent IT infrastructure that the enterprise cloud is all about. Even the handful who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service aren't things they see an immediate need for. This is in stark contrast to the picture painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud. On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality? Well, the reality is that there is undoubtedly a huge amount of interest among enterprises in transforming their legacy IT environment, which is often seen as too rigid, too fragmented, and ultimately too expensive, into something more agile, transparent and business-focused. At the same time, however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution, and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter in the blogosphere, and it will get even a sane mind spinning. Consequently, most enterprises are still struggling to fully understand the concept and value of the enterprise private cloud. Even among those who have chosen to move forward relatively early, quite a few have made their decisions based more on vendor influence and preferences than on what their businesses actually need. Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends. So what is the way forward? I certainly do not claim to have all the answers, but here is a perspective that many cloud practitioners have found useful and is thus worth sharing. To take a step back, the fundamental premise of the enterprise private cloud is IT transformation: the quest to create a more agile, transparent and efficient IT infrastructure that is driven by business needs rather than constrained by operational and procedural inefficiencies.
It is the new way of delivering and consuming IT services, where IT organizations operate more like enablers of strategic services than gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower end users via self-service access and control, and to give business stakeholders a transparent view of how resources are being used, what the cost of delivering a given service is, how well customers are being served, and so on. But the most important thing to note is that the enterprise private cloud is not just an IT project; rather, it is a business initiative to create an IT setup that is better aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn't be. Just remember how business users have been at the forefront of public cloud adoption within enterprises; private cloud is no exception. Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand in hand with process re-engineering and organizational changes to unlock the true value of the enterprise cloud. I am sharing a short video from my session "Managing your private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on Oracle's enterprise private cloud solution will join me on the "Zero to Cloud" blog and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback and suggestions, and to having an engaging conversation with you on this blog.

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

        kaefert@blechmobil:~$ lsusb -s 2:3
        Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there are some problems that prevent the disk from being mounted:

        kaefert@blechmobil:~$ dmesg | grep sdb
        [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off
        [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
        [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.501649] sdb: sdb1
        [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
        [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
        [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!

    So I went and fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

        e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925). I started this whole thing on Sunday evening (2012-11-04_2200), so about 48 hours ago; this is what htop says about it now (2012-11-06_1900):

        PID  USER PRI NI VIRT  RES   SHR S CPU% MEM% TIME+    Command
        3704 root 39  19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slow, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613, where they write that it's a good idea to see if the disk is just that slow, because maybe it's damaged. I think these outputs tell me that this is not the case here:

        kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec
         Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec

        kaefert@blechmobil:~$ sudo hdparm /dev/sdb
        /dev/sdb:
         multcount = 0 (off)
         readonly = 0 (off)
         readahead = 256 (on)
         geometry = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, judging by tools like gkrellm or iotop, or this:

        kaefert@blechmobil:~$ iostat -x
        Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                 14,24 47,81  14,63   0,95   0,00  22,37
        Device: rrqm/s wrqm/s  r/s  w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
        sda       0,59   8,29 2,42 5,14  43,17 160,17    53,75     0,30 39,80    8,72   54,42  3,95  2,99
        sdb     137,54   5,48 9,23 0,20 587,07  22,73   129,35     0,07  7,70    7,51   16,18  2,17  2,04

    Now I researched a little on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

        kaefert@blechmobil:~$ sudo strace -p3704
        lseek(4, 41026998272, SEEK_SET) = 41026998272
        write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
        lseek(4, 48404766720, SEEK_SET) = 48404766720
        read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
        lseek(4, 41027002368, SEEK_SET) = 41027002368
        write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
        lseek(4, 48404770816, SEEK_SET) = 48404770816
        read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
        lseek(4, 41027006464, SEEK_SET) = 41027006464
        write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
        lseek(4, 48404774912, SEEK_SET) = 48404774912
        read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
        ^CProcess 3704 detached

    There are around 16 of these lines every second, so about 4 read and 4 write operations every second, which I don't consider to be a lot. And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3-terabyte disk, that would give me about 134 days to go, if the speed stays constant and it scans the disk like this completely, and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06_2300), and now it looks like this:

        lseek(4, 1419860611072, SEEK_SET) = 1419860611072
        read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
        lseek(4, 43018145792, SEEK_SET) = 43018145792
        write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
        lseek(4, 1419860615168, SEEK_SET) = 1419860615168
        read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
        lseek(4, 43018149888, SEEK_SET) = 43018149888
        write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
        lseek(4, 1419860619264, SEEK_SET) = 1419860619264
        read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
        lseek(4, 43018153984, SEEK_SET) = 43018153984
        write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the numbers in the lseeks before the reads, like 1419860619264, are already a lot bigger, standing for about 1.29 terabytes if the numbers are bytes, so the progress doesn't seem to be linear on a large scale; maybe there are only some areas that need work, with big gaps between them. (Times are in CET.)

    Read the article

  • IsNumeric() Broken? Only up to a point.

    - by Phil Factor
    In SQL Server, probably the best-known 'broken' function is poor ISNUMERIC(). The documentation says 'ISNUMERIC returns 1 when the input expression evaluates to a valid numeric data type; otherwise it returns 0. ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($).' Although it will take numeric data types (no, I don't understand why either), its main use is supposed to be to test strings to make sure that you can convert them to whatever numeric datatype you are using (int, numeric, bigint, money, smallint, smallmoney, tinyint, float, decimal, or real). It wouldn't actually be of much use anyway, since each datatype has different rules; you actually need a RegEx to do a reasonably safe check. The other snag is that the IsNumeric() function is a bit broken.

        SELECT ISNUMERIC(',')

    This cheerfully returns 1, since it believes that a comma is a currency symbol (not a thousands separator) and you meant to say 0, in this strange currency. However,

        SELECT ISNUMERIC(N'€')

    isn't recognized as currency. '+' and '-' are seen to be numeric, which is stretching it a bit. You'll see that what it allows isn't really broken, except that it doesn't recognize Unicode currency symbols: it just tells you that some numeric type is likely to accept the string if you do an explicit conversion to it. Both of these work fine, so poor IsNumeric has to follow suit:

        SELECT CAST('0E0' AS FLOAT)
        SELECT CAST(',' AS MONEY)

    but it is harder to predict which data type will accept a '+' sign:

        SELECT CAST('+' AS money)   --0.00
        SELECT CAST('+' AS INT)     --0
        SELECT CAST('+' AS numeric)
        /* Msg 8115, Level 16, State 6, Line 4
           Arithmetic overflow error converting varchar to data type numeric. */
        SELECT CAST('+' AS FLOAT)
        /* Msg 8114, Level 16, State 5, Line 5
           Error converting data type varchar to float. */

    So we can begin to say that maybe IsNumeric isn't really broken, but is answering a silly question: 'Is there some numeric datatype to which I can convert this string?'. Almost, but not quite. The bug is that it doesn't understand Unicode currency characters such as the euro or franc, which are actually valid when used in the CAST function (perhaps they're delaying fixing the euro bug just in case it isn't necessary).

        SELECT ISNUMERIC(N'€23.67')      --0
        SELECT CAST(N'€23.67' AS money)  --23.67
        SELECT ISNUMERIC(N'£100.20')     --1
        SELECT CAST(N'£100.20' AS money) --100.20

    Also, the CAST function itself is quirky, in that it cannot convert perfectly reasonable string representations of integers into integers:

        SELECT ISNUMERIC('200,000')   --1
        SELECT CAST('200,000' AS INT)
        /* Msg 245, Level 16, State 1, Line 2
           Conversion failed when converting the varchar value '200,000' to data type int. */

    A more sensible question is 'Is this an integer or decimal number?'. This cuts out a lot of the apparent quirkiness. We do this with the 'E+00' trick; if we want to include floats in the check, we'll need to make it a bit more complicated. Here is a small test-rig:

        SELECT PossibleNumber,
               ISNUMERIC(CAST(PossibleNumber AS NVARCHAR(20)) + 'E+00') AS Hack,
               ISNUMERIC(PossibleNumber + CASE WHEN PossibleNumber LIKE '%E%'
                                               THEN '' ELSE 'E+00' END) AS Hackier,
               ISNUMERIC(PossibleNumber) AS RawIsNumeric
        FROM (SELECT CAST(',' AS NVARCHAR(10)) AS PossibleNumber
              UNION SELECT '£' UNION SELECT '.'
              UNION SELECT '56' UNION SELECT '456.67890'
              UNION SELECT '0E0' UNION SELECT '-'
              UNION SELECT '-' UNION SELECT '.'
              UNION SELECT N'?' UNION SELECT N'¢'
              UNION SELECT N'?' UNION SELECT N'?34.56'
              UNION SELECT '-345' UNION SELECT '3.332228E+09') AS examples

    Which gives the result...

        PossibleNumber  Hack  Hackier  RawIsNumeric
        --------------  ----  -------  ------------
        ?               0     0        0
        -               0     0        1
        ,               0     0        1
        .               0     0        1
        ¢               0     0        1
        £               0     0        1
        ?               0     0        0
        ?34.56          0     0        0
        0E0             0     1        1
        3.332228E+09    0     1        1
        -345            1     1        1
        456.67890       1     1        1
        56              1     1        1

    I suspect that this is as far as you'll get before you abandon IsNumeric in favour of a regex. You can only get part of the way with the LIKE wildcards, because you cannot specify quantifiers. You'll need full-blown regex strings like these...

        [-+]?\b[0-9]+(\.[0-9]+)?\b  #INT or REAL
        [-+]?\b[0-9]{1,3}\b         #TINYINT
        [-+]?\b[0-9]{1,5}\b         #SMALLINT

    ...but even these will fail to catch numbers that are out of range. So is IsNumeric() an out-and-out rogue function? Not really, I'd say, but then it would need a damned good lawyer.
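    If the validation can happen in client code before the string ever reaches SQL Server, the same patterns translate directly. A minimal C# sketch using the INT-or-REAL pattern above, anchored so the whole string must match (the class and method names are illustrative):

        using System.Text.RegularExpressions;

        static class NumericChecks
        {
            // Anchored version of the article's INT/REAL pattern.
            private static readonly Regex IntOrReal =
                new Regex(@"^[-+]?[0-9]+(\.[0-9]+)?$", RegexOptions.Compiled);

            public static bool LooksLikeIntOrReal(string s)
            {
                return s != null && IntOrReal.IsMatch(s);
            }
        }

        // NumericChecks.LooksLikeIntOrReal("456.67890") -> true
        // NumericChecks.LooksLikeIntOrReal("£100.20")   -> false
        // NumericChecks.LooksLikeIntOrReal("200,000")   -> false, just like CAST(... AS INT)

    As with the T-SQL versions, this still accepts digit strings that overflow the target type, so a range check is needed after parsing.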

    Read the article

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs at the compilation stage. It's often the case that I'll typedef an int into several types of IDs, or a vector to a position or velocity:

        typedef int EntityID;
        typedef int ModelID;
        typedef Vector3 Position;
        typedef Vector3 Velocity;

    This can make the intent of code clearer, but after a long night of coding one might make silly mistakes like comparing different kinds of IDs, or adding a position to a velocity:

        EntityID eID;
        ModelID mID;
        if ( eID == mID ) // <- Compiler sees nothing wrong
        { /*bug*/ }

        Position p;
        Velocity v;
        Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong

    Unfortunately, the suggestions I've found for strongly typed typedefs involve using Boost, which at least for me isn't a possibility (I do have C++11, at least). So after a bit of thinking, I came upon this idea and wanted to run it by someone. First, you declare the base type as a template; the template parameter isn't used for anything in the definition, however:

        template < typename T >
        class IDType
        {
            unsigned int m_id;
        public:
            IDType( unsigned int const& i_id ): m_id {i_id} {};
            friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs );
        };

    Friend functions actually need to be forward-declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as:

        class EntityT;
        typedef IDType<EntityT> EntityID;
        class ModelT;
        typedef IDType<ModelT> ModelID;

    The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping someone had comments or critiques about this idea. One issue that came to mind while writing this, in the case of positions and velocities for example, is that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, so I could do:

        typedef float Time;
        typedef Vector3 Position;
        typedef Vector3 Velocity;

        Time t = 1.0f;
        Position p = { 0.0f };
        Velocity v = { 1.0f, 0.0f, 0.0f };
        Position newP = p + v*t;

    with my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position:

        class TimeT;
        typedef Float<TimeT> Time;
        class PositionT;
        typedef Vector3<PositionT> Position;
        class VelocityT;
        typedef Vector3<VelocityT> Velocity;

        Time t = 1.0f;
        Position p = { 0.0f };
        Velocity v = { 1.0f, 0.0f, 0.0f };
        Position newP = p + v*t; // Compiler error

    To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.

    Read the article

  • Rendering a WPF Network Map/Graph layout - Manual? PathListBox? Something Else?

    - by Ben Von Handorf
    I'm writing code to present the user with a simplified network map. At any given time, the map is focused on a specific item, say a router or a server. Based on the focused item, other network entities are grouped into sets (e.g. subnets or domains) and then rendered around the focused item. Lines represent connections, and groups are visually enclosed inside a rectangle or ellipse. Panning and zooming are required features. An item can be selected to display more information in a "properties"-style window. An item can also be double-clicked to re-focus the entire network map on that item, at which point the entire map is re-calculated. I am using MVVM without any framework, as of yet. Assume the logic for grouping items and determining what should be shown or not is all in place. I'm looking for the best way to approach the UI layout. So far, I'm aware of the following options:

    Option 1: Use a Canvas for layout (inside a ScrollViewer to handle the panning). Have my ViewModel make use of a LayoutManager type of class, which would handle assigning all the layout properties (Top, Left, etc.); a rough sketch of this idea appears below. Bind my set of display items to an ItemsControl and use DataTemplates to handle the actual rendering.
    The drawbacks of this approach: highly manual layout on my part; lots of calculation; I have to handle item selection manually; computation of connecting lines is manual.
    The pros of this approach: I can draw additional lines between child subnets as appropriate (manually); additional LayoutManagers could be added later to render the display differently; this could probably be wrapped up into some sort of GraphLayout control to be re-used.

    Option 2: Present the focused item at the center of the display and then use a PathListBox for layout of the additional items. Have my ViewModel expose a simple list of things to be drawn and bind them to the PathListBox. Override the ListBoxItem template to also create a line geometry from the borders of the focused item (tricky) to the bound item. Use DataTemplates to handle the case where the item being bound is a subnet, in which case we would use another PathListBox in the template to display the items inside the subnet.
    The drawbacks of this approach: selected-item synchronization across multiple `PathListBox`es; only one item on the whole graph can be selected at a time, but each child PathListBox maintains its own selection. Also, subnets cannot be selected, but would be selectable without additional work. Drawing the connecting lines is going to be a bit of trickery in the ListBoxItem template, since I need to know the correct side of the focused item to connect to.
    The pros of this approach: I get to stay out of the layout business, more or less.

    I'm looking for any advice or thoughts from others who have encountered similar issues or who have more WPF experience than I have. I'm using WPF 4, so any new tricks are legal and encouraged.
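    For the first option, here is a rough sketch of what the LayoutManager idea might look like (all type and property names are hypothetical, not from an existing framework): the focused node sits at the origin, the grouped items are spread on a circle around it, and the resulting coordinates are bound to Canvas.Left/Canvas.Top through the ItemsControl's ItemContainerStyle.

        using System;
        using System.Collections.Generic;

        // Minimal stand-in for the real view model; the real one would
        // implement INotifyPropertyChanged so the Canvas bindings update.
        public class NodeViewModel
        {
            public double Left { get; set; }
            public double Top { get; set; }
        }

        // Hypothetical sketch of a pluggable layout strategy for the Canvas approach.
        public class RadialLayoutManager
        {
            public void Arrange(NodeViewModel focus, IList<NodeViewModel> neighbors, double radius)
            {
                focus.Left = 0;
                focus.Top = 0;

                for (int i = 0; i < neighbors.Count; i++)
                {
                    // Distribute the neighbors evenly on a circle around the focus.
                    double angle = 2 * Math.PI * i / neighbors.Count;
                    neighbors[i].Left = radius * Math.Cos(angle);
                    neighbors[i].Top = radius * Math.Sin(angle);
                }
            }
        }

    Swapping in a different Arrange implementation later (tree, grid, force-directed) is then just a matter of adding another manager class, which matches the "additional LayoutManagers could be added later" point above.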

    Read the article

  • JavaFX data bind not working with file read

    - by rjha94
    Hi, I am using a very simple JavaFX client to upload files. I read the file in chunks of 1 MB (using a ByteBuffer) and upload each chunk with a multipart POST to a PHP script. I want to update the progress bar of my client to show progress after each chunk is uploaded. The calculations for upload progress look correct, but the progress bar is not updated. I am using the bind keyword. I am no JavaFX expert, and I am hoping someone can point out my mistake. I am writing this client to fix the issues posted here: http://stackoverflow.com/questions/2447837/upload-1gb-files-using-chunking-in-php

        /*
         * Main.fx
         *
         * Created on Mar 16, 2010, 1:58:32 PM
         */
        package webgloo;

        import javafx.stage.Stage;
        import javafx.scene.Scene;
        import javafx.scene.layout.VBox;
        import javafx.geometry.VPos;
        import javafx.scene.control.Button;
        import javafx.scene.control.Label;
        import javafx.scene.layout.HBox;
        import javafx.scene.layout.LayoutInfo;
        import javafx.scene.text.Font;
        import javafx.scene.control.ProgressBar;
        import java.io.FileInputStream;

        /**
         * @author rajeev jha
         */
        var totalBytes:Float = 1;
        var bytesWritten:Float = 0;
        var progressUpload:Float;
        var uploadURI = "http://www.test1.com/test/receiver.php";
        var postMax = 1024000;

        function uploadFile(inputFile: java.io.File) {
            totalBytes = inputFile.length();
            bytesWritten = 1;
            println("To-Upload - {totalBytes}");
            var is = new FileInputStream(inputFile);
            var fc = is.getChannel();
            //1 MB byte buffer
            var chunkCount = 0;
            var bb = java.nio.ByteBuffer.allocate(postMax);
            while(fc.read(bb) >= 0){
                println("loop:start");
                bb.flip();
                var limit = bb.limit();
                var bytes = GigaFileUploader.getBufferBytes(bb.array(), limit);
                var content = GigaFileUploader.createPostContent(inputFile.getName(), bytes);
                GigaFileUploader.upload(uploadURI, content);
                bytesWritten = bytesWritten + limit;
                progressUpload = 1.0 * bytesWritten / totalBytes;
                println("Progress is - {progressUpload}");
                chunkCount++;
                bb.clear();
                println("loop:end");
            }
        }

        var label = Label {
            font: Font { size: 12 }
            text: bind "Uploaded - {bytesWritten * 100 / (totalBytes)}%"
            layoutInfo: LayoutInfo {
                vpos: VPos.CENTER
                maxWidth: 120 minWidth: 120 width: 120 height: 30
            }
        }

        def jFileChooser = new javax.swing.JFileChooser();
        jFileChooser.setApproveButtonText("Upload");

        var button = Button {
            text: "Upload"
            layoutInfo: LayoutInfo { width: 100 height: 30 }
            action: function () {
                var outputFile = jFileChooser.showOpenDialog(null);
                if (outputFile == javax.swing.JFileChooser.APPROVE_OPTION) {
                    uploadFile(jFileChooser.getSelectedFile());
                }
            }
        }

        var hBox = HBox { spacing: 10 content: [label, button] }

        var progressBar = ProgressBar {
            progress: bind progressUpload
            layoutInfo: LayoutInfo { width: 240 height: 30 }
        }

        var vBox = VBox {
            spacing: 10
            content: [hBox, progressBar]
            layoutX: 10
            layoutY: 10
        }

        Stage {
            title: "Upload File"
            width: 270
            height: 120
            scene: Scene { content: [vBox] }
            resizable: false
        }

    Read the article

  • Deadlock problem

    - by DoomStone
    Hello, I'm having a deadlock problem with the following code. It happens when I call the function getMap(), but I can't really see what could cause it.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Threading;
        using AForge;
        using AForge.Imaging;
        using AForge.Imaging.Filters;
        using AForge.Imaging.Textures;
        using AForge.Math.Geometry;

        namespace CDIO.Library
        {
            public class Polygon
            {
                List<IntPoint> hull;

                public Polygon(List<IntPoint> hull)
                {
                    this.hull = hull;
                }

                public bool inPoly(int x, int y)
                {
                    int i, j = hull.Count - 1;
                    bool oddNodes = false;

                    for (i = 0; i < hull.Count; i++)
                    {
                        if (hull[i].Y < y && hull[j].Y >= y || hull[j].Y < y && hull[i].Y >= y)
                        {
                            try
                            {
                                if (hull[i].X + (y - hull[i].X) / (hull[j].X - hull[i].X) * (hull[j].X - hull[i].X) < x)
                                {
                                    oddNodes = !oddNodes;
                                }
                            }
                            catch (DivideByZeroException e)
                            {
                                if (0 < x)
                                {
                                    oddNodes = !oddNodes;
                                }
                            }
                        }
                        j = i;
                    }
                    return oddNodes;
                }

                public Rectangle getRectangle()
                {
                    int x = -1, y = -1, width = -1, height = -1;
                    foreach (IntPoint item in hull)
                    {
                        if (item.X < x || x == -1)
                            x = item.X;
                        if (item.Y < y || y == -1)
                            y = item.Y;
                        if (item.X > width || width == -1)
                            width = item.X;
                        if (item.Y > height || height == -1)
                            height = item.Y;
                    }
                    return new Rectangle(x, y, width - x, height - y);
                }

                public Point[] getMap()
                {
                    List<Point> points = new List<Point>();
                    lock (hull)
                    {
                        Rectangle rect = getRectangle();
                        for (int x = rect.X; x <= rect.X + rect.Width; x++)
                        {
                            for (int y = rect.Y; y <= rect.Y + rect.Height; y++)
                            {
                                if (inPoly(x, y))
                                    points.Add(new Point(x, y));
                            }
                        }
                    }
                    return points.ToArray();
                }

                public float calculateArea()
                {
                    List<IntPoint> list = new List<IntPoint>();
                    list.AddRange(hull);
                    list.Add(hull[0]);

                    float area = 0.0f;
                    for (int i = 0; i < hull.Count; i++)
                    {
                        area += list[i].X * list[i + 1].Y - list[i].Y * list[i + 1].X;
                    }
                    area = area / 2;
                    if (area < 0)
                        area = area * -1;
                    return area;
                }
            }
        }
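    One thing worth ruling out, offered only as a guess since the rest of the locking code isn't shown: getMap() takes lock (hull) on the list itself, and that list instance is handed in through the constructor, so any other code that still holds a reference to it can lock on it too, and the two locks can end up waiting on each other. Locking on a private object that never leaves the class at least removes that possibility:

        // Sketch: a dedicated lock object that no outside code can ever lock on.
        private readonly object hullLock = new object();

        public Point[] getMap()
        {
            List<Point> points = new List<Point>();
            lock (hullLock)
            {
                Rectangle rect = getRectangle();
                for (int x = rect.X; x <= rect.X + rect.Width; x++)
                    for (int y = rect.Y; y <= rect.Y + rect.Height; y++)
                        if (inPoly(x, y))
                            points.Add(new Point(x, y));
            }
            return points.ToArray();
        }

    Every other member that touches hull would need to take the same hullLock, always in the same order relative to any other locks in the application.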

    Read the article

  • Imitating a mouse click at a point with known coordinates on a fusion table layer - Google Maps

    - by Yavor Tashev
    I have been making a script using a Fusion Tables layer in Google Maps. I am using the geocoder to get the coordinates of a point that I need. I have a script that changes the style of a polygon from the fusion table when you click on it, using:

        google.maps.event.addListener(layer, "click", function(e) {});

    I would like to use the same function that I call when the layer is clicked, but this time triggered with the coordinates that I got. I have tried:

        google.maps.event.trigger(map, 'click', {latLng: new google.maps.LatLng(42.701487,26.772308)});

    as well as the example here: Google Fusion Table Double Click (dblClick). I have tried changing map to layer... I am sorry if my question is quite stupid, but I have tried many options. P.S. I have seen many posts about getting the info from the table, but I do not need that. I want to change the style of the KML element in the selected row, so I do not see it happening via a query. Here is the model of my script:

        function initialize() {
            geocoder = new google.maps.Geocoder();
            map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
            layer = new google.maps.FusionTablesLayer({
                suppressInfoWindows: true,
                map: map,
                query: {
                    select: '??????????????',
                    from: '12ZoroPjIfBR4J-XwM6Rex7LmfhzCDJc9_vyG5SM'
                }
            });
            google.maps.event.addListener(layer, "click", function(e) {
                SmeniStilRaionni(layer, e);
                marker.setMap(null);
            });
        }

        function SmeniStilRaionni(layer, e) { ... }

        function showAddress(address) {
            geocoder.geocode({ 'address': address }, function(results, status) {
                if (status == google.maps.GeocoderStatus.OK) {
                    var point = results[0].geometry.location;
                    //IMITATE THE CLICK
                }
            });
        }

    In response to geocodezip: that way you hide all the other elements, and I do not wish that; it is as if I only want to change the border of the selected element. And I do not want a new layer. In the function I use now, I push the style into the options of the layer and then set the options. I use the e from google.maps.event.addListener(layer, "click", function(e)); by inserting e.row['Name'].value inside the where rule. I would like to ask if there is any info on the e variable in google.maps.event.addListener(layer, "click", function(e));. I found out how to get the results I wanted: for my query, after I get the point, I use this:

        var queryText = "SELECT '??????? ???','??????? ???','?????????? ???','??????????????' FROM " + FusionTableID +
                        " WHERE ST_INTERSECTS(\'??????????????\', CIRCLE(LATLNG(" + point.toUrlValue(6) + "),0.5));";
        queryText = encodeURIComponent(queryText);
        document.getElementById("vij query").innerHTML = queryText;
        var query = new google.visualization.Query('http://www.google.com/fusiontables/gvizdata?tq=' + queryText);

    And then I get these results:

        var rsyd = response.getDataTable().getValue(0,0);
        var osyd = response.getDataTable().getValue(0,1);
        var apsyd = response.getDataTable().getValue(0,2);

    And then I use the following:

        where: "'??????? ???' = '" + rsyd + "'",

    which is the same as:

        where: "'??????? ???' = '" + e.row['??????? ???'].value + "'",

    in the click function. This is a working solution for my problem, but I still cannot find a way to imitate a mouse click.

    Read the article

  • Algorithm for dragging objects on a fixed grid

    - by FlyingStreudel
    Hello, I am working on a program for the mapping and playing of the popular tabletop game D&D. :D Right now I am working on basic functionality like dragging UI elements around, snapping to the grid and checking for collisions. At the moment, every object, when released from the mouse, immediately snaps to the nearest grid point. This causes an issue when something like a player object snaps to a grid point that has a wall (or other object) adjacent, so essentially when the player is dropped they wind up with some of the wall covering them. This is fine and working as intended; however, the problem is that my collision detection is now tripped whenever you try to move this player, because it's sitting underneath a wall, and because of this you can't drag the player anymore. Here is the relevant code:

        void UIObj_MouseMove(object sender, MouseEventArgs e)
        {
            blocked = false;
            if (dragging)
            {
                foreach (UIElement o in ((Floor)Parent).Children)
                {
                    if (o.GetType() != GetType() && o.GetType().BaseType == typeof(UIObj) &&
                        Math.Sqrt(Math.Pow(((UIObj)o).cX - cX, 2) + Math.Pow(((UIObj)o).cY - cY, 2)) <
                        Math.Max(r.Height + ((UIObj)o).r.Height, r.Width + ((UIObj)o).r.Width))
                    {
                        double Y = e.GetPosition((Floor)Parent).Y;
                        double X = e.GetPosition((Floor)Parent).X;
                        Geometry newRect = new RectangleGeometry(new Rect(Margin.Left + (X - prevX), Margin.Top + (Y - prevY),
                                                                          Margin.Right + (X - prevX), Margin.Bottom + (Y - prevY)));
                        GeometryHitTestParameters ghtp = new GeometryHitTestParameters(newRect);
                        VisualTreeHelper.HitTest(o, null, new HitTestResultCallback(MyHitTestResultCallback), ghtp);
                    }
                }
                if (!blocked)
                {
                    Margin = new Thickness(Margin.Left + (e.GetPosition((Floor)Parent).X - prevX),
                                           Margin.Top + (e.GetPosition((Floor)Parent).Y - prevY),
                                           Margin.Right + (e.GetPosition((Floor)Parent).X - prevX),
                                           Margin.Bottom + (e.GetPosition((Floor)Parent).Y - prevY));
                    InvalidateVisual();
                }
                prevX = e.GetPosition((Floor)Parent).X;
                prevY = e.GetPosition((Floor)Parent).Y;
                cX = Margin.Left + r.Width / 2;
                cY = Margin.Top + r.Height / 2;
            }
        }

        internal virtual void SnapToGrid()
        {
            double xPos = Margin.Left;
            double yPos = Margin.Top;
            double xMarg = xPos % ((Floor)Parent).cellDim;
            double yMarg = yPos % ((Floor)Parent).cellDim;
            if (xMarg < ((Floor)Parent).cellDim / 2)
            {
                if (yMarg < ((Floor)Parent).cellDim / 2)
                {
                    Margin = new Thickness(xPos - xMarg, yPos - yMarg,
                                           xPos - xMarg + r.Width, yPos - yMarg + r.Height);
                }
                else
                {
                    Margin = new Thickness(xPos - xMarg, yPos - yMarg + ((Floor)Parent).cellDim,
                                           xPos - xMarg + r.Width, yPos - yMarg + ((Floor)Parent).cellDim + r.Height);
                }
            }
            else
            {
                if (yMarg < ((Floor)Parent).cellDim / 2)
                {
                    Margin = new Thickness(xPos - xMarg + ((Floor)Parent).cellDim, yPos - yMarg,
                                           xPos - xMarg + ((Floor)Parent).cellDim + r.Width, yPos - yMarg + r.Height);
                }
                else
                {
                    Margin = new Thickness(xPos - xMarg + ((Floor)Parent).cellDim, yPos - yMarg + ((Floor)Parent).cellDim,
                                           xPos - xMarg + ((Floor)Parent).cellDim + r.Width, yPos - yMarg + ((Floor)Parent).cellDim + r.Height);
                }
            }
        }

    Essentially, I am looking for a simple way to modify the existing code to allow the movement of a UI element that has another one sitting on top of it. Thanks!
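    One relatively simple way to get this behaviour, sketched under the assumption that there is a point where the drag begins (the mouse-down handler): record which elements already overlap the dragged object at that moment, and skip exactly those in the collision loop. The Overlaps helper below is hypothetical, but it only uses the cX, cY and r members that already exist (it also needs using System.Collections.Generic;).

        // Elements that were already overlapping this object when the drag began;
        // these are exempt from collision checks for the duration of the drag.
        private HashSet<UIElement> ignoredAtDragStart = new HashSet<UIElement>();

        private void BeginDrag()
        {
            ignoredAtDragStart.Clear();
            foreach (UIElement o in ((Floor)Parent).Children)
            {
                UIObj other = o as UIObj;
                if (other != null && other != this && Overlaps(other))
                    ignoredAtDragStart.Add(o);
            }
        }

        // Hypothetical axis-aligned overlap test built from the existing fields.
        private bool Overlaps(UIObj other)
        {
            return Math.Abs(other.cX - cX) < (r.Width + other.r.Width) / 2
                && Math.Abs(other.cY - cY) < (r.Height + other.r.Height) / 2;
        }

    Then the first statement inside the foreach in UIObj_MouseMove becomes a skip check: if (ignoredAtDragStart.Contains(o)) continue;. The object can then be dragged out from under the wall, and normal collision detection resumes against everything else (and against the wall itself once they no longer overlap, if the set is cleared on mouse-up).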

    Read the article

  • How to draw complex shape from code behind for custom control in resource dictionary

    - by HopelessCoder
    Hi, I am new to WPF and am having a problem which may or may not be trivial. I have defined a custom control as follows in the resource dictionary:

        <ResourceDictionary x:Class="SyringeSlider.Themes.Generic"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:local="clr-namespace:SyringeSlider">
            <Style TargetType="{x:Type local:CustomControl1}">
                <Setter Property="Template">
                    <Setter.Value>
                        <ControlTemplate TargetType="{x:Type local:CustomControl1}">
                            <Border Background="{TemplateBinding Background}"
                                    BorderBrush="{TemplateBinding BorderBrush}"
                                    BorderThickness="{TemplateBinding BorderThickness}">
                                <Canvas Height="{TemplateBinding Height}" Width="{TemplateBinding Width}" Name="syringeCanvas">
                                </Canvas>
                            </Border>
                        </ControlTemplate>
                    </Setter.Value>
                </Setter>
            </Style>
        </ResourceDictionary>

    Unfortunately, I cannot go beyond this, because I would like to draw a geometry onto the canvas consisting of a set of multiple line geometries whose dimensions are calculated as a function of the space available in the canvas. I believe I need a code-behind method to do this, but have not been able to determine how to link the XAML definition to a code-behind method. Note that I have set up a class, x:Class="SyringeSlider.Themes.Generic", specifically for this purpose, but can't figure out which Canvas property to link the drawing method to. My drawing method looks like this:

        private void CalculateSyringe()
        {
            int adjHeight = (int)Height - 1;
            int adjWidth = (int)Width - 1;

            // Calculate some very useful values based on the chart above.
            int borderOffset = (int)Math.Floor(m_borderWidth / 2.0f);
            int flangeLength = (int)(adjHeight * .05f);
            int barrelLeftCol = (int)(adjWidth * .10f);
            int barrelLength = (int)(adjHeight * .80);
            int barrelRightCol = adjWidth - barrelLeftCol;
            int coneLength = (int)(adjHeight * .10);
            int tipLeftCol = (int)(adjWidth * .45);
            int tipRightCol = adjWidth - tipLeftCol;
            int tipBotCol = adjWidth - borderOffset;

            Path mySyringePath = new Path();
            PathGeometry mySyringeGeometry = new PathGeometry();
            PathFigure mySyringeFigure = new PathFigure();
            mySyringeFigure.StartPoint = new Point(0, 0);

            Point pointA = new Point(0, flangeLength);
            mySyringeFigure.Segments.Add(new LineSegment(pointA, true));

            Point pointB = new Point();
            pointB.Y = pointA.Y + barrelLength;
            pointB.X = 0;
            mySyringeFigure.Segments.Add(new LineSegment(pointB, true));

            // You get the idea... add more points in this way.

            mySyringeGeometry.Figures.Add(mySyringeFigure);
            mySyringePath.Data = mySyringeGeometry;
        }

    So my question is: 1) does what I am trying to do make any sense? 2) Can I use a canvas for this purpose? If not, what are my other options? Thanks!
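    On the second question: the usual way to connect a drawing routine to a canvas that lives in a template is not through the resource-dictionary class at all, but through the control class itself. Give the canvas a part name in the template (e.g. x:Name="PART_SyringeCanvas"), then override OnApplyTemplate in CustomControl1 and fetch it with GetTemplateChild. A minimal sketch, where the part name is an assumption:

        using System.Windows;
        using System.Windows.Controls;

        [TemplatePart(Name = "PART_SyringeCanvas", Type = typeof(Canvas))]
        public class CustomControl1 : Control
        {
            private Canvas syringeCanvas;

            public override void OnApplyTemplate()
            {
                base.OnApplyTemplate();

                // Grab the named canvas out of the applied template.
                syringeCanvas = GetTemplateChild("PART_SyringeCanvas") as Canvas;
                if (syringeCanvas != null)
                    DrawSyringe();
            }

            private void DrawSyringe()
            {
                // CalculateSyringe() from above would run here, sized against
                // syringeCanvas.ActualWidth/ActualHeight, and finish with:
                // syringeCanvas.Children.Add(mySyringePath);
            }
        }

    Recomputing on size changes can be handled by also subscribing to the control's SizeChanged event inside OnApplyTemplate.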

    Read the article

  • QUiLoader and ignored dynamic properties

    - by Googie
    I'm loading a .ui file in which one of the widgets (a QComboBox) has a dynamic property (http://qt-project.org/doc/qt-5/properties.html#dynamic-properties). The UI file looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <ui version="4.0">
         <class>PopulateScriptConfig</class>
         <widget class="QWidget" name="PopulateScriptConfig">
          <property name="geometry">
           <rect>
            <x>0</x>
            <y>0</y>
            <width>400</width>
            <height>300</height>
           </rect>
          </property>
          <property name="windowTitle">
           <string>Form</string>
          </property>
          <layout class="QVBoxLayout" name="verticalLayout">
           <item>
            <widget class="QGroupBox" name="langGroup">
             <property name="title">
              <string>Language</string>
             </property>
             <layout class="QVBoxLayout" name="verticalLayout_3">
              <item>
               <widget class="QComboBox" name="langCombo">
                <property name="ScriptingLangCombo" stdset="0">
                 <bool>true</bool>
                </property>
               </widget>
              </item>
             </layout>
            </widget>
           </item>
           <item>
            <widget class="QGroupBox" name="codeGroup">
             <property name="title">
              <string>Implementation</string>
             </property>
             <layout class="QVBoxLayout" name="verticalLayout_2">
              <item>
               <widget class="QPlainTextEdit" name="codeEdit"/>
              </item>
             </layout>
            </widget>
           </item>
          </layout>
         </widget>
         <resources/>
         <connections/>
        </ui>

    The important part is:

        <widget class="QComboBox" name="langCombo">
         <property name="ScriptingLangCombo" stdset="0">
          <bool>true</bool>
         </property>
        </widget>

    I'm loading the file with QUiLoader::load(). Note that I have extended the QUiLoader class, but only to get access to the createWidget() method, where I can query each widget like this:

        QWidget* UiLoader::createWidget(const QString& className, QWidget* parent, const QString& name)
        {
            QWidget* w = QUiLoader::createWidget(className, parent, name);
            qDebug() << w->dynamicPropertyNames();
            return w;
        }

    As a result I see an empty list displayed, so it seems like the dynamic property is completely ignored. Any ideas? P.S. I've made sure that I'm loading the correct file. Three times.

    Read the article

  • Deploying MVC2 application to IIS7.5 - Ninject asked to provide controllers for content files

    - by Rune Jacobsen
    I have an application that started life as an MVC (1.0) app in Visual Studio 2008 SP1, with a bunch of Silverlight 3 projects as part of the site. Nothing fancy at all. I'm using Ninject for dependency injection (first version 2 beta, now the released version 2 with the MVC extensions). With the release of .NET 4.0, VS2010, MVC2 etc., we decided to move the application to the newest platform. The conversion wizard in VS2010 apparently took care of everything, with one exception: it didn't change the MVC 1 references to point to MVC 2, so I had to do that manually. Of course, this makes me wonder what other MVC2 things could be missing from my app that would be there if I did File - New Project... But that is not the focus of this question. When I deploy this application to the IIS 7.5 server (running on Win2008 R2 x64), the application as such works. However, images, scripts and other static content don't seem to exist. Of course they are there on disk on the server, but they don't show up in the client web browser. I am fairly new to IIS, so the only trick I knew was to open the web page in a browser on the server, as that could give me more information. And here, finally, we meet our enemy. If I try to go directly to the URL of one of the images (http://server/Content/someimage.jpg for instance), I get the following error in the browser: "The IControllerFactory 'Ninject.Web.Mvc.NinjectControllerFactory' did not return a controller for a controller named 'Content'." Aha. The web server tries to feed this request to MVC, which, with its default routing setup, assumes Content to be a controller, and fails. How can I get it to treat Content/ and Scripts/ (among others) as non-controllers and just pass the static content through? This of course works with Cassini on my developer machine, but as soon as I deploy, this problem hits. I am using the latest version of Ninject MVC 2, where the IoC tool should pass missing controllers to the base controller factory, but this has apparently not helped. I have also tried adding ignore routes for Content etc., but that had no effect either (the usual shape of those rules is shown below, for reference). I am not even sure I am addressing the problem at the right level. Does anyone know where to look to get this app going? I have full control of the web server, so I can more or less do whatever I want to it, as long as it starts working. Thanks!
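    For reference, a typical shape for those ignore rules in Global.asax (this is standard System.Web.Routing/System.Web.Mvc, not Ninject-specific), in case the earlier attempt differed:

        using System.Web.Mvc;
        using System.Web.Routing;

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Let static folders and .axd handlers bypass MVC routing entirely.
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("Content/{*pathInfo}");
            routes.IgnoreRoute("Scripts/{*pathInfo}");

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }

    If static files are still being handed to MVC with rules like these in place, the next place to look is the IIS 7.5 configuration for the site, in particular the integrated-pipeline module settings in web.config (such as runAllManagedModulesForAllRequests), since that determines whether requests for existing files on disk reach the managed routing module at all.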

    Read the article

  • Xuggle codec identification fail

    - by Thiago
    Hi there, I'm trying to run the following Xuggle code:

        public static boolean convert(String stream) throws IOException, InterruptedException {
            IMediaReader reader = ToolFactory.makeReader(stream + ".flv");
            IMediaWriter writer = ToolFactory.makeWriter(stream + ".mp3", reader);
            reader.addListener(writer);
            while (reader.readPacket() != null)
                ;
            return true;
        }

    where stream is the file path. But I'm getting the following exception:

        java.lang.IllegalArgumentException: could not find input codec id
            at com.xuggle.xuggler.IContainerFormat.establishOutputCodecId(IContainerFormat.java:393)
            at com.xuggle.xuggler.IContainerFormat.establishOutputCodecId(IContainerFormat.java:327)
            at com.xuggle.xuggler.IContainerFormat.establishOutputCodecId(IContainerFormat.java:300)
            at com.xuggle.mediatool.MediaWriter.addStreamFromContainer(MediaWriter.java:1141)
            at com.xuggle.mediatool.MediaWriter.getStream(MediaWriter.java:1046)
            at com.xuggle.mediatool.MediaWriter.encodeAudio(MediaWriter.java:837)
            at com.xuggle.mediatool.MediaWriter.onAudioSamples(MediaWriter.java:1448)
            at com.xuggle.mediatool.AMediaToolMixin.onAudioSamples(AMediaToolMixin.java:89)
            at com.xuggle.mediatool.MediaReader.dispatchAudioSamples(MediaReader.java:628)
            at com.xuggle.mediatool.MediaReader.decodeAudio(MediaReader.java:555)
            at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:469)
            at <package_name>MediaFile.convert(MediaFile.java:66)
            at <package_name>.MediaFileTest.shouldConvertExistingFLV(MediaFileTest.java:32)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
            at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
            at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
            at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
            at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
            at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
            at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
            at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
            at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
            at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
            at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
            at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

    The unit test is the following:

        @Test
        public void shouldConvertExistingFLV() throws IOException, InterruptedException {
            String str = "C:\\Program Files\\Wowza Media Systems\\Wowza Media Server 2\\content\\Extremists"; // "c:/temp/corneta.flv"
            boolean result = MediaFile.convert(str);
            assertTrue(result);
        }

    So why can't it find the codec, since I'm testing a simple FLV to MP3 conversion?

    Read the article

  • F#: Advantages of converting top-level functions to member methods?

    - by J Cooper
    Earlier I requested some feedback on my first F# project. Before the question was closed because the scope was too large, someone was kind enough to look it over and leave some feedback. One of the things they mentioned was that I had a number of regular functions that could be converted into member methods on my datatypes. Dutifully, I went through changing things like:

        let getDecisions hand =
            let (/=/) card1 card2 = matchValue card1 = matchValue card2
            let canSplit() =
                let isPair() =
                    match hand.Cards with
                    | card1 :: card2 :: [] when card1 /=/ card2 -> true
                    | _ -> false
                not (hasState Splitting hand) && isPair()
            let decisions = [Hit; Stand]
            let split = if canSplit() then [Split] else []
            let doubleDown = if hasState Initial hand then [DoubleDown] else []
            decisions @ split @ doubleDown

    to this:

        type Hand =
            // ...stuff...
            member hand.GetDecisions =
                let (/=/) (c1 : Card) (c2 : Card) = c1.MatchValue = c2.MatchValue
                let canSplit() =
                    let isPair() =
                        match hand.Cards with
                        | card1 :: card2 :: [] when card1 /=/ card2 -> true
                        | _ -> false
                    not (hand.HasState Splitting) && isPair()
                let decisions = [Hit; Stand]
                let split = if canSplit() then [Split] else []
                let doubleDown = if hand.HasState Initial then [DoubleDown] else []
                decisions @ split @ doubleDown

    Now, I don't doubt I'm an idiot, but other than (I'm guessing) making C# interop easier, what did that gain me? Specifically, I found a couple of *dis*advantages, not counting the extra work of conversion (which I won't count, since I could have done it this way in the first place, I suppose, although that would have made using F# Interactive more of a pain). For one thing, I'm no longer able to work with function "pipelining" easily: I had to go and change some |> chained |> calls to (some |> chained).Calls etc. Also, it seemed to make my type system dumber; whereas with my original version my program needed no type annotations, after converting largely to member methods I got a bunch of errors about lookups being indeterminate at that point, and I had to go and add type annotations (an example of this is in the (/=/) above). I hope I haven't come off too dubious, as I appreciate the advice I received, and writing idiomatic code is important to me. I'm just curious why the idiom is the way it is :) Thanks!

    Read the article
