Search Results

Search found 8446 results on 338 pages for 'dynamic cast'.


  • Are you cashing in on the MVP complimentary subscriptions?

    - by Tarun Arora
    The two most asked questions in the Microsoft technology communities around the Microsoft MVP program are: 1. How do I become a Microsoft MVP? 2. What benefits do I get as an MVP? The answer to the first question has been well answered here. In this blog post, I'll try to answer the second question. Please find below a comprehensive list of Not for Resale personal subscriptions of various products that Microsoft MVPs are eligible for (Product | Description | Details):
      - JetBrains | ReSharper, dotTrace, dotCover & WebStorm | https://www.jetbrains.com/resharper/buy/mvp.html
      - RedGate | SQL Server development, database administration, .NET development, Azure development (merged with Cerebrata), MySQL development, Oracle development | http://www.red-gate.com/community/mvp-program
      - Pluralsight | Pluralsight on-demand training | http://blog.pluralsight.com/2011/02/28/pluralsight-for-mvp/
      - Cerebrata | Cloud Storage Studio and Azure Diagnostic Manager (part of Red Gate now) | https://www.cerebrata.com/Offers/mvp.aspx
      - Telerik | Telerik Ultimate Collection & Telerik TeamPulse | http://blogs.telerik.com/blogs/posts/11-03-01/telerik-gift-for-microsoft-mvps.aspx
      - Developer Express | DevExpress controls | http://www.devexpress.com/Home/Community/mvp.xml
      - InnerWorkings | 600 hours of .NET training catalogue | http://www.innerworkings.com/mvp
      - Typemock | Typemock Isolator, Typemock Isolator for SharePoint developers, Typemock Isolator for web developers, TestDriven.NET | http://www.typemock.com/mvp
      - SpeakFlow | A suite of tools for creating, managing, and delivering non-linear presentations | http://www.speakflow.com/
      - TechSmith | Camtasia Studio, SnagIt, screencasting tools | http://www.techsmith.com/camtasia.html
      - Altova | Altova XMLSpy | http://www.altova.com/xml-editor/
      - VisualSVN | VisualSVN Subversion integration plug-in for Visual Studio | http://www.visualsvn.com/visualsvn/purchase/mvp/
      - PreEmptive Solutions | Professional PreEmptive Analytics, Dotfuscator | http://www.preemptive.com/landing/mvp
      - Armadillo | Armadillo Adaptive Bug Prevention | http://www.armadilloverdrive.com/
      - IS Decisions | NFR licenses for UserLock, RemoteExec, FileAudit & WinReporter | http://www.isdecisions.com/download/mvp-mct-program.htm
      - Idera | SQL tools | http://www.idera.com/Content/Home.aspx
      - West Wind Help Builder | Help builder solution | http://www.west-wind.com/weblog/posts/2005/Mar/09/Are-you-a-Microsoft-MVP-Get-a-FREE-copy-of-West-Wind-Html-Help-Builder
      - Bamboo | SharePoint tools | http://community.bamboosolutions.com/blogs/partner-advantage-program/archive/2008/08/01/partner-advantage-program-mvp.aspx
      - Nitriq | Nitriq code analysis | http://blog.nitriq.com/FreeLicensesForMicrosoftMVPs.aspx
      - ByteScout | Components, libraries and developer tools | http://bytescout.com/buy/purchase_nfr_for_mvp.html
      - YourKit | Java and .NET profiler | http://yourkit.com/.net/profiler/index.jsp
      - Aspose | .NET components | http://www.aspose.com/corporate/community/2012_05_08_nfr-licenses-for-community-leaders.aspx
    Apart from Google/Bing-fu, StackOverflow and breathtech were a great help in compiling the above list. If you know of any other benefits, offers or complimentary subscriptions on offer for MVPs not covered in the list above, please add to the comment thread and I'll have the list updated. Enjoy!

    Read the article

  • SQL SERVER – Fix Error: Microsoft OLE DB Provider for SQL Server error '80040e07' or Microsoft SQL Native Client error '80040e07'

    - by pinaldave
    I quite often receive questions from users looking for a solution to the following error: Microsoft OLE DB Provider for SQL Server error '80040e07' Syntax error converting datetime from character string. OR Microsoft SQL Native Client error '80040e07' Syntax error converting datetime from character string. If you have ever faced the above error – I have a very simple solution for you. The solution is to carefully check the date being inserted into the datetime column. This error usually comes up when an application or user attempts to enter an incorrect date into a datetime field. Here is one example – one of the readers was using a classic ASP application with the OLE DB provider for SQL Server. When he tried to run the following script he faced the above-mentioned error. INSERT INTO TestTable (ID, MyDate) VALUES (1, '01-Septeber-2013') The reason for the error was simple: he had misspelled the word September. Once the spelling was corrected, he was able to insert the value successfully and the error was gone. Incorrect values or typos are not the only reason for this error. There can be issues with CAST or CONVERT as well. If you attempt the following code using SQL Native Client, or in your application, you will also get a similar error. SELECT CONVERT (datetime, '01-Septeber-2013', 112) The reason here is very simple: any conversion attempt, or any other kind of operation, on an incorrect date/time string can lead to the above error. If you are not using embedded dynamic code in your application language but attempt a similar operation on an incorrect datetime string directly in T-SQL, you will get the following error. Msg 241, Level 16, State 1, Line 2 Conversion failed when converting date and/or time from character string. Remember: check the values of your strings when you are attempting to convert them to datetime – they may contain incorrect values or be incorrectly formatted. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
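    As a minimal sketch (assuming SQL Server 2012 or later, where TRY_CONVERT is available, and using a throwaway table variable purely for illustration), you can spot the bad strings before they ever raise the error:

      -- TRY_CONVERT returns NULL for strings that cannot be converted, instead of raising an error
      DECLARE @Staging TABLE (ID INT, MyDate VARCHAR(30));
      INSERT INTO @Staging (ID, MyDate)
      VALUES (1, '01-September-2013'), (2, '01-Septeber-2013');

      SELECT ID,
             MyDate,
             TRY_CONVERT(datetime, MyDate) AS ConvertedDate,
             CASE WHEN TRY_CONVERT(datetime, MyDate) IS NULL
                  THEN 'Invalid date string' ELSE 'OK' END AS Verdict
      FROM @Staging;

    Any row where ConvertedDate comes back NULL is a row that would have triggered error '80040e07' or Msg 241 on a plain CONVERT or INSERT.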

    Read the article

  • Easy Listening = CRM On Demand Podcasts

    - by Anne
    OK, here's my NEW favorite resource for CRM On Demand info -- podcasts! Specifically, the CRM On Demand Podcast site -- signed, sealed, and delivered with humor and know-how. Yes, I admit, I know the cast of characters. But let's face it, sometimes dealing with software is just soooo dry! Not so when discussed by the two main commentators, Louis Peters and Robert Davidson, whom someone once referred to as CRM On Demand's "Click and Clack." (Thought that was too good not to pass along!) Anyhow, another huge plus about the site is the option to listen OR to read. Out walking my dog or doing the dishes? Just turn up the podcast. Listening to music or watching TV? I'll read Louis's entertaining write-ups to glean great info about CRM On Demand in a very short period of time. So that you get a better understanding of why I like this site so much, here's a sampling of what's discussed: Five Things about Books of Business As Louis Peters put it in his entry, when you see "Five Things" in the title, "you'll know you're going to get some concrete advice that you can put to work right away." Well, Louis and Robert do just that, pointing you in the right direction when using Books of Business to segment data. Moving to Indexed Fields - A Rough Guide (only an article, not a podcast) I've read all about performance and even helped develop material around it. But nowhere have I heard indexed custom fields referred to as "super heroes." Louis and Robert use imaginative language to describe the process for moving your data to indexed fields for optimal performance. Data Access QA from the Forums I think that everyone would admit that data access and visibility is the most difficult topic to understand in CRM On Demand. Following up on their previous podcast on the same topic, Louis and Robert answer a few key questions from the many postings on the Oracle CRM On Demand forums. And I bet that the scenarios match many companies' business requirements...maybe even yours! We Need to Talk About Adoption Another expert, Tim Koehler, joins Louis to talk about how to drive user adoption: aligning product usage with business results, communicating why and how to use the product, getting feedback on usability, and so on. Hope I've made my point -- turn to these podcasts to hear knowledgeable folks discuss CRM On Demand tips and tricks in entertaining ways. One podcast is even called "SaaS Talk"!

    Read the article

  • Built-in card-reader doesn't work. HP Compaq nx6325 notebook

    - by user10940
    I have a HP-Compaq nx6325 notebook with an built-in card-reader (SD, MS/Pro, MMC, SM, XD) and the ubuntu (10.10.) don't see it. I've tried to install it manually, with this steps (and with this tifmxx driver), but doesn't work. The compile log: $ echo /home/tvera/downloads/cr_install /home/tvera/downloads/cr_install $ make -C /lib/modules/2.6.35-25-generic/build M=/home/tvera/downloads/cr_install make[1]: Entering directory `/usr/src/linux-headers-2.6.35-25-generic' CC [M] /home/tvera/downloads/cr_install/tifm_core.o In file included from /home/tvera/downloads/cr_install/tifm_core.c:12: /home/tvera/downloads/cr_install/linux/tifm.h:128: error: field ‘cdev’ has incomplete type /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_uevent’: /home/tvera/downloads/cr_install/tifm_core.c:69: warning: passing argument 1 of ‘add_uevent_var’ from incompatible pointer type include/linux/kobject.h:244: note: expected ‘struct kobj_uevent_env *’ but argument is of type ‘char **’ /home/tvera/downloads/cr_install/tifm_core.c:69: warning: passing argument 2 of ‘add_uevent_var’ makes pointer from integer without a cast include/linux/kobject.h:244: note: expected ‘const char *’ but argument is of type ‘int’ /home/tvera/downloads/cr_install/tifm_core.c: At top level: /home/tvera/downloads/cr_install/tifm_core.c:161: warning: initialization from incompatible pointer type /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_free’: /home/tvera/downloads/cr_install/tifm_core.c:170: warning: type defaults to ‘int’ in declaration of ‘__mptr’ /home/tvera/downloads/cr_install/tifm_core.c:170: warning: initialization from incompatible pointer type /home/tvera/downloads/cr_install/tifm_core.c: At top level: /home/tvera/downloads/cr_install/tifm_core.c:177: error: unknown field ‘release’ specified in initializer /home/tvera/downloads/cr_install/tifm_core.c:178: warning: initialization from incompatible pointer type /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_alloc_adapter’: /home/tvera/downloads/cr_install/tifm_core.c:190: error: implicit declaration of function ‘class_device_initialize’ /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_add_adapter’: /home/tvera/downloads/cr_install/tifm_core.c:211: error: ‘BUS_ID_SIZE’ undeclared (first use in this function) /home/tvera/downloads/cr_install/tifm_core.c:211: error: (Each undeclared identifier is reported only once /home/tvera/downloads/cr_install/tifm_core.c:211: error: for each function it appears in.) 
/home/tvera/downloads/cr_install/tifm_core.c:212: error: implicit declaration of function ‘class_device_add’ /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_remove_adapter’: /home/tvera/downloads/cr_install/tifm_core.c:237: error: implicit declaration of function ‘class_device_del’ /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_free_adapter’: /home/tvera/downloads/cr_install/tifm_core.c:243: error: implicit declaration of function ‘class_device_put’ /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_alloc_device’: /home/tvera/downloads/cr_install/tifm_core.c:275: error: ‘struct device’ has no member named ‘bus_id’ /home/tvera/downloads/cr_install/tifm_core.c:275: error: ‘BUS_ID_SIZE’ undeclared (first use in this function) make[2]: *** [/home/tvera/downloads/cr_install/tifm_core.o] Error 1 make[1]: *** [_module_/home/tvera/downloads/cr_install] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-2.6.35-25-generic' make: *** [all] Error 2 The output of lsusb: Bus 001 Device 005: ID 05e3:0702 Genesys Logic, Inc. USB 2.0 IDE Adapter Bus 003 Device 003: ID 0458:003a KYE Systems Corp. (Mouse Systems) NetScroll+ Mini Traveler Bus 003 Device 002: ID 08ff:2580 AuthenTec, Inc. AES2501 Fingerprint Sensor Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Read the article

  • Sharing data between graphics and physics engine in the game?

    - by PolGraphic
    I'm writing the game engine that consists of few modules. Two of them are the graphics engine and the physics engine. I wonder if it's a good solution to share data between them? Two ways (sharing or not) looks like that: Without sharing data GraphicsModel{ //some common for graphics and physics data like position //some only graphic data //like textures and detailed model's verticles that physics doesn't need }; PhysicsModel{ //some common for graphics and physics data like position //some only physics data //usually my physics data contains A LOT more informations than graphics data } engine3D->createModel3D(...); physicsEngine->createModel3D(...); //connect graphics and physics data //e.g. update graphics model's position when physics model's position will change I see two main problems: A lot of redundant data (like two positions for both physics and graphics data) Problem with updating data (I have to manually update graphics data when physics data changes) With sharing data Model{ //some common for graphics and physics data like position }; GraphicModel : public Model{ //some only graphics data //like textures and detailed model's verticles that physics doesn't need }; PhysicsModel : public Model{ //some only physics data //usually my physics data contains A LOT more informations than graphics data } model = engine3D->createModel3D(...); physicsEngine->assingModel3D(&model); //will cast to //PhysicsModel for it's purposes?? //when physics changes anything (like position) in model //(which it treats like PhysicsModel), the position for graphics data //will change as well (because it's the same model) Problems here: physicsEngine cannot create new objects, just "assing" existing ones from engine3D (somehow it looks more anti-independent for me) Casting data in assingModel3D function physicsEngine and graphicsEngine must be careful - they cannot delete data when they don't need them (because second one may need it). But it's rare situation. Moreover, they can just delete the pointer, not the object. Or we can assume that graphicsEngine will delete objects, physicsEngine just pointers to them. Which way is better? Which will produce more problems in the future? I like the second solution more, but I wonder why most graphics and physics engines prefer the first one (maybe because they normally make only graphics or only physics engine and somebody else connect them in the game?). Have they any more hidden pros & contras?

    Read the article

  • Subterranean IL: Filter exception handlers

    - by Simon Cooper
    Filter handlers are the second type of exception handler that aren't accessible from C#. Unlike the other handler types, which have defined conditions for when the handlers execute, filter lets you use custom logic to determine whether the handler should be run. However, similar to a catch block, the filter block does not get run if control flow exits the block without throwing an exception. Introducing filter blocks An example of a filter block in IL is the following: .try { // try block } filter { // filter block endfilter }{ // filter handler } or, in v1 syntax, TryStart: // try block TryEnd: FilterStart: // filter block HandlerStart: // filter handler HandlerEnd: .try TryStart to TryEnd filter FilterStart handler HandlerStart to HandlerEnd In the v1 syntax there is no end label specified for the filter block. This is because the filter block must come immediately before the filter handler; the end of the filter block is the start of the filter handler. The filter block indicates to the CLR whether the filter handler should be executed using a boolean value on the stack when the endfilter instruction is run; true/non-zero if it is to be executed, false/zero if it isn't. At the start of the filter block, and the corresponding filter handler, a reference to the exception thrown is pushed onto the stack as a raw object (you have to manually cast to System.Exception). The allowed IL inside a filter block is tightly controlled; you aren't allowed branches outside the block, rethrow instructions, and other exception handling clauses. You can, however, use call and callvirt instructions to call other methods. Filter block logic To demonstrate filter block logic, in this example I'm filtering on whether there's a particular key in the Data dictionary of the thrown exception: .try { // try block } filter { // Filter starts with exception object on stack // C# code: ((Exception)e).Data.Contains("MyExceptionDataKey") // only execute handler if Contains returns true castclass [mscorlib]System.Exception callvirt instance class [mscorlib]System.Collections.IDictionary [mscorlib]System.Exception::get_Data() ldstr "MyExceptionDataKey" callvirt instance bool [mscorlib]System.Collections.IDictionary::Contains(object) endfilter }{ // filter handler // Also starts off with exception object on stack callvirt instance string [mscorlib]System.Object::ToString() call void [mscorlib]System.Console::WriteLine(string) } Conclusion Filter exception handlers are another exception handler type that isn't accessible from C#, however, just like fault handlers, the behaviour can be replicated using a normal catch block: try { // try block } catch (Exception e) { if (!FilterLogic(e)) throw; // handler logic } So, it's not that great a loss, but it's still annoying that this functionality isn't directly accessible. Well, every feature starts off with minus 100 points, so it's understandable why something like this didn't make it into the C# compiler ahead of a different feature.

    Read the article

  • SQL SERVER – Various Leap Year Logics

    - by pinaldave
    Earlier I wrote one article on Leap Year and created one video about Leap Year. My point of view was to demonstrate how we can use SQL Server 2012 features to identify a leap year. However, the post generated some really good conversation. Here is an update for those who missed the excellent comments on the blog.
    Incorrect Logic: Many people still think that a leap year is an event that happens consistently every four years, and that the way to find it is to divide the year by 4 – if the remainder is 0, that year is a leap year. Well, that is not correct.
    Comment by David Bridge: Check out this excerpt from the Wikipedia page http://en.wikipedia.org/wiki/Leap_year "most years that are evenly divisible by 4 are leap years…" "…Some exceptions to this rule are required since the duration of a solar year is slightly less than 365.25 days. Years that are evenly divisible by 100 are not leap years, unless they are also evenly divisible by 400, in which case they are leap years. For example, 1600 and 2000 were leap years, but 1700, 1800 and 1900 were not. Similarly, 2100, 2200, 2300, 2500, 2600, 2700, 2900 and 3000 will not be leap years, but 2400 and 2800 will be."
    If you use the divide-by-4, remainder-0 logic to find leap years, you may end up with inaccurate results. The correct way to identify a leap year is to figure out the number of days in February: if the count is 29, the year is for sure a leap year.
    Valid Alternate Solutions
    Comment by sainswor99insworth: IIF((@Year%4=0 AND @Year%100 != 0) OR @Year%400=0, 1,0)
    Comment by Madhivanan: Madhivanan wrote a blog post about a year ago where he listed multiple ways to find a leap year.
    Comment by Jayan: DECLARE @year INT SET @year = 2012 IF (((@year % 4 = 0) AND (@year % 100 != 0)) OR (@year % 400 = 0)) PRINT '1' ELSE PRINT '0'
    Comment by David: DECLARE @Year INT = 2012 SELECT ISDATE('2/29/' + CAST(@Year AS CHAR(4)))
    Comment by David Bridge: Incidentally – another approach would be to take one day off March 1st and see if it is 29.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
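    As a rough consolidation of the corrected rule (standard T-SQL; the function name is only an illustration), the divisible-by-4/100/400 logic can be wrapped up once and reused:

      -- Hypothetical helper: returns 1 for a leap year, 0 otherwise
      CREATE FUNCTION dbo.IsLeapYear (@Year INT)
      RETURNS BIT
      AS
      BEGIN
          RETURN CASE
                     WHEN (@Year % 4 = 0 AND @Year % 100 <> 0) OR @Year % 400 = 0 THEN 1
                     ELSE 0
                 END;
      END;

      -- Spot checks matching the Wikipedia excerpt above
      -- SELECT dbo.IsLeapYear(1900);  -- 0 (divisible by 100 but not by 400)
      -- SELECT dbo.IsLeapYear(2000);  -- 1 (divisible by 400)
      -- SELECT dbo.IsLeapYear(2012);  -- 1 (divisible by 4, not by 100)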

    Read the article

  • Book review: Microsoft System Center Enterprise Suite Unleashed

    - by BuckWoody
    I know, I know – what’s a database guy doing reading a book on System Center? Well, I need it from time to time. System Center is actually a collection of about 7 different products that you can use to manage and monitor your software and hardware, from drive space through Microsoft Office, UNIX systems, and yes, SQL Server. It’s that last part I care about the most, and so I’ve dealt with Data Protection Manager and System Center Operations Manager (I call it SCOM) in SQL Server. But I wasn’t familiar with the rest of the suite, nor was I as familiar as I needed to be with the “Essentials” release – a separate product that groups together the main features of System Center into a single offering for smaller organizations. These companies usually run with a smaller IT shop, so they sometimes opt for this product to help them monitor everything, including SQL Server. So I picked up “Microsoft System Center Enterprise Suite Unleashed” by Chris Amaris and a cast of others. I don’t normally like to get a technical book by multiple authors – I just find that most of the time it’s quite jarring to switch from author to author, but I think this group did pretty well here. The first chapter on introducing System Center has helped me talk with others about what the product does, and which pieces fit well together with SQL Server. The writing is well done, and I didn’t find a jump from author to author as I went along. The information is sequential, meaning that they lead you from install to configuration and then use. It’s very much a concepts-and-how-to book, and a big one at that – over 950 pages of learning! It was a pretty quick read, though, since I skipped the installation parts and there are lots of screenshots. I’m not sure you’d be an expert on the product when you finish reading this book, but I would say you’re more than halfway there. It best suits someone who learns through examples, since the book includes a lot of step-by-step walkthroughs. I do recommend that you take a look if you have to interact with this product, or even if you are a smaller shop and you’re the primary IT resource. The last few chapters deal with System Center Essentials, and honestly it was the best part of the book for me.

    Read the article

  • Cocos3d lighting problem

    - by Parasithe
    I'm currently working on a cocos3d project, but I'm having some trouble with lighting and I have no idea how to solve it. I've tried everything and the lighting is always as bad in the game. The first picture is from 3ds max (the software we used for 3d) and the second is from my iphone app. http://prntscr.com/ly378 http://prntscr.com/ly2io As you can see, the lighting is really bad in the app. I manually add my spots and the ambiant light. Here is all my lighting code : _spot = [CC3Light lightWithName: @"Spot" withLightIndex: 0]; // Set the ambient scene lighting. ccColor4F ambientColor = { 0.9, 0.9, 0.9, 1 }; self.ambientLight = ambientColor; //Positioning _spot.target = [self getNodeNamed:kCharacterName]; _spot.location = cc3v( 400, 400, -600 ); // Adjust the relative ambient and diffuse lighting of the main light to // improve realisim, particularly on shadow effects. _spot.diffuseColor = CCC4FMake(0.8, 0.8, 0.8, 1.0); _spot.specularColor = CCC4FMake(0, 0, 0, 1); [_spot setAttenuationCoefficients:CC3AttenuationCoefficientsMake(0, 0, 1)]; // Another mechansim for adjusting shadow intensities is shadowIntensityFactor. // For better effect, set here to a value less than one to lighten the shadows // cast by the main light. _spot.shadowIntensityFactor = 0.75; [self addChild:_spot]; _spot2 = [CC3Light lightWithName: @"Spot2" withLightIndex: 1]; //Positioning _spot2.target = [self getNodeNamed:kCharacterName]; _spot2.location = cc3v( -550, 400, -800 ); _spot2.diffuseColor = CCC4FMake(0.8, 0.8, 0.8, 1.0); _spot2.specularColor = CCC4FMake(0, 0, 0, 1); [_spot2 setAttenuationCoefficients:CC3AttenuationCoefficientsMake(0, 0, 1)]; _spot2.shadowIntensityFactor = 0.75; [self addChild:_spot2]; I'd really appreciate if anyone would have some tip on how to fix the lighting. Maybe my spots are bad? maybe it's the material? I really have no idea. Any help would be welcomed. I already ask some help on cocos2d forums. I had some answers but I need more help.

    Read the article

  • PeopleSoft HCM @ OHUG 11: Enter the Matrix

    - by Jay Zuckert
    The PeopleSoft HCM team is back from a very busy and exciting OHUG conference in Orlando. The packed, standing-room only PeopleSoft HCM Roadmap keynote was the highlight of the conference for many attendees and the reviews are in : PeopleSoft rocked the house ! Great demonstration of products in the keynote. Best keynote in a long time, and fun. Engaging and entertaining, great demonstration of capabilities. Message received loud and clear, PeopleSoft applications are here to stay.  PeopleSoft has a real vision moving forward. Real-time polls using mobile texting were cutting edge.                          Tracy Martin (as Trinity) and other members of the PeopleSoft HCM team presented a ‘must-see’ Matrix-themed session while dressed as movie characters. The keynote highlighted planned HCM capabilities for Matrix administration and future organization visualization enhancements. The team also previewed the planned Manager Dashboard and Talent Summary.                           Following the keynote, some of the cast posed for photo opportunities at the OHUG booth in the exhibition hall. As you can imagine, they received some interesting looks walking by the other vendor booths. The PeopleSoft HCM team also presented numerous other OHUG sessions covering PeopleSoft Talent Management, Compensation, HR HelpDesk, Payroll, Global HCM Practices, Time & Labor, Absence Management, and Benefits. All of those presentations are available from the OHUG site at www.ohug.org. When not in one of the well-attended PeopleSoft HCM sessions, conference attendees filled the Oracle booth in the exhibition hall to see live product demonstrations. True to their PeopleSoft roots, some of the PeopleSoft HCM team played as hard as they worked in Orlando and enjoyed the OHUG Appreciation event along with customers at the Hard Rock. We are already busy planning for Oracle OpenWorld 2011 and prepping sessions our PeopleSoft HCM customers are sure to like. We hope to see you there in San Francisco from Oct. 2-6. To learn more about OpenWorld or to register, click here.

    Read the article

  • SQL SERVER – Solution – 2 T-SQL Puzzles – Display Star and Shortest Code to Display 1

    - by pinaldave
    Earlier on this blog we asked two puzzles. The response from all of you was nothing but amazing. I have received 350+ responses. Many were valid and many were approaches I had not thought of. I strongly suggest you read all the puzzles and their answers here - trust me, if you start reading the comments you will not stop till you have read every single one. Seriously, trust me on it. Personally I have learned a lot from them. Let us recap the puzzles here quickly.
    Puzzle 1: Why does the following code, when executed in SSMS, display the result as a * (star)? SELECT CAST(634 AS VARCHAR(2))
    Puzzle 2: Write the shortest code that produces the result 1 without using any numbers in the SELECT statement.
    Bonus Q: How many different operating systems (OS) does NuoDB support?
    As I mentioned earlier, the participation was nothing but amazing. I will write about the winners and the best answers shortly. Meanwhile, here are to-the-point answers to the above puzzles.
    Solution 1: When you convert character or binary expressions (char, nchar, nvarchar, varchar, binary, or varbinary) to an expression of a different data type, data can be truncated, only partially displayed, or an error is returned because the result is too short to display. Conversions to char, varchar, nchar, nvarchar, binary, and varbinary are truncated, except for the conversions shown in the table in the MSDN documentation (reference of the text and table from MSDN).
    Solution 2: The shortest code to produce the answer 1: SELECT EXP($) or SELECT COS($) or SELECT DAY($). SELECT $ gives the result 0.00, and the EXP of that is 1. I believe it is pretty neat. There were plenty of other answers but this was the shortest. Another short answer would be PRINT EXP($), but no one proposed that, since I had explicitly mentioned SELECT in the original question.
    Bonus Answer: 5 OS: Windows, MacOS, Linux, Solaris, Joyent SmartOS. Reference
    Please do read every single comment here. Do leave a comment saying which one you think is the best of all the comments. Meanwhile, if there is a better solution and I have missed it, do let me know, as we still have time to correct it. I will be selecting the winner before the weekend, as I am going through each and every one of the 350 comments. I will be selecting the best comments along with the winning comment. If our selection matches – one of you may still win something cool.  Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB
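    For anyone who wants to reproduce both answers without digging through the comment thread, this quick sketch (plain T-SQL, runnable as-is in SSMS) shows the truncation rule and the money-literal trick side by side:

      -- Puzzle 1: an int converted to a character type that is too short comes back as '*'
      SELECT CAST(634 AS VARCHAR(2));   -- *
      SELECT CAST(634 AS VARCHAR(3));   -- 634

      -- Puzzle 2: $ by itself is a money literal with the value 0.00
      SELECT $;                         -- 0.00
      SELECT EXP($);                    -- 1, because EXP(0) = 1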

    Read the article

  • Performance problems loading XML with SSIS, an alternative way!

    - by AtulThakor
    I recently needed to load several thousand XML files into a SQL Server database, so I created an SSIS package, structured as follows: a foreach container loops through a directory and loads each file path into a variable, and the “Import XML” dataflow then loads each XML file into a SQL table.
    Running this, it took approximately 1 second to load each file, which seemed a massive amount of time just to parse the XML and load the data. Speaking to my colleague Martin Croft, he suggested using T-SQL BULK INSERT and OPENROWSET instead, so we adjusted the package as follows: the same foreach container was used, but instead the following SQL command was executed (this is an expression):
      "INSERT INTO MyTable(FileDate) SELECT CAST(bulkcolumn AS XML) FROM OPENROWSET( BULK '" + @[User::CurrentFile] + "', SINGLE_BLOB ) AS x"
    Using this method we managed to load approximately 20 records per second, much faster… for data loading! For what we wanted to achieve this was perfect, but I’ll leave you with the following points when making your own decision on which solution to choose!
    OPENROWSET method:
      - Much faster to get the data into SQL
      - You’ll need to parse or create a view over the XML data to make it more usable (another post on this!)
      - Not able to apply validation/transformation against the data when loading it
      - The SQL Server service account will need permission to the file
      - No schema validation when loading files
    SSIS:
      - Slower (in our case)
      - Schema validation
      - Allows you to apply transformations/joins to the data
      - Permissions should be less of a problem
      - Data can be loaded into its final form through the package
      - When using a schema, validation errors can fail the package (I’ll do another post on this)
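    For readability outside SSIS, here is the same statement as plain T-SQL, with a hypothetical file path standing in for the @[User::CurrentFile] variable (the table and column names are the ones used in the post):

      INSERT INTO MyTable (FileDate)
      SELECT CAST(bulkcolumn AS XML)
      FROM OPENROWSET(
          BULK 'C:\Import\Orders_0001.xml',   -- hypothetical path; the foreach loop supplies the real one at run time
          SINGLE_BLOB
      ) AS x;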

    Read the article

  • links for 2011-02-09

    - by Bob Rhubart
    Tech Cast Live - Java and Oracle, One Year Later - February 15th 10AM PST (Oracle Technology Network Blog (aka TechBlog)) (tags: ping.fm) The impact of IT decisions on organizational culture - O'Reilly Radar "While I believe we recognize the limiting qualities of IT decisions, I'd suggest we've insufficiently studied the degree to which those decisions in aggregate can have a large influence on organizational culture." - Jonathan Reichental, Ph.D. (tags: ITgovernance organizationalculture enterprisearchitecture) Women "computers" of World War II - Boing Boing "Before it came to mean laptops, PCs, or even room-sized machines, "computer" was what you called a person who did mathematical calculations for a living. That job was vitally important during World War II. And, like many vital jobs on the homefront, it was turned over to women..." (tags: computers history worldwar2) InfoQ: Book Excerpt and Interview: 100 SOA Questions Asked and Answered A new "100 SOA Questions Asked and Answered " book by Kerrie Holley and Ali Arsanjani provides a deep insight into SOA covering a wide spectrum of topics from SOA basics to its business and organizational impact, to SOA methods and architecture to SOA future. InfoQ spoke with Kerrie Holley and Ali Arsanjani about their book. (tags: ping.fm) @myfear: GlassFish City - Another view onto your favorite application server Oracle ACE Director Markus Eisele runs GlassFish through CodeCity. (tags: oracle otn oracleace glassfish codecity) The Ron Batra Blog: Technology Whispers: Upcoming Presentations Oracle ACE Director Ron Batra shares details on upcoming presentations at OAUG events in the US and Dubai. (tags: oaug c11 oracle otn oracleace) Free ADF Training Event in the UK (Grant Ronald's Blog) Gobsmack survivor Grant Ronald with the details on an Oracle ADF training session he'll conduct on 11 May 2011 at the UK Oracle office in Reading. (tags: oracle otn adf) Java Spotlight Episode 16 - Richar Bair - The Java Spotlight Podcast The latest Java Spotlight podcast features an interview with Java Client Architect Richar Bair. (tags: oracle java podcast) Stewart Bryson: OBIEE 11g Migrations "[Rittman Mead's] Mark and Venkat have covered OBIEE migration methodologies in the past (see here, here and here), but I decided to throw my hat in the ring on the subject, as I had to develop a methodology for a client recently and wanted to share my experiences." - Stewart Bryson (tags: oracle otn obiee businessintelligence) Dr. Chris Harding: The golden thread of interoperability | Open Group Blog "There are so many things going on at every Conference by The Open Group that it is impossible to keep track of all of them, and this week’s Conference in San Diego, California, is no exception. The main themes are Cybersecurity, Enterprise Architecture, SOA and Cloud Computing." - Dr. Chris Harding (tags: entarch soa interoperability cloud) Marc Kelderman: OSB: Creating an Asynchronous / Fire-Forget WebService Call Creating a fire-and-forget call via OSB is simple, according to solution architect Marc Kelderman. "The trick is to send NO response back to the caller, only an HTTP response code, 200 or any other." (tags: oracle otn servicebus)

    Read the article

  • How To Watch Indian Union Budget 2011-2012 Live On Your PC

    - by Gopinath
    The Union Budget of India 2011-2012 will be tabled on Lok Sabha on 26th Feb, 2011. The finance minister, Mr. Pranab Muhkerjee will present the budget in the house and it will be broadcasted live on the TV channels(DD, NDTV, IBN Live and others) as well as on the web. For those who are willing to watch the budget session live on computers here is the required information. National Informatics Centre – Budget 2011 Live Web Cast Indian Government official stream of Union Budget 2011-2011 is available  at  http://budgetlive.nic.in/. This live stream will provide the budget information as is, without any masala, hype or so called analysis by experts sitting in private TV channel studios (and talking rubbish!) To view the primary live stream you need to install Windows Media Player plugin on your browser. As it’s a government website, better you use Internet Explorer browser to watch it. It may not work on Firefox or Chrome. Also the live stream is provided in other formats like Real Media and Flash. Here are the various streaming formats for you to choose Flash Player – High Bandwidth Stream Windows Media Player – Low Bandwidth Stream (IE browser is preferred) Windows Media Player – High Bandwidth Stream  (IE browser is preferred) Real Media Player – Low Bandwidth Stream Real Media Player – High Bandwidth Stream Indian Media Channels Covering Live Of Union Budget 2011 – 2012 Most of the news media channels live stream are available for free on the web, here are the few new channels you can watch live to follow Union Budget 2011 – 2012. NDTV 24 x 7 News Live Stream (English) NDTV India Live Streaming (Hindi) CNBC TV 18 Live Streaming (English) CNN IBN Live Streaming (English) IBN 7 Live Streaming (Hindi) Caution: In the name of analysis, most of the media channels mislead the viewers and provide base less information at times. Take utmost care in absorbing the information you see in any of the Indian news channel. This article titled,How To Watch Indian Union Budget 2011-2012 Live On Your PC, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Most Unprofessional Workplace

    - by TehGrumpyCoder
    I've worked lots of places in lots of roles: Delivery truck driver, Boilermaker, antenna rigger, Professional Musician, Electronic Technician, Electrical Engineer, and for most of my career: Software Turkey. I want to say this large company is the most unprofessional place I've ever worked, but then I think about other jobs such as TTI that stiffed us all for 10 months salary -- or had us work 2-1/2 years at 66% however you want to look at it, or maybe NeoPlanet with a cast from a bad sitcom running the show, I could go on, but I digress (as usual). So maybe this place isn't the *most* unprofessional, but the personnel rank up there. I'm in a small room off a factory. There are 3 managerial offices, and 36 common-folk of various skill-sets in a variety of single to quad cubicles. No matter where you sit though, because of the layout and location, you've got a hard wall as one wall of your cubicle. Because of that hard wall, everything echoes. I get off the phone, and the guy in the next cubicle makes a comment in response to my phone conversation... I hate that it can be heard and I hate that they do that! These people have no problem yelling from cube to cube to carry on running conversations some of which are actually work-related. There's a lady two cubes away that talks so loud I can clearly hear every phone conversation she has... all work-related but still... Then the one in the next cubicle must have been raised on a farm because there's only one volume setting: LOUD... "HEY MARGE, CAN I GET IN FOR A QUICK APPOINTMENT AFTER WORK TONIGHT?" ... sigh Also that cube is the 'party cube' so that's where all the candy, cake, donuts, and leftovers sits. Anything MzLoud brings in has to have a verbal recipe associated with it at least 10 times during the day, and of course at volume. I've had running conversations over the top of my cube from people in the next one on each side. The weird thing is... the boss sits with an open door closer to this whole fiasco than me. So I wear a pair of Bose noise-cancelling headphones, and crank up Kenny Burrell, Herb Ellis, Wes Montgomery, or Jimmy Smith to the point I can't hear the racket... what the heck, I already have a hearing loss from playing guitar.

    Read the article

  • SSAS: Utility to check you have the correct data types and sizes in your cube definition

    - by DrJohn
    This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube’s data source view with the data types/sizes of the corresponding dimensional attribute.  Why is this important?  Well when creating named queries in a cube’s data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS.  This is particularly important when your cube is based on an Oracle data source or using custom SQL queries rather than views in the relational database.   The problem with BIDS is that if you change the underlying SQL query, then the size of the data type in the dimension does not update automatically.  This then causes problems during deployment whereby processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute. In particular, if you use some string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10 character string you expect suddenly turns into an 8,000 character monster.  For example, the SQL Server function REPLACE returns column with a width of 8,000 characters.  So if you use this function in the named query in your DSV, you will get a column width of 8,000 characters.  Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field. Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size. Introducing CheckCubeDataTypes Utiltity The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube’s data source view.  It then generates an Excel CSV file which contains all this metadata along with a flag indicating if there is a mismatch between the DSV and the dimensional attribute.  Note that the app not only checks all the attribute keys but also the name and value columns for each attribute. Another benefit of having the metadata held in a CSV text file format is that you can place the file under source code control.  This allows you to compare the metadata of the previous cube release with your new release to highlight problems introduced by new development. You can download the C# source code from here: CheckCubeDataTypes.zip A typical example of the output Excel CSV file is shown below - note that the last column shows a data size mismatch by TRUE appearing in the column
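    As a small illustration of the problem the tool is catching (the table and column names here are hypothetical, and the 8,000-character width for REPLACE is the behaviour described above), a DSV named query along these lines is what silently widens an attribute, and the explicit CAST is the fix:

      -- Named query for a data source view (illustrative names only)
      SELECT
          ProductCode,                                                         -- VARCHAR(10) in the source table
          REPLACE(ProductCode, '-', '') AS CodeWide,                           -- metadata reported as VARCHAR(8000)
          CAST(REPLACE(ProductCode, '-', '') AS VARCHAR(10)) AS CodeTrimmed    -- explicitly narrowed for the SSAS attribute
      FROM dbo.DimProduct;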

    Read the article

  • OData Mix10 &ndash; Part Dos

    - by Jon Dalberg
    The other day I had a snazzy post on fetching all the video (WMV) files from Mix ‘10. A simple, console application that grabbed the urls from the OData feed and downloaded the videos. I wanted to change that app to fire the OData query asynchronously so here’s what resulted: 1: static void Main(string[] args) 2: { 3: var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc")); 4:   5: var temp = mix.Files.Where(f => f.TypeName == "WMV"); 6: var query = temp as DataServiceQuery<Mix.File>; 7:   8: query.BeginExecute(OnFileQueryComplete, query); 9:   10: // waiting... 11: Console.ReadLine(); 12: } 13:   14: static void OnFileQueryComplete(IAsyncResult result) 15: { 16: var query = result.AsyncState as DataServiceQuery<Mix.File>; 17: var response = query.EndExecute(result); 18:   19: var web = new WebClient(); 20:   21: var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10"); 22:   23: Directory.CreateDirectory(myVideos); 24:   25: foreach (Mix.File f in response) 26: { 27: var fileName = new Uri(f.Url).Segments.Last(); 28: Console.WriteLine(f.Url); 29: web.DownloadFile(f.Url, Path.Combine(myVideos, fileName)); 30: } 31: } There are two important things here that are not explained well in the MSDN docs: See lines 5 and 6? That’s where I query for the WMV files and it returns an IQueryable<T>. You *have* to cast that to a DataServiceQuery<T> and then call BeginExecute. The documented example does not filter so it didn’t show that step. Line 16 shows the correct way to get the previously executed DataServiceQuery<T> from the async result. If you looked at the MSDN example docs it shows (incorrectly) just casting the result, like this: // wrong var query = result as DataServiceQuery<Mix.File>; Other than those items it is relatively straight forward and we’re all async-ified. Enjoy!

    Read the article

  • Using Oracle BPM to Extend Oracle Applications

    - by Michelle Kimihira
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware Customers often modify applications to meet their specific business needs - varying regulatory requirements, unique business processes, product mix transition, etc. Traditional implementation practices for such modifications are typically invasive in nature and introduce risk into projects, affect time-to-market and ease of use, and ultimately increase the costs of running and maintaining the applications. Another downside of these traditional implementation practices is that they literally cast the application in stone, making it difficult for end-users to tailor their individual work environments to meet specific needs, without getting IT involved. For many businesses, however, IT lacks the capacity to support such rapid business changes. As a result, adopting innovative solutions to change the economics of customization becomes an imperative rather than a choice. Let's look at a banking process in Siebel Financial Services and Oracle Policy Automation (OPA) using Oracle Business Process Management. This approach makes modifications simple, quick to implement and easy to maintain/upgrade. The process model is based on the Loan Origination Process Accelerator, i.e., a set of ready to deploy business solutions developed by Oracle using Business Process Management (BPM) 11g, containing customizable and extensible pre-built processes to fit specific customer requirements. This use case is a branch-based loan origination process. Origination includes a number of steps ranging from accepting a loan application, applicant identity and background verification (Know Your Customer), credit assessment, risk evaluation and the eventual disbursal of funds (or rejection of the application). We use BPM to model all of these individual tasks and integrate (via web services) with: Siebel Financial Services and (simulated) backend applications: FLEXCUBE for loan management, Background Verification and Credit Rating. The process flow starts in Siebel when a customer applies for loan, switches to OPA for eligibility verification and product recommendations, before handing it off to BPM for approvals. OPA Connector for Siebel simplifies integration with Siebel’s web services framework by saving directly into Siebel the results from the self-service interview. This combination of user input and product recommendation invokes the BPM process for loan origination. At the end of the approval process, we update Siebel and the financial app to complete the loop. We use BPM Process Spaces to display role-specific data via dashboards, including the ability to track the status of a given process (flow trace). Loan Underwriters have visibility into the product mix (loan categories), status of loan applications (count of approved/rejected/pending), volume and values of loans approved per processing center, processing times, requested vs. approved amount and other relevant business metrics. Summary Oracle recommends the use of Fusion Middleware as an extensions platform for applications. This approach makes modifications simple, quick to implement and easy to maintain/upgrade applications (by moving customizations away from applications to the process layer). It is also easier to manage processes that span multiple applications by using Oracle BPM. Additional Information Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Selectively Exposing Functionallity in .Net

    - by David V. Corbin
    Any developer should be aware of the principles of encapsulation, cross-tier isolation, and cross-functional separation of concerns. However, it seems that few take the time to consider the adage of "minimal yet complete"1 when developing software. Consider the exposure of "business objects" to the user interface. Some common situations occur:
      - Accessing a given element requires a compound set of calls that do not "make sense" to the user interface.
      - More information than absolutely required is exposed to the user interface.
    It would be much cleaner if a custom interface was provided that exposed exactly (and only) the information that is required by the consumer. Achieving this using conventional techniques would require the creation (and maintenance!) of custom classes to filter and transpose the information into the ideal format. The ROI on this approach can be very difficult to ascertain, and as a result it is often ignored completely. There is another approach, which is largely made practical by virtue of the Action and Func delegates. From a caller's point of view, the following two samples can be used interchangeably:
      interface ISomeInterface
      {
          void SampleMethod1(string param);
          string SampleMethod2(string param);
      }

      class ISomeInterface
      {
          public Action<string> SampleMethod1 { get; }
          public Func<string, string> SampleMethod2 { get; }
      }
    The capabilities this simple change enables are significant (and remember, it does not change the syntax at the call site):
      - The delegates can be initialized to directly call the proper method of any target class.
      - The delegates can be dynamically updated based on the current state.
      - The "interface" can NOT be cast to the concrete class (which often exposes more functionality).
    By limiting the interface to the exact functionality required, the reduced surface area will typically result in lower development, testing and maintenance costs. We are currently in the process of posting a project on CodePlex which illustrates this technique (and many others) which have proven helpful in creating robust yet flexible solutions that are highly efficient2 and maintainable. This post will be updated as soon as the project is published.
    1) Credit: Scott Meyers, Effective C++, Addison-Wesley 1992
    2) For those who read my previous post on performance, it should be noted that the use of delegates is on the same order of magnitude (actually a tiny amount faster) as conventional interfaces.

    Read the article

  • Use CSS Selectors with HtmlUnit

    - by kerry
    HtmlUnit is a great library for performing web integration tests in Java. But sometimes node traversal can be somewhat cumbersome. Fear not, fellow automated tester (good for you!). I found a great little project on GitHub that will allow you to query your document for elements via CSS selectors, similar to jQuery. The project is located at https://github.com/chrsan/css-selectors. You can use Maven to build it, or download 1.0.2 here. Beware. I will not be updating this link so I suggest you download the latest code. In any case, you can use it like so:
      // from HtmlUnit getting started
      final WebClient webClient = new WebClient();
      final HtmlPage page = webClient.getPage("http://htmlunit.sourceforge.net");
      final DOMNodeSelector cssSelector = new DOMNodeSelector(page.getDocumentElement());
      final Set<Node> elements = cssSelector.querySelectorAll("div.section h2");
      final Node first = elements.iterator().next();
      assertThat(first.getTextContent(), equalTo("HtmlUnit"));
    The only problem here is that querySelectorAll returns a Set<Node>, not HtmlElement like we may want in some cases. However, if you were to reflect on the Set, you would find that it is indeed a Set of HtmlElement objects. Typically, I like to create a base class for my web tests. Just for fun, I am using a $ method, similar to jQuery.
      public class WebTestBase
      {
          protected WebClient webClient;
          protected HtmlPage htmlPage;
          protected String baseUrl; // assumed to be initialized elsewhere in the base class

          protected void goTo(final String url) throws Exception
          {
              htmlPage = (HtmlPage) webClient.getPage(url);
          }

          protected List<HtmlElement> $(final String cssSelector)
          {
              final DOMNodeSelector selector = new DOMNodeSelector(htmlPage.getDocumentElement());
              final Set<Node> nodes = selector.querySelectorAll(cssSelector);
              // for some reason Set<Node> cannot be cast to Set<HtmlElement>?
              final List<HtmlElement> elements = new ArrayList<HtmlElement>(nodes.size());
              for (final Node node : nodes)
              {
                  elements.add((HtmlElement) node);
              }
              return elements;
          }
      }
    Now we can write tests like this:
      public class LoginWebTest extends WebTestBase
      {
          @Test
          public void login_page_has_instructions() throws Exception
          {
              goTo(baseUrl + "/login");
              assertThat( $("p.instructions").size(), equalTo(1) );
          }
      }

    Read the article

  • How do I prevent having to log in on 3 separate prompts every time I start my machine?

    - by JC
    Ubuntu 11.04 Natty Narwhal, Ubuntu Classic desktop Each time I start my machine, I have to log in 3 times. I spent a week in IRCFreenode#ubuntu and got nothing but condescension. I've searched on the official Ubuntu fora for similar problems, tried every recommendation, and still get 3 login screens. As a workaround, I have reset login such that I get a login screen at startup, which I'd prefer not to get since this machine is accessible by no one but me, physically. I have gone into System Preferences Passwords and Encryption Keys, set first 'Passwords: default' to 'Default' and unlocked it, and unlocked the 'Passwords: login' key, too. Next, since that changed nothing, I set 'Passwords: login' to 'Default', and checked to make sure it was still unlocked. Again, no change, still get 3 login prompts at startup. I've checked twice to insure that I am the owner of the files; I am. At the suggestion of several people in #ubuntu, I've deleted first one, then the other password key in 'Passwords and Encryption Keys'. Still get 3 login prompts. I changed from the Unity desktop to Ubuntu Classic. While that didn't fix the above problem, it is a much more elegant desktop than Unity, and I'll keep it. From what I've read, this seems to be a Seahorse issue, but beyond that, no one seems to have a solution that works. I'm lost. This shouldn't be this difficult or annoying. I'm trying to help our local Old Time music collective get their machines switched over to Ubuntu in order to save them some money which they can use to promote their DRM-free music. But from what I've seen of Ubuntu so far on my own machine, I can't really recommend that they make this switch. I hope to be proved wrong on that point. But as it stands, if I was out of town or out of country and they ran into a problem, they'd have no way of fixing it as they're all less experienced than even I am. I'm not trying to cast aspersions on Ubuntu or Linux, but it seems pretty clear that KNOWLEDGEABLE, HELPFUL support for Ubuntu is lacking barring any desire on the problem-experiencing-user's part to avoid condescension. Having worked with, and run, several non-profits over the past 20 years, I know that getting volunteers to act professionally can be like herding cats. But an organization's reputation can be denigrated by sarcastic behaviors on the part of those who serve, effectively, as its public face. Thank you all for your help and support. Now...does anyone have a solution to my problem?

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get. glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); Which produces a reasonable display ( camera at 0,5,0 looking down y axis ) So rather than do the calculation manually, I should be able to use the opengl model transormation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result ( of course! )

    Read the article

  • iscsitarget suddenly broken after upgrade of the 12.04 Hardware Stack

    - by RapidWebs
    After an upgrade to the latest Hardware Stack using Ubuntu 12.04, my iSCSI service is no longer operational. The error from the service is: FATAL: Module iscsi_trgt not found. I have learned that I might need to reinstall the package iscsitarget-dkms. This package builds a driver or something from source during installation. During this build process, it reports an error, and has now also broken my package manager. Here is the relevant output: Building module: cleaning build area.... make KERNELRELEASE=3.13.0-34-generic -C /lib/modules/3.13.0-34-generic/build M=/var/lib/dkms/iscsitarget/1.4.20.2/build........(bad exit status: 2) Error! Bad return status for module build on kernel: 3.13.0-34-generic (i686) Consult /var/lib/dkms/iscsitarget/1.4.20.2/build/make.log for more information. Errors were encountered while processing: iscsitarget E: Sub-process /usr/bin/dpkg returned an error code (1) And this is the information provided by make.log: DKMS make.log for iscsitarget-1.4.20.2 for kernel 3.13.0-34-generic (i686) Fri Aug 15 22:07:15 EDT 2014 make: Entering directory `/usr/src/linux-headers-3.13.0-34-generic' LD /var/lib/dkms/iscsitarget/1.4.20.2/build/built-in.o LD /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/built-in.o CC [M] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/tio.o CC [M] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/iscsi.o CC [M] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/nthread.o CC [M] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c: In function ‘worker_thread’: /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:73:28: error: void value not ignored as it ought to be /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:74:3: error: implicit declaration of function ‘get_io_context’ [-Werror=implicit-function-declaration] /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:74:21: warning: assignment makes pointer from integer without a cast [enabled by default] cc1: some warnings being treated as errors make[2]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o] Error 1 make[1]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel] Error 2 make: *** [module/var/lib/dkms/iscsitarget/1.4.20.2/build] Error 2 make: Leaving directory `/usr/src/linux-headers-3.13.0-34-generic' I am at a loss on how to resolve this issue. Any help would be appreciated!
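    For what it's worth, the usual way to recover from a half-installed DKMS package is sketched below. These are ordinary shell commands assuming the module version 1.4.20.2 shown in the log; they only help if the module can actually compile against the running kernel:

```bash
# Read the full build log that dpkg points at
less /var/lib/dkms/iscsitarget/1.4.20.2/build/make.log

# Make sure the headers for the running kernel are installed
sudo apt-get install linux-headers-$(uname -r)

# Drop the failed module from DKMS, let dpkg finish its pending work,
# then try a clean reinstall of the DKMS package
sudo dkms remove iscsitarget/1.4.20.2 --all
sudo dpkg --configure -a
sudo apt-get install --reinstall iscsitarget-dkms
```

    That said, the compile errors in make.log (the "void value not ignored" error and the implicit declaration of get_io_context in wthread.c) suggest that this iscsitarget release simply does not build against the 3.13 kernel pulled in by the hardware-enablement stack, in which case the realistic options are a newer or patched iscsitarget-dkms package, or booting the previous kernel until one is available.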

    Read the article

  • Storing game objects with generic object information

    - by Mick
    In a simple game object class, you might have something like this: public abstract class GameObject { protected String name; // other properties protected double x, y; public GameObject(String name, double x, double y) { // etc } // setters, getters } I was thinking, since a lot of game objects (e.g. generic monsters) will share the same name, movement speed, attack power, etc., it would be better to have all that information shared between all monsters of the same type. So I decided to have an abstract class "ObjectData" to hold all this shared information. So whenever I create a generic monster, I would use the same pre-created "ObjectData" for it. Now the above class becomes more like this: public abstract class GameObject { protected ObjectData data; protected double x, y; public GameObject(ObjectData data, double x, double y) { // etc } // setters, getters public String getName() { return data.getName(); } } So to tailor this specifically for a Monster (it could be done in a very similar way for NPCs, etc.), I would add 2 classes: Monster, which extends GameObject, and MonsterData, which extends ObjectData. Now I'll have something like this: public class Monster extends GameObject { public Monster(MonsterData data, double x, double y) { super(data, x, y); } } This is where my design question comes in. Since MonsterData would hold data specific to a generic monster (and would vary from what, say, NpcData holds), what would be the best way to access this extra information in a system like this? At the moment, since the data variable is of type ObjectData, I'll have to cast data to MonsterData whenever I use it inside the Monster class. One solution I thought of is this, but it might be bad practice: public class Monster extends GameObject { private MonsterData data; // <- this part here public Monster(MonsterData data, double x, double y) { super(data, x, y); this.data = data; // <- this part here } } I've read that, in general, I should avoid shadowing a superclass's fields. What do you guys think of this solution? Is it bad practice? Do you have any better solutions? Is the design in general bad? How should I redesign it if so? Thanks in advance for any replies, and sorry about the long question. Hopefully it all makes sense!
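    One way to keep the shared-data idea without either the cast or the duplicated field is to make GameObject generic over the kind of ObjectData it holds. A sketch of that approach follows (the getAttackPower() accessor is invented purely for illustration and is not part of the question's code):

```java
// Bounded type parameter: each subclass fixes the concrete data type it works with.
public abstract class GameObject<T extends ObjectData> {
    protected final T data;   // single field, no shadowing in subclasses
    protected double x, y;

    public GameObject(T data, double x, double y) {
        this.data = data;
        this.x = x;
        this.y = y;
    }

    public String getName() {
        return data.getName();
    }
}

public class Monster extends GameObject<MonsterData> {
    public Monster(MonsterData data, double x, double y) {
        super(data, x, y);
    }

    public int getAttackPower() {
        // data is already typed as MonsterData here, so no cast is needed.
        return data.getAttackPower();
    }
}
```

    Whether this beats the shadowed-field version is a judgment call: the type parameter keeps a single source of truth for the data reference, at the cost of a slightly noisier class declaration.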

    Read the article

  • Design for an interface implementation that provides additional functionality

    - by Limbo Exile
    There is a design problem that I came upon while implementing an interface: Let's say there is a Device interface that promises to provide the functionalities PerformA() and GetB(). This interface will be implemented for multiple models of a device. What happens if one model has an additional functionality, CheckC(), which has no equivalent in the other implementations? I came up with different solutions, none of which seems to comply with interface design guidelines: To add the CheckC() method to the interface and leave one of its implementations empty: interface ISomeDevice { void PerformA(); int GetB(); bool CheckC(); } class DeviceModel1 : ISomeDevice { public void PerformA() { // do stuff } public int GetB() { return 1; } public bool CheckC() { bool res; // assign res a value based on some validation return res; } } class DeviceModel2 : ISomeDevice { public void PerformA() { // do stuff } public int GetB() { return 1; } public bool CheckC() { return true; // without checking anything } } This solution seems incorrect, as a class implements an interface without truly implementing all the demanded methods. To leave the CheckC() method out of the interface and use an explicit cast in order to call it: interface ISomeDevice { void PerformA(); int GetB(); } class DeviceModel1 : ISomeDevice { public void PerformA() { // do stuff } public int GetB() { return 1; } public bool CheckC() { bool res; // assign res a value based on some validation return res; } } class DeviceModel2 : ISomeDevice { public void PerformA() { // do stuff } public int GetB() { return 1; } } class DeviceManager { private ISomeDevice myDevice; public void ManageDevice(bool newDeviceModel) { myDevice = newDeviceModel ? (ISomeDevice)new DeviceModel1() : new DeviceModel2(); myDevice.PerformA(); int b = myDevice.GetB(); if (newDeviceModel) { DeviceModel1 newDevice = myDevice as DeviceModel1; bool c = newDevice.CheckC(); } } } This solution seems to make the interface inconsistent. For the device that supports CheckC(): to add the logic of CheckC() into the logic of another method that is present in the interface. This solution is not always possible. So, what is the correct design to use in such cases? Maybe creating an interface should be abandoned altogether in favor of another design?
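    A third option the question doesn't list is to keep ISomeDevice minimal and move CheckC() into a small capability interface that only the models which really support it implement; callers then probe for the capability rather than for a concrete model class. A sketch of that idea (the name ISelfTestingDevice is mine, not from the question):

```csharp
interface ISomeDevice
{
    void PerformA();
    int GetB();
}

// Optional capability: only models that can genuinely check C implement it.
interface ISelfTestingDevice : ISomeDevice
{
    bool CheckC();
}

class DeviceModel1 : ISelfTestingDevice
{
    public void PerformA() { /* do stuff */ }
    public int GetB() { return 1; }
    public bool CheckC() { /* real validation here */ return true; }
}

class DeviceModel2 : ISomeDevice
{
    public void PerformA() { /* do stuff */ }
    public int GetB() { return 1; }
}

class DeviceManager
{
    private ISomeDevice myDevice;

    public void ManageDevice(bool newDeviceModel)
    {
        myDevice = newDeviceModel ? (ISomeDevice)new DeviceModel1() : new DeviceModel2();
        myDevice.PerformA();
        int b = myDevice.GetB();

        // Ask for the capability, not for the concrete class.
        var checkable = myDevice as ISelfTestingDevice;
        if (checkable != null)
        {
            bool c = checkable.CheckC();
        }
    }
}
```

    This keeps every implementation honest (no empty CheckC() stubs) and keeps the manager free of casts to concrete device models, at the price of one extra interface per optional capability.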

    Read the article

< Previous Page | 225 226 227 228 229 230 231 232 233 234 235 236  | Next Page >