Search Results

Search found 18079 results on 724 pages for 'compiler options'.

  • Configuring gcc compiler switches in Qt / QtCreator / QMake

    - by andand
    I recently tried to use Qt Creator 1.3.2 / Qt 4.6.2 / gcc 4.4.0 (32-bit version) on Windows 7 (64-bit) to compile an application using some of the experimental C++0x extensions, and encountered the following (fatal) error:

        This file requires compiler and library support for the upcoming ISO C++ standard, C++0x.
        This support is currently experimental, and must be enabled with the -std=c++0x or
        -std=gnu++0x compiler options.

    In my search for a solution, I came across this thread, and added the following to the .pro file:

        CXXFLAGS += -std=c++0x

    but that didn't seem to make a difference. So I expect there's some tag I need to add to the .pro (project) file, but I've never messed with the gcc compiler switches in Qt / QMake / QtCreator before, and am uncertain about the proper invocation / incantation. So, my question is: how do you set gcc compiler switches when using QtCreator / QMake / Qt?
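    A minimal sketch of the qmake-specific way to pass the flag (QMAKE_CXXFLAGS is the variable qmake actually forwards to g++; a plain CXXFLAGS assignment is not a recognized qmake variable, which would explain why it had no effect):

        # in the .pro file
        QMAKE_CXXFLAGS += -std=c++0x

    After editing the .pro file, re-running qmake (Build > Run qmake in Qt Creator) is typically needed before the new flag shows up in the generated Makefile.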

    Read the article

  • LINQ to SQL DataContext cannot set load options after results have been returned

    - by David Liddle
    I have two tables A and B with a one-to-many relationship. On some pages I would like to get a list of A objects only. On other pages I would like to load A with its B objects attached. This can be handled by setting the load options:

        DataLoadOptions options = new DataLoadOptions();
        options.LoadWith<A>(a => a.B);
        dataContext.LoadOptions = options;

    The trouble occurs when I first view all A's with load options, then go to edit a single A (which does not use load options), and after the edit return to the previous page. I understand why the error is occurring, but I'm not sure how best to get round this problem. I would like the DataContext to be loaded up per request, and I thought I was achieving this by using StructureMap to load up my DataContext on a per-request basis. This is all part of an n-tier application where my Controllers call Services which in turn call Repositories:

        ForRequestedType<MyDataContext>()
            .CacheBy(InstanceScope.PerRequest)
            .TheDefault.Is.Object(new MyDataContext());

        ForRequestedType<IAService>()
            .TheDefault.Is.OfConcreteType<AService>();

        ForRequestedType<IARepository>()
            .TheDefault.Is.OfConcreteType<ARepository>();

    Here is a brief outline of my Repository:

        public class ARepository : IARepository
        {
            private MyDataContext db;

            public ARepository(MyDataContext context)
            {
                db = context;
            }

            public void SetLoadOptions(DataLoadOptions options)
            {
                db.LoadOptions = options;
            }

            public IQueryable<A> Get()
            {
                return from a in db.A select a;
            }
        }

    So my ServiceLayer, on View All, sets the load options and then gets all A's. On editing A, my ServiceLayer should spin up a new DataContext and just fetch a list of A's. When SQL profiling, I can see that when I go to the Edit page it is still requesting A with B objects.
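    The registration above is a likely culprit, independent of the caching scope: .TheDefault.Is.Object(new MyDataContext()) hands StructureMap a single instance built once at configuration time, so every request shares one DataContext, and its LoadOptions can no longer change once it has returned results. A sketch of a truly per-request construction, assuming StructureMap 2.5-style syntax as used in the question:

        ForRequestedType<MyDataContext>()
            .CacheBy(InstanceScope.PerRequest)
            .TheDefault.Is.ConstructedBy(() => new MyDataContext());

    With a construction lambda, StructureMap builds a fresh context for each request, so the Edit page no longer inherits the load options set while viewing all A's.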

    Read the article

  • How to use Xcode compiler warnings to determine minimum iOS deployment target

    - by Martin Bayly
    I'm building an iOS app using Xcode 3.2.5 with the Base SDK set to iOS 4.2. I know I've used some APIs from 4.0 and 4.1, but I'm not sure whether I actually require 4.2. According to the iOS Development Guide, "Xcode displays build warnings when it detects that your application is using a feature that’s not available in the target OS release". So I was hoping to use the compiler warnings to derive my minimum OS requirement. However, even when I set my iOS Deployment Target to iOS 3.0, I still don't get any compiler warnings. I must be doing something wrong, but I'm not sure what. Can anyone confirm that they get compiler warnings when the iOS Deployment Target is less than the Base SDK and the code uses Base-SDK-only functions? Or do the compiler warnings only show if you link a framework that didn't exist in the iOS Deployment Target version?
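    For context: in toolchains of that era, symbols newer than the Deployment Target are weak-linked rather than flagged at compile time, so a clean build is the expected (if unhelpful) outcome, and availability has to be checked at runtime instead. A sketch of the usual runtime guard; the selector here is just one iOS 4.x example:

        if ([[UIDevice currentDevice] respondsToSelector:@selector(isMultitaskingSupported)]) {
            // safe to use the iOS 4.x API on this device
        }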

    Read the article

  • SIP UAS asks for OPTIONS

    - by TacB0sS
    Hey, I have a UAC that registers to a UAS. After registration the UAS sends me an OPTIONS request; what should I answer it with? Only the audio media streams?

    Update I: Allow me to explain myself better... if I want to invite someone to a session I use the INVITE method and negotiate the media then, for that specific session. But once I register to the server and it asks me for OPTIONS, what should I supply? Everything my client supports? Once I answer, would it deduce that every INVITE I request from now on would use these media, or would I need to supply new media with every request?

    Update II: Hi Wiz, I was in the process of building a negotiation system, so I tried it out and replied to the UAS. Here is the short dialog we had:

        OPTIONS sip:[email protected] SIP/2.0
        Via: SIP/2.0/UDP xx.xx.xx.xx:5060;branch=z9hG4bK45b197cb;rport=5060;received=xx.xx.xx.xx
        From: "Unknown" <sip:[email protected]>;tag=as66cf26df
        To: <sip:[email protected]>
        Contact: <sip:[email protected]>
        Call-ID: [email protected]
        CSeq: 102 OPTIONS
        User-Agent: Freeswitch 1.2.3
        Max-Forwards: 70
        Date: Sat, 05 Jun 2010 12:06:43 GMT
        Allow: INVITE,ACK,CANCEL,OPTIONS,BYE,REFER,SUBSCRIBE,NOTIFY,INFO
        Supported: replaces
        Content-Length: 0

    OPTIONS response (in response to CSeq 102):

        SIP/2.0 200 OK
        Via: SIP/2.0/UDP xx.xx.xx.xx:5060;branch=z9hG4bK45b197cb;rport=5060;received=xx.xx.xx.xx
        From: "Unknown" <sip:[email protected]>;tag=as66cf26df
        To: <sip:[email protected]>
        CSeq: 102 OPTIONS
        Call-ID: [email protected]
        Allow: INVITE,CANCEL,ACK,BYE,OPTIONS
        Content-Type: application/sdp
        Content-Length: 248

        v=0
        o=310 4515233118481497946 4515233118481497946 IN IP4 10.0.0.1
        s=-
        i=Nu-Art Software - TacB0sS VoIP information
        c=IN IP4 10.0.0.1
        m=audio 40000 RTP/AVP 0 8 101
        a=rtpmap:0 PCMU/8000
        a=rtpmap:8 PCMA/8000
        a=rtpmap:101 telephone-event/8000

    This response caused the server to stop sending me the OPTIONS request. Does this mean I can only use these parameters with the server now, or, as you said, does it not matter? Thanks, Adam.

    Read the article

  • g++ compiler complains about conversions between related types (from int to enum, from void* to class*)

    - by Slav
    The g++ compiler complains about conversions between related types (from int to enum, from void* to class*, from const char* to unsigned char*, etc.). The compiler treats such conversions as errors and won't compile any further. This occurs only when I compile using the Dev-C++ IDE; when I compile the same code directly with the compiler which Dev-C++ uses, such errors (or even warnings) do not appear. How do I mute errors of these types?
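    All of these are conversions that ISO C++ forbids implicitly, so the IDE is probably just invoking g++ with stricter settings than the command line being used. A sketch of the GCC flag that typically downgrades such nonconforming-code diagnostics from errors to warnings (assuming a mainline g++):

        g++ -fpermissive main.cpp

    The conforming alternative is to make each conversion explicit, e.g. static_cast<MyEnum>(i) or static_cast<MyClass*>(p).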

    Read the article

  • Java compiler rejects variable declaration with parameterized inner class

    - by Johansensen
    I have some Groovy code which works fine in the Groovy bytecode compiler, but the Java stub generated by it causes an error in the Java compiler. I think this is probably yet another bug in the Groovy stub generator, but I really can't figure out why the Java compiler doesn't like the generated code. Here's a truncated version of the generated Java class:

        @groovy.util.logging.Log4j()
        public abstract class AbstractProcessingQueue<T> extends nz.ac.auckland.digitizer.AbstractAgent implements groovy.lang.GroovyObject {
            protected int retryFrequency;
            protected java.util.Queue<nz.ac.auckland.digitizer.AbstractProcessingQueue.ProcessingQueueMember<T>> items;

            public AbstractProcessingQueue(int processFrequency, int timeout, int retryFrequency) {
                super((int)0, (int)0);
            }

            private enum ProcessState implements groovy.lang.GroovyObject {
                NEW, FAILED, FINISHED;
            }

            private class ProcessingQueueMember<E> extends java.lang.Object implements groovy.lang.GroovyObject {
                public ProcessingQueueMember(E object) {}
            }
        }

    The offending line in the generated code is this:

        protected java.util.Queue<nz.ac.auckland.digitizer.AbstractProcessingQueue.ProcessingQueueMember<T>> items;

    which produces the following compile error:

        [ERROR] C:\Documents and Settings\Administrator\digitizer\target\generated-sources\groovy-stubs\main\nz\ac\auckland\digitizer\AbstractProcessingQueue.java:[14,96]
        error: improperly formed type, type arguments given on a raw type

    The column index of 96 in the compile error points to the <T> parameterization of the ProcessingQueueMember type. But ProcessingQueueMember is not a raw type as the compiler claims, it is a generic type:

        private class ProcessingQueueMember<E> extends java.lang.Object implements groovy.lang.GroovyObject { ...

    I am very confused as to why the compiler thinks that the type Queue<ProcessingQueueMember<T>> is invalid. The Groovy source compiles fine, and the generated Java code looks perfectly correct to me too. What am I missing here? Is it something to do with the fact that the type in question is a nested class? (In case anyone is interested, I have filed a bug report relating to the issue in this question.)

    Edit: Turns out this was indeed a stub compiler bug: this issue is now fixed in 1.8.9, 2.0.4 and 2.1, so if you're still having this issue just upgrade to one of those versions. :)
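    Although the stub generator is at fault here, javac's complaint reflects a real JLS rule worth knowing: ProcessingQueueMember is an inner (non-static) class of the generic AbstractProcessingQueue, and an inner class reached through a raw reference to its outer class is itself raw, so it may not be given type arguments. A sketch with hypothetical names:

        class Outer<T> {
            class Inner<E> {}

            // rejected: Outer is raw here, so Outer.Inner is raw too, and
            // "type arguments given on a raw type" follows
            // Outer.Inner<String> bad;

            // accepted: the outer type is parameterized
            Outer<T>.Inner<String> ok;
        }

    A correct stub would have had to emit Queue<AbstractProcessingQueue<T>.ProcessingQueueMember<T>> (or make the nested class static).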

    Read the article

  • Eclipse compiler with APT

    - by westcam
    What is the correct way to call the Eclipse compiler with an APT processor from Java? I am using the following Maven dependency for the compiler:

        <dependency>
            <groupId>org.eclipse.jdt.core.compiler</groupId>
            <artifactId>ecj</artifactId>
            <version>3.5.1</version>
        </dependency>

    I want to test an APT processor with the Eclipse compiler in addition to javac.
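    One possible sketch: ecj ships an implementation of the javax.tools.JavaCompiler interface, so it can be driven like javac and handed the standard annotation-processing options. The EclipseCompiler class name comes from ecj's internal tool package, and the processor class and source path below are hypothetical:

        import javax.tools.JavaCompiler;
        import org.eclipse.jdt.internal.compiler.tool.EclipseCompiler;

        public class EcjAptDemo {
            public static void main(String[] args) {
                JavaCompiler compiler = new EclipseCompiler();
                int result = compiler.run(null, null, null,
                        "-proc:only",                             // run processors without emitting class files
                        "-processor", "com.example.MyProcessor",  // hypothetical APT processor
                        "src/main/java/com/example/Foo.java");    // hypothetical source file
                System.out.println(result == 0 ? "OK" : "failed");
            }
        }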

    Read the article

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don’t always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add in functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something not originally intended for an RDBMS to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us “lazy” in our design – we sometimes don’t take the time to learn another architecture because the one we’ve spent so much time with can handle what we want to do.

    But there’s a distinct danger here. In nature, when a population shares too many of the same traits, it can cause a complete collapse if a situation exploits a weakness shared by that population. The same is true with not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you’re in an architect role or not.

    So take some time today to learn something new. The way I do this is to select a given problem, and try to solve it with a technology I’m not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I’m not suggesting any of these architectures are the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the “little grey cells” and help you and your organization understand what is open to you.

    And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • database independent coding framework options?

    - by statirasystems
    Background: I have not programmed in a while besides doing VBA and a little VB.NET, so please forgive my language use; I'm green and have a head cold. I am reading all I can now, but I have no programming circles to draw from. The information I am providing is to help guide you to what I am looking for; I am not confident I can ask the question properly.

    Story: I have four different projects that I am starting. Obviously I won't be working on all at the same time; however, they each will have similar needs and be interrelated. They are as follows:

    - Desktop Environment / System User Interface: basically a product that runs on major computers via Mono or .NET and unifies the look and functions. In the context of the upcoming question, it would be able to directly access data of various types. It would work in tandem with my office suite, system manager, and network application framework.

    - Office Suite: technically it would not be a suite, since I will be doing it from one interface, except for the Communications Application. As far as the question goes, it will need to be able to link to various data sources for storing files and for using, manipulating, and presenting information.

    - System Manager: an intelligent system to manage and administer the entire network and all equipment. As far as the question goes, it needs to be able to access data for archiving and for accessing its own settings stored in various formats, SQL or XML.

    - Network Application Framework: a complete system that can be used for ERP, CRM, CMS, errata, file management, and so on. As to the question, it needs to be able to access its own data or interlink with existing applications.

    Requirement: C#; simplifies and reduces coding; can use the same code to access different databases (i.e. MySQL, MS SQL, Access, XML, ...); Mono would be nice but not a must.

    Question: What libraries, frameworks, or other options would be able to help with this? Is there a good resource to guide me? I don't want arguing over what is best, just information to help me further understand and make an educated decision.
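    One building block that matches the "same code, different databases" requirement is ADO.NET's provider-factory model, which both .NET and Mono ship. A minimal sketch; the provider name can be swapped per database, and the connection string is a placeholder:

        using System;
        using System.Data.Common;

        class ProviderDemo
        {
            static void Main()
            {
                // swap the invariant name to target a different database
                DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = "..."; // placeholder
                    conn.Open();
                    DbCommand cmd = conn.CreateCommand();
                    cmd.CommandText = "SELECT 1";
                    Console.WriteLine(cmd.ExecuteScalar());
                }
            }
        }

    ORMs such as NHibernate or the ADO.NET Entity Framework build the same idea into a higher-level, database-independent layer.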

    Read the article

  • Varnish: Running VCC-compiler failed on purge

    - by FLX
    I've been following this guide, which uses this default.vcl. However, when starting Varnish I get the following error:

        * Starting HTTP accelerator    [fail]
        storage_malloc: max size 1024 MB.
        Message from VCC-compiler:
        Expected '(' got ';'
        (program line 341), at
        (input Line 43 Pos 22)
                        purge;
        ---------------------#
        Running VCC-compiler failed, exit 1
        VCL compilation failed

    Which means that there is something wrong with purge here:

        sub vcl_hit {
            if (req.request == "PURGE") {
                purge;
                error 200 "Purged.";
            }
        }

    I don't see anything wrong; can someone explain? Thanks!
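    One likely explanation: the bare purge; statement is Varnish 3.0 VCL syntax, while "Expected '(' got ';'" suggests the installed Varnish is a 2.x release, whose compiler expects purge(...) with an argument list. A sketch of the classic 2.x way to purge on a cache hit, assuming that version mismatch is the cause:

        sub vcl_hit {
            if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
            }
        }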

    Read the article

  • How can I define a clojure type that implements the servlet interface?

    - by Rob Lachlan
    I'm attempting to use deftype (from the bleeding-edge Clojure 1.2 branch) to create a Java class that implements the Java Servlet interface. I would expect the code below to compile (even though it's not very useful):

        (ns foo
          [:import [javax.servlet Servlet ServletRequest ServletResponse]])

        (deftype servlet []
          javax.servlet.Servlet
          (service [this #^javax.servlet.ServletRequest request
                         #^javax.servlet.ServletResponse response]
            nil))

    But it doesn't compile. The compiler produces the message:

        Mismatched return type: service, expected: void, had: java.lang.Object
          [Thrown class java.lang.IllegalArgumentException]

    Which doesn't make sense to me, because I'm returning nil. So the fact that the return type of the method is void shouldn't be a problem. For instance, for the java.util.Set interface:

        (deftype bar [#^Number n]
          java.util.Set
          (clear [this] nil))

    compiles without issue. So what am I doing wrong with the Servlet interface? To be clear: I know that the typical case is to subclass one of the servlet abstract classes rather than implement this interface directly, but it should still be possible to do this.

    Stack trace: the stack trace for the (deftype servlet... is:

        Mismatched return type: service, expected: void, had: java.lang.Object
          [Thrown class java.lang.IllegalArgumentException]

        Restarts:
          0: [ABORT] Return to SLIME's top level.

        Backtrace:
          0: clojure.lang.Compiler$NewInstanceMethod.parse(Compiler.java:6461)
          1: clojure.lang.Compiler$NewInstanceExpr.build(Compiler.java:6119)
          2: clojure.lang.Compiler$NewInstanceExpr$DeftypeParser.parse(Compiler.java:6003)
          3: clojure.lang.Compiler.analyzeSeq(Compiler.java:5289)
          4: clojure.lang.Compiler.analyze(Compiler.java:5110)
          5: clojure.lang.Compiler.analyze(Compiler.java:5071)
          6: clojure.lang.Compiler.eval(Compiler.java:5347)
          7: clojure.lang.Compiler.eval(Compiler.java:5334)
          8: clojure.lang.Compiler.eval(Compiler.java:5311)
          9: clojure.core$eval__4350.invoke(core.clj:2364)
          10: swank.commands.basic$eval_region__673.invoke(basic.clj:40)
          11: swank.commands.basic$eval_region__673.invoke(basic.clj:31)
          12: swank.commands.basic$eval__686$listener_eval__687.invoke(basic.clj:54)
          13: clojure.lang.Var.invoke(Var.java:365)
          14: foo$eval__2285.invoke(NO_SOURCE_FILE)
          15: clojure.lang.Compiler.eval(Compiler.java:5343)
          16: clojure.lang.Compiler.eval(Compiler.java:5311)
          17: clojure.core$eval__4350.invoke(core.clj:2364)
          18: swank.core$eval_in_emacs_package__320.invoke(core.clj:59)
          19: swank.core$eval_for_emacs__383.invoke(core.clj:128)
          20: clojure.lang.Var.invoke(Var.java:373)
          21: clojure.lang.AFn.applyToHelper(AFn.java:169)
          22: clojure.lang.Var.applyTo(Var.java:482)
          23: clojure.core$apply__3776.invoke(core.clj:535)
          24: swank.core$eval_from_control__322.invoke(core.clj:66)
          25: swank.core$eval_loop__324.invoke(core.clj:71)
          26: swank.core$spawn_repl_thread__434$fn__464$fn__465.invoke(core.clj:183)
          27: clojure.lang.AFn.applyToHelper(AFn.java:159)
          28: clojure.lang.AFn.applyTo(AFn.java:151)
          29: clojure.core$apply__3776.invoke(core.clj:535)
          30: swank.core$spawn_repl_thread__434$fn__464.doInvoke(core.clj:180)
          31: clojure.lang.RestFn.invoke(RestFn.java:398)
          32: clojure.lang.AFn.run(AFn.java:24)
          33: java.lang.Thread.run(Thread.java:637)

    Read the article

  • New security options in UCM Patch Set 3

    - by kyle.hatlestad
    While the Patch Set 3 (PS3) release was mostly focused on bug fixes and such, some new features sneaked in there. One of those new features is to the security options. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. And it was even possible to enable these ACLs without having the Collaboration Manager component enabled (see technote# 603148.1).

    When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they're implemented slightly differently. And there is a new component available which adds an additional dimension to define security on the object: Roles. So now instead of selecting individual users or groups of users (defined as an Alias in User Admin), you can select a particular role, and if a user has that role, they are granted that level of access. This can allow for a much more flexible and manageable security model, instead of trying to manage with just user and group access as people come and go in the organization.

    The way that it is enabled is still through configuration entries. First log in as an administrator and go to Administration -> Admin Server. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component. Then click the General Configuration link on the left. In the Additional Configuration Variables text area, enter the new configuration values:

        UseEntitySecurity=true
        SpecialAuthGroups=<comma-separated list of Security Groups to honor ACLs>

    The SpecialAuthGroups should be a list of Security Groups that honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored. Save the settings and restart the instance. Upon restart, three new metadata fields will be created: xClbraUserList, xClbraAliasList, xClbraRoleList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. For Users and Groups, these values are automatically picked up from the corresponding database tables. In the case of Roles, this is an explicitly defined list of choices that are made available. These values must match the role that is being defined in WebLogic Server or your LDAP/AD repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added through the view, they will be available to select from for the Roles Access List.

    As for how they are stored in the metadata fields, each entry starts with its identifier: ampersand (&) for users, "at" (@) for groups, and colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma, so if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
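    Following that scheme, a populated set of access lists might look like this (user, group, and role names here are hypothetical):

        xClbraUserList=&jsmith(RWDA),&mlee(RW)
        xClbraAliasList=@Marketing(RW)
        xClbraRoleList=:authenticated(R)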

    Read the article

  • Options for different domain and hosting

    - by Carl
    The situation: I have a hosting service (one.com) on which I have installed a wordpress.org site in a subdirectory 'wordpress': myhost.com/wordpress/. (myhost.com is actually my own domain, but it already has contents and I don't want wordpress/ to appear in the root of that domain.) I want to use a second domain for this site. Thinking I would be able to forward to the wordpress site without problems, I registered the domain at GoDaddy.com: mydomain.com.

    What I want: when my visitors type in mydomain.com, I want them to see the contents of myhost.com/wordpress/, and the same for all subpages (mydomain.com/a/subpage fetches from myhost.com/wordpress/a/subpage). Just a redirect isn't enough; I want my visitors to see only mydomain.com as their domain.

    Some notes: If I set up forwarding with URL masking at GoDaddy, they just give a full frame pointing to myhost.com/wordpress/. This isn't good enough for me, since only mydomain.com will show up in the address bar, even for subpages (I want mydomain.com/a/subpage to show in the address bar for a subpage). I believe this could in principle be done with a .htaccess file with URL rewriting, but I have no hosting with GoDaddy so I can't upload such a file there, and hosting with GoDaddy is very expensive (of course), so I don't want to do that. I don't think I can use DNS settings; the host of mydomain.com says they don't allow anyone else to point to their name servers. If possible, I wouldn't want to re-install the wordpress site; it would take quite some time, and I'd prefer to keep it at myhost.com/wordpress/. Anything involving transferring the domain is supposed to take 5-7 working days, and I would need my site up and running earlier than that, so I'd like to avoid it if possible.

    Am I locked in? As it seems, I am rather locked in with GoDaddy. I can't use the domain with .htaccess since I can't upload such a file (and won't pay for hosting by GoDaddy). I can't use any of their forwarding options since none of them do what I want (one just forwards; the one that masks the URL does it with frames). Would you agree?

    Possible solutions:

    1. Transfer the domain to any hosting service with reasonable pricing, as opposed to GoDaddy (I'd probably use one.com, the same host as for myhost.com, in that case), and there either re-install wordpress on the new account, or use .htaccess with URL rewriting on the new account to fetch the contents from myhost.com/wordpress/. Can this be set up to work with subpages as well, so that visitors won't ever see "myhost.com/wordpress", just "mydomain.com"? I.e., mydomain.com/a/subpage/ would fetch from myhost.com/wordpress/a/subpage/?

    2. This might be a long shot, but: find some (preferably free) hosting that allows pointing to their nameservers, make DNS settings at GoDaddy so that my domain points at that site, and there put a .htaccess file with URL rewriting to forward to myhost.com/wordpress/. Could this be possible? What services could I use in that case? As I see it, this would be the only way to avoid both transferring the domain (taking 5-7 working days) and re-installing the wordpress site.

    Sorry for the long question. All info and ideas are welcome.
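    For the first solution, with mydomain.com pointed at the same one.com account that already holds the files, a rewrite sketch in the web root's .htaccess could look like this (assuming Apache with mod_rewrite enabled; domain names as in the question):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/wordpress/
        RewriteRule ^(.*)$ /wordpress/$1 [L]

    Because the rewrite is internal (no redirect flag), the address bar keeps showing mydomain.com/a/subpage while the content is served from /wordpress/a/subpage. WordPress's own "Site address (URL)" setting would also need to be http://mydomain.com so that generated links match.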

    Read the article

  • Developer Training – Various Options for Maximum Benefit – Part 4

    - by pinaldave
    Developer Training – Importance and Significance – Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary – Part 5

    If you have been reading this series, by now you are aware of all the pros and cons that can come along with training. We’ve asked and answered hard questions, and investigated the “whys” and “hows” of training. Now it is time to talk about all the different kinds of training that are out there!

    On Job Training: The most common type of training is on the job training. Everyone receives this kind of education – even experts who come in to consult have to be taught where the printer, pens, and copy machines are. If you are thinking about more concrete topics, though, on the job training can be some of the easiest to come across. Picture this: someone in the company whom you really admire is hard at work on a project. You come up to them and ask to help them out – if they are a busy developer, the odds are that they will say “yes, please!” If you phrase your question as an offer of help, you can receive training without ever putting someone in the awkward position of acting as a mentor. However, some people may want the task of being a mentor, and it can never hurt to ask. Most people will be more than willing to pass their knowledge along.

    Extreme Programming: If your company and coworkers are willing, you can even investigate Extreme Programming. This is a type of programming that allows small teams to quickly develop code and products that are released with almost immediate user feedback. You can find more information at http://www.extremeprogramming.org/. If this is something your company could use, suggest it to your supervisor. Even if they say no, it will make it clear that you are a go-getter who is interested in new and exciting projects. If the answer is yes, then you have the opportunity to get some of the best on the job training around.

    In Person Training: When you say the word “training,” most people’s minds go back to the classroom, an image they are familiar with. While training doesn’t always have to be in a traditional setting, because it is so familiar it can also be the most valuable type of training. There are many ways to get training through a live instructor. Some companies may be willing to send a representative to you, where employees will get training, sometimes food and coffee, and a live instructor who can answer questions immediately. Sometimes these trainers are also able to do consultations at the same time, which can be invaluable to a company. If you are the one who asks your supervisor for a training session that can also be turned into a consultation, you may stick in their minds as an incredibly dedicated employee. If you can’t find a representative, local colleges can also be a good resource for free or cheap classes – or they may have representatives coming who are willing to take on a few more students.

    Benefits of On Demand Developer Training: Of course, you can often get the best of all these types of training with online or On Demand training. You can get the benefit of a live instructor who is willing to answer questions (although in this case, usually through e-mail or other online venues), there are often real-world examples to follow along with – like on the job training – and best of all you can learn whenever you have the time or need. Did a problem with your server come up at midnight when all your supervisors are safe at home and probably in bed? No problem! On Demand training is especially useful if you need to slow down, pause, or rewind a training session. Not even a real-life instructor can do that!

    When I was writing this blog post, I felt that each of the subjects I have covered could be a blog post of its own. However, I wanted to keep the blog post concise, and so touched on three major training aspects: 1) On Job Training, 2) In Person Training, and 3) Online Training. Here is the question for you: is there any other kind of training method available which is effective and one should consider? If yes, what are they? I may write a follow-up blog post on the same subject next week.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Using Hadoop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx; I’ll cover the options at a general level below.)

    First Option: Windows Azure HDInsight Service. Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post.) This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection. You can read more about this option here: http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx

    Second Option: Microsoft HDInsight Server. Your second option is to use the Hadoop distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure. This option is useful if you want to have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx

    Third Option (unsupported): Installation on Windows Azure Virtual Machines. Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that. Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW

    Choosing the right option: since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense. My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh, and there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx

    Read the article

  • How to make php closure compiler write to a predefined file name?

    - by Mohammad
    I'm quite embarrassed to ask this, but I've been trying to figure it out all day with no luck. In http://code.google.com/p/php-closure/source/browse/trunk/php-closure.php, on line 172 the write() function gets the MD5-encoded name via the _getHash() function on line 276. I was wondering how I could alter this code to write the compiled code to a predefined name like compiled_code.js. I've tried altering the _getCacheFileName() function on line 272 to this:

        function _getCacheFileName() {
            //return $this->_cache_dir . $this->_getHash() . ".js";
            return 'compiled_code.js';
        }

    without any results. Thanks to all of you in advance!
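    One hedged guess at why the edit shows no effect: returning a bare file name drops the cache directory, so the file is written relative to the working directory rather than where you are looking. A sketch that keeps the directory; _cache_dir is the property the original line already uses, while the subclass and file names here are assumptions:

        <?php
        require_once 'php-closure.php';

        class FixedNameClosure extends PhpClosure {
          function _getCacheFileName() {
            return $this->_cache_dir . 'compiled_code.js';
          }
        }

    Subclassing also avoids editing the library in place. Note that the hash doubles as a cache key, so after switching to a fixed name it may help to delete any previously cached output to force a recompile.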

    Read the article

  • Does the compiler provide extra stack space for byte-spilling?

    - by xuwicha
    From the sample code below (which I got here), I don't understand why the values of registers are moved to specific parts of the stack when byte-spilling is performed:

        pushq   %rbp
        movq    %rsp, %rbp
        subq    $96, %rsp
        leaq    L__unnamed_cfstring_23(%rip), %rax
        leaq    L__unnamed_cfstring_26(%rip), %rcx
        movl    $42, %edx
        leaq    l_objc_msgSend_fixup_alloc(%rip), %r8
        movl    $0, -4(%rbp)
        movl    %edi, -8(%rbp)
        movq    %rsi, -16(%rbp)
        movq    %rax, -48(%rbp)   ## 8-byte Spill
        movq    %rcx, -56(%rbp)   ## 8-byte Spill
        movq    %r8, -64(%rbp)    ## 8-byte Spill
        movl    %edx, -68(%rbp)   ## 4-byte Spill
        callq   _objc_autoreleasePoolPush
        movq    L_OBJC_CLASSLIST_REFERENCES_$_(%rip), %rcx
        movq    %rcx, %rdi
        movq    -64(%rbp), %rsi   ## 8-byte Reload
        movq    %rax, -80(%rbp)   ## 8-byte Spill
        callq   *l_objc_msgSend_fixup_alloc(%rip)
        movq    L_OBJC_SELECTOR_REFERENCES_27(%rip), %rsi
        movq    %rax, %rdi
        movq    -56(%rbp), %rdx   ## 8-byte Reload
        movl    -68(%rbp), %ecx   ## 4-byte Reload

    Also, I don't know what the purpose of byte-spilling is, since the program logic could still be achieved if the called function were the one saving the values of the registers it uses. I really have no idea why this is happening. Please help me understand this.
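    A sketch of what appears to be going on in the listing: rax, rcx, rdx, and r8 are caller-saved in the x86-64 System V ABI, so their contents do not survive the callq to _objc_autoreleasePoolPush. The compiler therefore parks ("spills") them in the stack frame it reserved up front with subq $96, %rsp, and reloads them after the call. A tiny C analogue of the same pattern (function names hypothetical):

        extern void g(void);

        long f(long a, long b)
        {
            long x = a + 1;   /* x sits in a caller-saved register        */
            g();              /* the call may clobber that register...    */
            return x + b;     /* ...so x is stored to a stack slot before
                                 the call and reloaded after it           */
        }

    The alternative the question suggests, having the callee save everything, is exactly what callee-saved registers (rbx, rbp, r12-r15) are for; the ABI splits the register set so that neither side has to save more than it actually uses.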

    Read the article

  • Why is the compiler caching my "random" and NULLED variables?

    - by alex gray
    I am confounded by the fact that even using different programs (on the same machine) to compile and run, and after nilling the values (before and after) the function, NO MATTER WHAT, I'll keep getting the SAME "random" numbers each and every time I run it. I swear this is NOT how it's supposed to work. I'm going to illustrate as simply as is possible:

        #import <Foundation/Foundation.h>

        int main(int argc, char *argv[]) {
            int rPrimitive = 0;
            rPrimitive = 1 + rand() % 50;
            NSNumber *rObject = nil;
            rObject = [NSNumber numberWithInt:rand() % 10];
            NSLog(@"%i %@", rPrimitive, rObject);
            rPrimitive = 0;
            rObject = nil;
            NSLog(@"%i %@", rPrimitive, rObject);
            return 0;
        }

    Run it in TextMate (i686-apple-darwin11-llvm-gcc-4.2):

        8 9
        0 (null)

    Run it in CodeRunner (i686-apple-darwin11-llvm-gcc-4.2):

        8 9
        0 (null)

    Run it a million times, if you'd like. You can guess what it will always be. Why does this happen? Why oh why is this "how it is"?
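    This isn't the compiler caching anything: rand() is a deterministic pseudo-random generator, and a process that never seeds it always starts from the same default state, so every run produces the same sequence. A sketch of the usual fixes, either seeding once per run or using an OS generator that needs no seed:

        #import <Foundation/Foundation.h>
        #include <time.h>

        int main(int argc, char *argv[]) {
            srand((unsigned)time(NULL));            // seed once per run
            int r = 1 + rand() % 50;
            // On Apple platforms, arc4random_uniform() needs no seeding:
            // int r = 1 + (int)arc4random_uniform(50);
            NSLog(@"%d", r);
            return 0;
        }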

    Read the article
