Search Results

Search found 33424 results on 1337 pages for 'natural language process'.

Page 526/1337

  • ASP.NET 3.5 Functions and Subroutines

    The most basic ASP.NET 3.5 server-side scripts that I've covered using the Visual Basic programming language are not modular in nature. This means that an ASP.NET 3.5 server will interpret the scripts in the Visual Basic file (e.g. Default.aspx.vb) from top to bottom. In most real-world ASP.NET websites that use Visual Basic, however, web developers structure their programs in modules. This article will give you information about subroutines and functions, along with practical examples and their advantages.

    Read the article

  • Animation file format

    - by Paul
    I'm trying to make a simple 2D animation file format. It'll be very rudimentary: only an XML file containing some parameters (such as frame duration) and metadata, and some images, each representing a frame. I'd like to have the whole animation (frames and XML document) packed in a single file. How do you suggest I do that? What libraries are there that would allow easy access to the files inside the animation file itself? The language I'm using is C++ and the platform is Windows, but I'd rather not use a platform-dependent library, if possible.

    Read the article

  • Is Java's easy decompilation a factor worth considering?

    - by Sandra G
    We are considering the programming language for a desktop application with extensive GUI use (tables, windows) and heavy database use. We considered Java; however, the fact that it can be decompiled back into source code very easily is holding us back. There are of course many obfuscators available, but they are just that: obfuscators. The only obfuscation worth doing that we got was stripping function and variable names down to meaningless letters and numbers, so that at least stealing the code and renaming it back into something meaningful is too much work, and we are 100% sure it is not reversible in any automated way. However, when it comes to protecting internals (like password hashes or sensitive variable contents), we found obfuscators really lacking. Is there any way to make Java applications as hard to decompile as their .exe counterparts? And is this a factor to consider when deciding whether to develop a desktop application in Java?
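
    As a concrete illustration of the point about protecting internals (a hedged sketch, not part of the original question): renaming identifiers does nothing for data that is compiled into the class files. String literals live in the constant pool, so a hard-coded secret stays readable in any decompiler no matter how aggressively names are obfuscated. The class and field names below are hypothetical.

        // Hypothetical example: an obfuscator can rename the class and the field,
        // but the literal "s3cr3t-key" sits in the class file's constant pool and
        // shows up verbatim in a decompiler or in `javap -c ApiClient` output.
        public class ApiClient {
            private static final String API_SECRET = "s3cr3t-key"; // survives obfuscation as-is

            public static void main(String[] args) {
                System.out.println("Connecting with a key of length " + API_SECRET.length());
            }
        }

    This is why the usual advice is to keep secrets out of the shipped binary entirely (for example, derive or fetch them at runtime) rather than relying on the obfuscator.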

    Read the article

  • What IDEs are available for Ubuntu?

    - by Roland Taylor
    This is a community wiki for IDEs available on Ubuntu. Please post one IDE per answer, including more than just a screenshot or a link (at least put a short description). In your answer, tell us what the IDE is for (which language(s) it targets, or whether it is RAD-capable).

    Read the article

  • Are JSP and Java still relevant?

    - by dyyyy
    I've been working so long in Java and JSP that for me it's a no-brainer to use them. But now, as I'm starting to build my own framework for web applications, I'm wondering if JSP is the right choice. Seeing how popular PHP is (as well as other languages such as Ruby and Python), is JSP still a relevant language? Does it have any clear advantage over other languages? I don't want to use it just because I know it better. So please, considering I know nothing, is there a reason to use JSP and Java? Thank you.

    Read the article

  • How could there still not be a mysqldb module for Python 3?

    - by itsadok
    This SO question is now more than two years old. MySQL is an incredibly popular database engine, Python is an incredibly popular programming language, and Python 3 was officially released two years ago, and was available even before that. What's more, the whole mysqldb module is just a layer translating Python's DB-API to MySQL's API. It's not that big of a library. I must be missing something here. How come almost* nobody in the entire open source community has spent the (I'm guessing) two weeks it takes to port this lib? Is Python 3 that unpopular? Is the combination of Python and MySQL not as common as I assume? Or maybe it's just a lot harder to port mysqldb than I assume? Anyone know the inside story on this?
    * Now I see that this guy has done it, which takes some of the wind out of my question, but it still seems too little and too late to make sense.

    Read the article

  • How to set Monday as the first day of the week in GNOME Calendar applet?

    - by Jonik
    What is the recommended way to change the first day of the week to Monday (instead of Sunday, as in the screenshot below)? I couldn't find anything related in Clock Preferences, nor in System - Preferences or System - Administration. This probably has something to do with tweaking locales, so here's (possibly relevant) output from locale:

        LANG=en_US.utf8
        LC_CTYPE="en_US.utf8"
        LC_NUMERIC="en_US.utf8"
        LC_TIME="en_US.utf8"
        ...
        LC_ALL=

    NB: I want to keep English as the UI language both in GNOME and on the command line. Dates are currently displayed like this (e.g. in ls -l): 2010-10-06 15:32, and I also want to keep that as it is.

    Read the article

  • Lost Traffic from Google Because of Adding a Meta Tag

    - by Marian
    I have a site, aroundnails.com. It has an English version on the subdomain en.aroundnails.com. Reading about language-related meta tags for Google, I placed such a tag on the main page of the main site:

        <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />

    In this way I tried to tell Google that the site at en.aroundnails.com is the English version of the main site, not a duplicate. After a fortnight I lost a huge part of my traffic from Google, more than half. At the beginning of September I removed this tag, but traffic remained at the same level. I hope somebody can help me solve this issue.

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read the comments as well, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last two years I have worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about missing features in Tabular compared to Multidimensional), but I strongly disagree with others. In general, Tabular is a good choice for a new project when:
    - the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn: not as easy as Microsoft sells it, but definitely easier than MDX)
    - you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem)
    - there are important calculations based on distinct count measures
    - there are complex calculations based on many-to-many relationships
    Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There would have to be very important reasons for that, such as performance issues with distinct count and many-to-many relationships that cannot easily be solved by optimizing the Multidimensional model, but I have still never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market depending on the skills of the development team. So it's not strange that those who are used to Multidimensional are not moving to Tabular, since they don't get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd also like to have Excel Power View enabled for this scenario (this should be just a question of time). Another scenario in which I'm seeing a growing adoption of Tabular is in companies that create models for their product/service and do so by using XMLA or Tabular AMO 2012. I usually call them ISVs, even if those providing services cannot really be defined that way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I'd like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012. Thanks!

    Read the article

  • Unable to install files with apt-get: "unable to locate package"

    - by Ben Casling
    I'm having issues with my Ubuntu Server 12.04 installation on an HP550 laptop. When I try sudo apt-get install <programname>, e.g. apache2, it will not work, saying E: Unable to locate package apache2. I have tried to look at and edit the sources, but that does not work either; the gedit command is broken too (I am trying gedit /etc/apt/sources.list, for those wondering). Is this a case of the computer's network not being configured properly? It downloaded a language pack easily enough during the installation, though. How do I fix this? A prompt reply would be appreciated.

    Read the article

  • How does throwing an ArgumentNullException help?

    - by Scott Whitlock
    Let's say I have a method:

        public void DoSomething(ISomeInterface someObject)
        {
            if (someObject == null)
                throw new ArgumentNullException("someObject");

            someObject.DoThisOrThat();
        }

    I've been trained to believe that throwing the ArgumentNullException is "correct" but an "Object reference not set to an instance of an object" error means I have a bug. Why? I know that if I was caching the reference to someObject and using it later, then it's better to check for nullity when passed in, and fail early. However, if I'm dereferencing it on the next line, why are we supposed to do the check? It's going to throw an exception one way or the other. Edit: It just occurred to me... does the fear of the dereferenced null come from a language like C++ that doesn't check for you (i.e. it just tries to execute some method at memory location zero + method offset)?
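
    For comparison only (this is not part of the original question): the same fail-fast idiom exists in Java as java.util.Objects.requireNonNull, and the trade-off being debated is identical - an explicit check fails at the method boundary with a message naming the parameter, while letting the dereference happen later produces a generic NullPointerException further from the call site. A minimal, hypothetical sketch:

        import java.util.Objects;

        public class Greeter {
            // Explicit check: fails at the boundary and names the offending parameter.
            static String greet(String name) {
                Objects.requireNonNull(name, "name");
                return "Hello, " + name.toUpperCase(); // would also throw NPE here, but without the hint
            }

            public static void main(String[] args) {
                System.out.println(greet("world"));
                greet(null); // NullPointerException with message "name"
            }
        }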

    Read the article

  • Is there a good book to grok C++?

    - by Paperflyer
    This question got me thinking. I would say I am a pretty experienced C++ programmer. I use it a lot at work, I had some courses on it at university, and I can understand most C++ code I find out there without problems. Other languages you can pretty much learn by using them. But every time I use a new C++ library or check out some new C++ code by someone I did not know before, I discover a new set of idioms C++ has to offer. Basically, this has led me to believe that there is a lot of stuff in C++ that might be worth knowing but that is not easily discoverable. So, is there a good book for a somewhat experienced C++ programmer to step up their game? You know, to kind of 'get' the language the way you can 'get' Ruby or Objective-C, where everything just suddenly makes sense and you start instinctively knowing 'the C++ way of doing things'?

    Read the article

  • How to explain OOP concepts to a non technical person?

    - by John
    I often try to avoid telling people I'm a programmer because most of the time I end up explaining to them what that really means. When I tell them I'm programming in Java they often ask general questions about the language and how it differs from x and y. I'm also not good at explaining things, because 1) I don't have that much experience in the field and 2) I really hate explaining things to non-technical people. They say that you truly understand things once you explain them to someone else; in this case, how would you explain OOP terminology and concepts to a non-technical person?

    Read the article

  • If you had three months to learn one relatively new technology, which one would you choose?

    - by Ivo van der Wijk
    This question was taken from CodingHorror. On my list would be (and some actually are):
    - Android development (and possibly iPhone development)
    - the Go language and its concurrency
    - NoSQL, specifically CouchDB
    - RCTK, which happens to be my own idea/project (but all ideas have been thought of already; what matters is my implementation)
    But I don't think I'm being cutting-edge or thinking outside the box here. What's on your list? Please don't restrict yourself to the list above - that's my list. I'm interested in hearing what new technologies others find interesting.

    Read the article

  • Need Japanese IME with Dvorak keyboard with 13.10

    - by user916792
    I recently did a fresh install of 13.10 and enabled Japanese language support. In the past this seemed to work seamlessly with my other keyboard settings when I set up my Dvorak keyboard. But this time, when I toggle the IME on, it uses the standard QWERTY layout. Typing English uses the Dvorak layout as I expect. This just worked for me in the past, to the point where I stopped paying attention to what the underlying IME is, and I don't recall any special steps other than choosing my keyboard layout and enabling Japanese. Any help would be appreciated!

    Read the article

  • Are There Realistic/Useful Solutions for Source Control for Ladder Logic Programs

    - by Steven A. Lowe
    Version control for ladder logic (LL) programs for programmable logic controllers (PLCs) seems to be virtually non-existent. It may be because LL is a visual language and tends to be stored in binary files, or it may be because source code control hasn't "caught on" in process control engineering circles - or perhaps my Google-Fu is weak tonight. Do you know of any realistic and useful solutions for version control for such systems? Definitions:
    - realistic = changes to the programs are tracked by user and subject to reversion and merges
    - useful = the system integrates with visual LL designers, is not limited to LL from a single PLC manufacturer, and does not cost a ridiculous amount of money
    Note: I have heard of people using SVN or Mercurial et al. to track the binary files, but I don't think the diff/merge capabilities would display readable differences.

    Read the article

  • Version Changes: How considerable are the compatibility issues in project?

    - by Aditya P
    For example, if we consider ActionScript 2.0 (based on objects, but the programming does not make much use of OOP) vs 3.0 (highly OOP), it's like a whole new scripting language in terms of approach, programming style and features - you get the idea. In PHP we can see current versions going from 3 to 5 (brief version changes).
    - Developers who work in PHP: is it easy to migrate from version to version?
    - Are there any extensive compatibility issues, forward or backward?
    - Does your project stick to a particular version until the end?
    - Does the programming style and approach change from version to version?
    - If you had to get started with PHP to contribute to a project built on earlier versions, would learning the latest version be counterproductive to this aim?
    Some related topics I had come across on SE: How should I be keeping track of php script version/changes? What is happening to PHP 6? It would be really helpful if you could address these questions directly.

    Read the article

  • Token based Authentication and Claims for Restful Services

    - by Your DisplayName here!
    WIF as it exists today is optimized for web applications (passive/WS-Federation) and SOAP-based services (active/WS-Trust). While there is limited support for WCF WebServiceHost based services (for standard credential types like Windows and Basic), there is no ready-to-use plumbing for RESTful services that do authentication based on tokens. This is not an oversight by the WIF team, but the REST services security world is currently changing rapidly – and that's by design. There are a number of intermediate solutions, emerging protocols and token types, as well as some already deprecated ones. So it didn't make sense to bake that into the core feature set of WIF. But after all, the F in WIF stands for Foundation. So just like the WIF APIs integrate tokens and claims into other hosts, this is also (easily) possible with RESTful services. Here's how.

    HTTP Services and Authentication
    Unlike SOAP services, in the REST world there is no (over)specified security framework like WS-Security. Instead, standard HTTP means are used to transmit credentials, and SSL is used to secure the transport and data in transit. In most cases the HTTP Authorization header is used to transmit the security token (this can be as simple as a username/password up to issued tokens of some sort). The Authorization header consists of the actual credential (consider this opaque from a transport perspective) as well as a scheme. The scheme is some string that gives the service a hint about what type of credential was used (e.g. Basic for basic authentication credentials). HTTP also includes a way to advertise the right credential type back to the client; for this the WWW-Authenticate response header is used.

    So for token-based authentication, the service simply needs to read the incoming Authorization header, extract the token, then parse and validate it. After the token has been validated, you also typically want some sort of client identity representation based on the incoming token. This is regardless of how the actual service was built, technology-wise. In ASP.NET (MVC) you could use an HttpModule or an ActionFilter. In (today's) WCF, you would use the ServiceAuthorizationManager infrastructure. The nice thing about using WCF's native extensibility points is that you get self-hosting for free.

    This is where WIF comes into play. WIF has ready-to-use infrastructure built in that just needs to be plugged into the corresponding hosting environment:
    - Representation of identity based on claims. This is a very natural way of translating a security token (and again I mean this in the widest sense – it could also be a username/password) into something our applications can work with.
    - Infrastructure to convert tokens into claims (called security token handlers).
    - Claims transformation.
    - Claims-based authorization.
    So much for the theory. In the next post I will show you how to implement that for WCF – including full source code and samples. (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
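
    The flow described above (read the Authorization header, split off the scheme, validate the token, and hand the caller to the application as a set of claims) is not specific to WCF or WIF. Purely as an illustration of that pattern - a hedged sketch, not the WIF plumbing this post is about - here is roughly what it looks like as a Java servlet filter, assuming the classic javax.servlet API is on the classpath; the scheme name "MyToken" and the validateAndExtractClaims helper are hypothetical placeholders.

        import java.io.IOException;
        import java.util.Map;
        import javax.servlet.*;
        import javax.servlet.http.*;

        public class TokenAuthenticationFilter implements Filter {

            private static final String SCHEME = "MyToken"; // hypothetical scheme name

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;

                // 1. Read the Authorization header: "<scheme> <opaque credential>"
                String header = request.getHeader("Authorization");
                if (header == null || !header.startsWith(SCHEME + " ")) {
                    // Advertise the expected credential type back to the client
                    response.setHeader("WWW-Authenticate", SCHEME);
                    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                    return;
                }

                // 2. Extract, parse and validate the token (real validation is out of scope here)
                String token = header.substring(SCHEME.length() + 1);
                Map<String, String> claims = validateAndExtractClaims(token);
                if (claims == null) {
                    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                    return;
                }

                // 3. Expose a claims-based representation of the caller to the application
                request.setAttribute("claims", claims);
                chain.doFilter(req, res);
            }

            // Hypothetical placeholder: a real implementation would check signature,
            // issuer, audience and expiry, then map the token contents to claims.
            private Map<String, String> validateAndExtractClaims(String token) {
                return "letmein".equals(token) ? Map.of("sub", "demo-user") : null;
            }

            @Override public void init(FilterConfig config) { }
            @Override public void destroy() { }
        }

    The host-specific part is only where the check is hooked in (an HttpModule, an ActionFilter, a ServiceAuthorizationManager, or a filter as above); the claims-based identity that comes out of it is the same idea in every case.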

    Read the article

  • A plan to study ASP.NET + C# + SQL + SQL Server [closed]

    - by ali saleem
    Possible Duplicates:
    - Should I be a professional in C# programming in order to build good web applications using ASP.NET?
    - Is there a combination of language and database that is both great to use and free/cheap?
    - C# for web development? or C# as general purpose programming?
    - ASP.NET MVC book for absolute beginners
    - Will it cost me a lot if I choose ASP.NET and IIS?
    - Is it possible to use MySQL in ASP.NET?
    - Best books to start with ASP.NET MVC / C# and Visual Studio
    Is it enough for me to learn the above technologies to become a professional web developer? If so, how should I learn them: together, or starting with C# first, for example? If there is anything else I should learn, please tell me about it.

    Read the article

  • How can I revive a dead translation team?

    - by Rohan
    I wish to translate Ubuntu into the Marathi language, for which a translation team already exists. But membership of the team is moderated, and unfortunately no new member has been admitted to the team since 2010-Dec-12. All membership requests after that date are pending. I tried to contact the administrator of the team at his personal email address but did not get any reply from him. As I am not a team member, I cannot upload my translated .po file. Is there any way to take charge of the team and approve all pending requests? It seems that no translation work is currently being carried out. I would like to do the work, but I cannot.

    Read the article

  • How do I motivate myself technically to put my ideas into execution, or get a job at an MNC like Google or Microsoft?

    - by Demla Pawan
    I mean, how do I motivate myself to get a job at Google, or to create another Google in the future? As there is no mentor who can guide me on this topic, I am asking it here. I'm a graduate in BE IT, but with low grades, and with an interest in learning new programming languages, but I have not yet done anything great like developing a system of my own. And I'm left with two more years to prove my worth to someone. So, is there a quick guide to start learning a language and then just go on implementing my ideas until they get appreciated, or until I get a good job at a big MNC? By the way, I have built one website for a client and run my WordPress blog. And I have tried my hand at the basics of C++, Java, JS, JSP, PHP, Ubuntu and web design in the past.

    Read the article

  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It has been used frequently in animated short subjects, such as those in the Looney Tunes and Merrie Melodies cartoon series, to signify the end of a story. When used in this manner, the iris wipe may be centered around a certain focal point and may be used as a device for a "parting shot" joke, a fourth-wall-breaching wink by a character, or other purposes. Example from flasheff.com. Your answer may or may not include a coding sample; a language-agnostic explanation is considered enough.
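
    A hedged sketch of one common way to implement the effect (not part of the original question): clip the scene to a circle centred on the focal point and animate the radius down to zero (or up from zero for a reveal). In Java2D that is just a Shape clip; the scene, focal point and timing below are illustrative placeholders.

        import java.awt.*;
        import java.awt.geom.Ellipse2D;
        import javax.swing.*;

        // Paints a scene, then applies an iris wipe by clipping to a shrinking circle.
        public class IrisWipePanel extends JPanel {
            private double radius = 300;                       // current iris radius in pixels
            private final double focusX = 200, focusY = 150;   // focal point of the wipe

            public IrisWipePanel() {
                // Shrink the iris a little on every timer tick, then repaint.
                new Timer(30, e -> { radius = Math.max(0, radius - 4); repaint(); }).start();
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                Graphics2D g2 = (Graphics2D) g;

                // Everything outside the iris stays black.
                g2.setColor(Color.BLACK);
                g2.fillRect(0, 0, getWidth(), getHeight());

                // Restrict drawing to the circle and paint the "scene" inside it.
                g2.setClip(new Ellipse2D.Double(focusX - radius, focusY - radius, 2 * radius, 2 * radius));
                g2.setColor(Color.ORANGE);
                g2.fillRect(0, 0, getWidth(), getHeight());
                g2.setColor(Color.DARK_GRAY);
                g2.drawString("That's all folks!", 150, 150);
            }

            public static void main(String[] args) {
                JFrame frame = new JFrame("Iris wipe demo");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(new IrisWipePanel());
                frame.setSize(400, 300);
                frame.setVisible(true);
            }
        }

    In a game or video context the same idea applies: render the frame as usual, then draw it through a circular mask or stencil whose radius is driven by time.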

    Read the article

  • Good Literature for "Object oriented programming in C"

    - by Dipan Mehta
    This is not a debate question about whether or not C is a good candidate for object-oriented programming. Quite often C is the primary platform where development happens. I have seen, and hopefully learnt from, crawling through many open source and commercial projects that while the language inherently doesn't stop you from creating "non-object" code, you can still think in the "object" way and reasonably write code that captures this design thinking. For those who have done this, the OO way is still the best way to write code, even when you are programming in C. While I have learnt most of it the hard way, is there any deep literature that can help educate relatively young developers about how to do OO programming in C?

    Read the article

  • What is missing and should be added to Code Complete 3rd Edition? [closed]

    - by Peter Turner
    It's been quite a few years since Code Complete was published. I really love the book; I keep it in the bathroom at the office and read a little out of it once or twice a day. What developments in computer software... development need to be added to Code Complete 3e, and, for the sake of reductionism, what should be removed to make room for them? Is it necessary, or even possible, to call Code Complete "Code Complete" if it doesn't cover language features that even Delphi has, like anonymous methods and generics? Also, what languages would be more appropriate than C++ to use for the majority of code examples?

    Read the article

  • Is your company thinking of transitioning from Java to another technology?

    - by Augusto
    As every Java developer knows, Oracle bought Sun and the future of Java looks quite unclear, especially since Oracle wants to monetize the JVM. Java as a language has also been stale in the last few years; the non-inclusion of closures is one example (they might be included in Java 1.8). At the same time, some new technologies such as Ruby, Scala and Groovy are being used to deliver complex sites. I'm wondering if there are companies or organizations which are talking about, doing spikes on, or starting to use a different technology, with the idea of no longer using Java for greenfield projects, in the same way that 15 years ago companies migrated from C++, Perl and other technologies to Java. I'm also interested to know what the impressions of this are, for example: planning to migrate to a different technology in two years. To be clear, I'm not asking which technology is better. I'm asking whether your organization is thinking of leaving Java for another technology.

    Read the article
