Search Results

Search found 27989 results on 1120 pages for 'junior software developer'.

Page 377/1120 | < Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:
    Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely, since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    Database: multiple database instances are deployed on a shared software/hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    Schema: multiple schemas are deployed within a single database instance. This is the most efficient model, but it places constraints on the environment; usually it will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)
    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.
    Customer value: higher productivity. The OpEx benefits from Standardization are compounded: not only are there fewer types of things to manage, there are now fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.
    Capabilities / Characteristics: In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, on observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; in some cases, no charging will be applied at all.
    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Obfuscation is not a panacea

    - by simonc
    So, you want to obfuscate your .NET application. My question to you is: why? What are your aims when you obfuscate your application? To protect your IP and algorithms? To prevent crackers from breaking your licensing? Because your boss says you need to? To give yourself a warm fuzzy feeling inside? Obfuscating code correctly can be tricky; it can break your app if applied incorrectly, and it can cause problems down the line. Let me be clear - there are some very good reasons why you would want to obfuscate your .NET application. However, you shouldn't be obfuscating for the sake of obfuscating.
    Security through obfuscation? Once your application has been installed on a user's computer, you no longer control it. If they do not want to pay for your application, then nothing can stop them from cracking it, even if the time cost to them is much greater than the cost of actually paying for it. Some people will not pay for software, even if it takes them a month to crack a $30 app. And once it is cracked, there is nothing stopping them from putting the result up on the internet. There should be nothing surprising about this; there is no software protection available for general-purpose computers that cannot be cracked by a sufficiently determined attacker. Only by completely controlling the entire stack - software, hardware, and the internet connection - can you have even a chance of being uncrackable. And even then, someone somewhere will still have a go, and probably succeed. Even high-end cryptoprocessors have known vulnerabilities that can be exploited by someone with a scanning electron microscope and lots of free time.
    So, then, why use obfuscation? Well, the primary reason is to protect your IP. What obfuscation is very good at is hiding the overall structure of your program, so that it's very hard to figure out what exactly the code is doing at any one time, what context it is running in, and how it fits in with the rest of the application; all of which you need to do to understand how the application operates. This is completely different to cracking an application, where you simply have to find a single toggle that determines whether the application is licensed or not, and flip it without the rest of the application noticing. However, again, there are limitations. An obfuscated application still has to run in the same way, and do the same thing, as the original unobfuscated application. This means that some of the protections applied to the obfuscated assembly have to be undone at runtime, else it would not run on the CLR and do the same thing. And, again, since we don't control the environment the application is run on, there is nothing stopping a user from undoing those protections manually and reversing some of the obfuscation. It's a perpetual arms race, and it always will be. We have plenty of ideas lined up for new protections, and the new protections added in SA 6.6 (method parent obfuscation and a new control flow obfuscation level) are specifically designed to be harder to reverse and to make reconstructing the original structure more difficult.
    So then, by all means, obfuscate your application if you want to protect the algorithms and what the application does. That's what SmartAssembly is designed to do. But make sure you are clear about what a .NET obfuscator can and cannot protect you against, and don't expect your obfuscated application to be uncrackable. Someone, somewhere, will crack your application if they want to and they don't have anything better to do with their time.
The best we can do is dissuade the casual crackers and make it much more difficult for the serious ones. Cross posted from Simple Talk.

    Read the article

  • Upgrade to Xubuntu 13.10 - Saucy Salamander

    As has become a common 'fashion', it is possible to upgrade an existing installation of Ubuntu or one of its derivatives every six months. Of course, you might opt in for the adventure and keep your system on the latest version at all times (including alphas and betas), or you might like to play it safe and stay on the long-term support (LTS) versions, which are released only every two years. As for me, I like to jump from release to release on my main desktop machine. And since 17th October, Saucy Salamander, also known as Ubuntu 13.10, has been available for general use. The following paragraphs document the steps I went through in order to upgrade my system to the latest version. Don't worry about the fact that I'm actually using Xubuntu. It's mainly a flavoured version of Ubuntu running Xfce 4.10 as the default window manager. Well, I have GNOME and LXDE on the same system... just out of curiosity.
    Preparing the system: Before you think about upgrading, you have to ensure that your current system is running the latest packages. This can be done easily via a terminal like so:
    $ sudo apt-get update && sudo apt-get -y dist-upgrade --fix-missing
    Next, we are going to initiate the upgrade itself:
    $ sudo update-manager
    As a result, the graphical Software Updater should inform you that a newer version of Ubuntu is available for installation. [Screenshot: Ubuntu's Software Updater informs you whether an upgrade is available]
    Running the upgrade: After clicking 'Upgrade...' you will be presented with information about the new version. [Screenshot: Details about Ubuntu 13.10 (Saucy Salamander)] Simply continue with the procedure and your system will be analysed for the next steps. [Screenshot: Analysing the existing system and preparing the actual upgrade to 13.10] Next, we are at the point of no return - the last confirmation dialog before having a coffee break while your machine is busy downloading the necessary packages. Not the best bandwidth at hand after all... yours might be faster. [Screenshot: Are you really sure that you want to start the upgrade?] Let's go and have fun! Anyway, bye bye Raring Ringtail and welcome Saucy Salamander! In case you added any additional repositories like Medibuntu or PPAs, you will be informed that they are going to be disabled during the upgrade and might require some manual intervention after completion. [Screenshot: Ubuntu is playing safe and third-party repositories are disabled during the upgrade] Depending on your internet bandwidth, this might take anything between a couple of minutes and several hours to download all the packages and then trigger the actual installation process. In my case I left my PC unattended during the night.
    Time to reboot: Finally, it's time to restart your system and see what's going to happen... In my case, absolutely nothing unexpected. The system booted the new kernel 3.11.0 as usual and I was greeted by a new login screen. Honestly, it's the 'same' system as before - which is good, and I love that consistency - and I can continue to work productively. And Software Updater confirms that we just had a painless upgrade. [Screenshot: System is running Ubuntu 13.10 - Saucy Salamander - and up to date] See you in six months again... ;-)
    Post-scriptum: In case you would like to upgrade to the latest development version of Ubuntu, run the following command in a console and repeat all steps as described above:
    $ sudo update-manager -d

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked, i.e. "when I am gathering requirements on a project, should I use a Process Modeling approach, or should I use a Use Case approach?" Not surprisingly, the short answer is "it depends"!
    Let's take a scenario where you are working on a Sales Force Automation project. We'll call the process that is being implemented "Lead-to-Order". I would typically think of this type of project as being "Process Centric". In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined "best-practice" business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business.
    Now let's take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications, etc. Would you call this type of project "Process Centric"? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions.
    At this point you might be asking "why couldn't I use a Process Modeling approach for my Content Management project?" Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system. For example (each task in the original table is marked as best suited to either a Process Model or a Use Case):
    Manage outbound calls - a series of linked human and system tasks for calling and following up with prospects.
    Manage content revision - updating the content on a website.
    Update User Preferences - updating a user's display preferences.
    Assign Lead - reviewing a lead, then assigning it to a sales person.
    Convert Lead to Quote - updating the status of a lead, and then converting it to a sales order.
    As you can see, it's not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering.
    Now let's take one final scenario. As you captured the To-Be Process flows for the Sales Force Automation project, you discover a "Gap" in terms of what the client requires and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • iPhone - direct link to iPhone review form from inside an iPhone app

    - by Mike
    I am trying to link directly to the review form for one of my apps. I know that it is possible because Appirater did it in the past, but some change in iTunes shut that API down. Appirater uses this URL: NSString *templateReviewURL = @"itms-apps://itunes.apple.com/WebObjects/MZStore.woa/wa/viewContentsUserReviews?id=APP_ID&onlyLatestVersion=true&pageNumber=0&sortOrdering=1&type=Purple+Software"; where APP_ID is the ID of an application. Running this from inside the app gives me the message "Cannot Connect to iTunes Store". This page talks about another kind of link: https://userpub.itunes.apple.com/WebObjects/MZUserPublishing.woa/wa/addUserReview?id=APP_ID&type=Purple+Software and also itms-apps://ax.itunes.apple.com/WebObjects/MZStore.woa/wa/viewContentsUserReviews?type=Purple+Software&id=APP_ID The first one works, but only from the desktop Mac. The second gives me the same error as the first: "Cannot Connect to iTunes Store". iTunes Link Maker is not helping either, because it has no tools for iPad links... Do you guys know how to link to an app's review form from inside an app? In case you don't know, what kind of tool should I use to dig into this? A packet sniffer? Thanks for any help.

    Read the article

  • Trac vs. Redmine vs. JIRA vs. FogBugz for one-man shop?

    - by kizzx2
    Background: I am a one-man freelancer looking for project management software that meets the following requirements. I have used Trac for about a year now, tried Redmine and FogBugz on Demand for a couple of weeks, and have never tried JIRA before. Basically, I'm looking for a piece of software that facilitates developer-client communication/collaboration and does time tracking.
    Requirements: Record time estimates / time tracking. Clients must be able to create/edit their own tickets/cases. Clients must not see developer-created tickets/cases (internal). Affordable (price) with multiple clients.
    Nice-to-haves: Supports multiple projects in one installation. Free Eclipse integration (Mylyn). Easy time-tracking without using the Web UI (Trac's post-commit hook or Redmine's commit message scanning). Clients can access the Wiki. Export the data to standard formats.
    My evaluation: Trac can basically fulfill most of the above requirements, but with so many customizations and plug-ins that it doesn't feel very clean. One downside is that the main trunk (0.11) has been around for a year or more and I still haven't seen much sign of an upcoming release. Redmine has the cleanest Web UI. Its design philosophy seems to be the most elegant, with its innovative commit message scanning and so on. However, the current version doesn't seem to be very mature and stable yet: it doesn't support internal (private) tickets, and the time-tracking commit message patch doesn't support the trunk version. The good thing about it is that the main trunk still seems to be actively developed. FogBugz is actually a very well written piece of software. However, the idea of paying $25/month just so a client can log in to the system seems a bit too much for an individual developer. The free version supports letting clients create/view their own cases using email, which is a sub-optimal alternative to having a full-fledged list of the user's own cases. That also means clients can't read/write wiki pages. Its time-tracking approach is innovative and good, though. However, all the Eclipse integrations (Bugclipse, Foglyn) are commercial - yet more investments before I can use my bug-tracker! If I revert back to the Web UI, it's not really a fast-rendering Web service. Also, the built-in report functions are excellent (e.g. evidence-based scheduling). JIRA is something I have zero experience with. Can someone with JIRA experience recommend why it might be a good fit for this particular situation?
    Question: Can we share experience on this? Are there any specific plugins/customizations that would best suit the requirements for this case?

    Read the article

  • Require CoreTelephony framework examples.

    - by prathumca
    Greetings everyone. Does anyone have a working example for the CoreTelephony framework? I dumped all the CoreTelephony headers using class-dump and added them to "/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.3.sdk/System/Library/PrivateFrameworks/CoreTelephony.framework". Now I'm following Erica's tutorial (http://blogs.oreilly.com/iphone/2008/08/iphone-notifications.html). I added the following lines of code to my main.m: id ct = CTTelephonyCenterGetDefault(); CTTelephonyCenterAddObserver( ct, NULL, callback, NULL, NULL, CFNotificationSuspensionBehaviorHold); but I'm getting warnings like "Implicit declaration of function 'CTTelephonyCenterGetDefault()'" and "Implicit declaration of function 'CTTelephonyCenterAddObserver(...)'". Does anyone have a full working example that explains how to get the CoreTelephony notifications?

    Read the article

  • SQL Server 2005 - Syncing development/production databases

    - by hamlin11
    I've got a rather large SQL Server 2005 database that is under constant development. Every so often, I either get a new developer or need to deploy wide-scale schema changes to the production server. My main concern is deploying schema + data updates to developer machines from the "master" development copy. Is there any built-in functionality, or are there tools, for publishing schema + data in such a fashion? I'd like it to take as little time as possible. Can it be done from within SSMS? Thanks in advance for your time.

    Read the article

  • <msbuild/> task fails while <devenv/> succeeds for MFC application in CruiseControl.NET?

    - by ee
    The Overview: I am working on a Continuous Integration build of an MFC application via CruiseControl.NET and VS2010. When building my .sln, a "Visual Studio" CCNet task (<devenv/>) works, but a simple MSBuild wrapper script (see below) run via the CCNet <msbuild/> task fails with errors like:
    error RC1015: cannot open include file 'winres.h'.
    error C1083: Cannot open include file: 'afxwin.h': No such file or directory
    error C1083: Cannot open include file: 'afx.h': No such file or directory
    The Question: How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild+VS2010+MFC+CCNet?)
    Background Details: We have successfully upgraded an MFC application (.exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment. I did a full installation of VS2010 (Professional) on the build server. In this way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected. I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above-mentioned errors.
    The Simple MSBuild Wrapper:
    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Target Name="Build">
        <!-- Doing some versioning stuff here -->
        <MSBuild Projects="target.sln" Properties="Configuration=ReleaseUnicode;Platform=Any CPU;..." />
      </Target>
    </Project>
    My Assumptions: This seems to be a missing/wrong configuration of include paths to the standard MFC header resources. I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. Given the VS2010 MSBuild support for Visual C++ projects (.vcxproj), shouldn't building a solution with MSBuild be pretty close to compiling via Visual Studio? But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.
    Update 1: This appears to only ever happen in two cases, resource compilation (rc.exe) and precompiled header (stdafx.h) compilation, and only for certain projects. I was thinking it was across the board, but indeed it appears only to be in these cases. I guess I will keep digging and hope someone has some insight they would be willing to share...

    Read the article

  • Xcode duplicate symbol _main

    - by Prithvi Raj
    I'm getting the following error in Xcode 3.2.1 on Snow Leopard 10.6.2 whenever I try to compile any iPhone application generated by Appcelerator's Titanium. However, the build error only appears when I select iPhone Simulator in the architecture menu, and if I select iPhone Device, I am able to run the app on my device. Further, the iPhone Simulator launches successfully and executes the program directly from the Titanium environment, which uses Xcode to build. Why is this happening?
    ld: duplicate symbol _main in Resources/libTitanium.a(main.o) and /Users/prithviraj/Documents/project/Final/build/iphone/build/Final.build/Debug-iphonesimulator/Final.build/Objects-normal/i386/main.o
    collect2: ld returned 1 exit status
    Command /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1

    Read the article

  • Android: How to obtain Mac Address of WiFi Network Interface?

    - by Gubatron
    It seems the java.net.NetworkInterface implementation on Android does not have a byte[] getHardwareAddress() method: http://developer.android.com/reference/java/net/NetworkInterface.html I've found several forums of people trying to do this with no definitive answer. I need to get a somewhat cross-device UUID, so I can't rely on phone numbers or on ANDROID_ID (which can be overwritten and which, I think, depends on the user having a Google account): http://developer.android.com/reference/android/provider/Settings.Secure.html#ANDROID_ID In Linux you can use ifconfig or read from /proc/net/arp and easily get the hardware address. Is there a file in Android that I can read? There has to be a way to get this address, since it's shown under "Settings > About Phone > Status" on the phone.
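    For reference, the approach most commonly suggested for this goes through the Wi-Fi stack rather than java.net.NetworkInterface. Below is a minimal sketch, assuming the app declares the ACCESS_WIFI_STATE permission in its manifest; the MacAddressHelper class name is just an illustrative wrapper, not part of the Android API.
    import android.content.Context;
    import android.net.wifi.WifiInfo;
    import android.net.wifi.WifiManager;

    public final class MacAddressHelper {
        // Requires <uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/> in the manifest.
        // Returns the Wi-Fi interface's MAC address, or null if it is not available
        // (for example, if the Wi-Fi radio has never been switched on since boot).
        public static String getWifiMacAddress(Context context) {
            WifiManager wifiManager = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
            if (wifiManager == null) {
                return null;
            }
            WifiInfo wifiInfo = wifiManager.getConnectionInfo();
            return (wifiInfo != null) ? wifiInfo.getMacAddress() : null;
        }
        // Usage, e.g. from an Activity: String mac = MacAddressHelper.getWifiMacAddress(this);
    }
    Note that the address is only guaranteed to be available once the Wi-Fi radio has been initialised, so this is not a fully reliable cross-device identifier either.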

    Read the article

  • Problem loading Oracle client libraries when running in a NAnt build

    - by Chris Farmer
    I am trying to use dbdeploy to manage Oracle schema changes. I can run it successfully from the command line to get it to generate my change scripts, but when I try to execute it via the dbdeploy NAnt task running through TeamCity, I get an error: System.Data.OracleClient requires Oracle client software version 8.1.7 or greater. I do have the Oracle 10.2.0.2 client software installed. It's the first entry in the system path, and the dbdeploy.exe app is able to successfully negotiate an Oracle connection. The dbdeploy code dynamically loads the System.Data.OracleClient assembly, which in-turn tries to use the Oracle client bits to talk to the database. This is what is failing in my NAnt environment. I have verified the following points: The same user identity is running the process in both cases The same working directory is used in both cases The same dbdeploy code is running in both cases and with the same supplied parameters The same database connection string is being used in both cases The same ADO.NET assembly is being dynamically loaded in both cases (System.Data.OracleClient, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089) Here's the top of the stack trace during the error: at System.Data.OracleClient.OCI.DetermineClientVersion() at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction (String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName) at System.Data.OracleClient.OracleInternalConnection..ctor( OracleConnectionString connectionOptions) at System.Data.OracleClient.OracleConnectionFactory.CreateConnection( DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection( DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) at System.Data.ProviderBase.DbConnectionPool.CreateObject( DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest( DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.GetConnection( DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection( DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection( DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.OracleClient.OracleConnection.Open() at Net.Sf.Dbdeploy.Database.DatabaseSchemaVersionManager. GetCurrentVersionFromDb() My main question is this: how can I discover what's different about these running environments to see why my Oracle client software can't be loaded?

    Read the article

  • Will you use Delphi Prism?

    - by Mohammed Nasman
    CodeGear has announced that its next .NET product, known as Delphi Prism, will be RemObjects' Oxygene. Oxygene has many nice features that are not found in Delphi or C#, and I think it will be a more effective solution for .NET than previous Delphi .NET releases, but it uses the Visual Studio IDE instead of the Delphi IDE, which has some cons and pros. As a Delphi developer or .NET developer, would you consider using Delphi Prism for .NET development? Look at these links for more info: Delphi Prism vs. CSharp; Delphi Prism - Visual Studio Pascal For .NET; Delphi Prism home page.

    Read the article

  • How can I install the BlackBerry v5.0.0 component pack into Eclipse?

    - by Dan J
    I'm trying to install the latest v5.0.0 "beta 2" BlackBerry OS Component Pack into Eclipse 3.4.2 with BlackBerry Eclipse plugin v1.0.0.67, but have hit a few problems. Has anybody found an easy way to do this? I had no trouble installing the v4.5.0 and v4.7.0 Component Packs. It's rather strange that BlackBerry are shipping new phones with the v5.0.0 OS installed (e.g. the Storm 2 9550 and Bold 9700 that I just bought), and pushing that update to phones, whilst the BlackBerry website still considers the v5.0.0 SDK / Component Packs to be "beta 2"! If anybody knows when an official non-beta Component Pack is going to be released, that might solve my problem... In case it helps, the problems I've hit so far are:
    - Contrary to the implication on the BlackBerry website, the Eclipse "Software Update..." option for the v5.0.0 Component Pack claims it only works with the v1.0.0 Eclipse BlackBerry plugin, not the new v1.1 one.
    - I then tried to install the v5.0.0 Component Pack through the "Software Updates..." menu in Eclipse using the v1.0.0 Eclipse BlackBerry plugin. Once I'd done the 200MB download, the install failed with an "Invalid zip file format" error.
    - I might just have been unlucky with a corrupted download, but I did try it twice: once through "Software Updates..." and once by selecting "Archive" to install the downloaded Component Pack (which, unlike v4.5.0 and v4.7.0, was a JAR, not a ZIP).

    Read the article

  • Eclipse: Cannot install CDT because of a conflicting dependency

    - by MarkZar
    My Eclipse has been configured for Java and PyDev; now I want to configure the C/C++ development tools in Eclipse. I don't want to download the whole Eclipse IDE for C/C++ Developers, as that is not convenient, so I decided to install CDT in my existing Eclipse. I went to Help > Install New Software, entered http://download.eclipse.org/tools/cdt/releases/indigo, waited for a while, chose CDT Main Features and CDT Optional Features, and clicked Next. Then an error occurred:
    Cannot complete the install because of a conflicting dependency.
    Software being installed: C/C++ DSF GDB Debugger Integration 4.0.0.201106081058 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.0.201106081058)
    Software being installed: C/C++ Development Tools SDK 8.0.2.201202111925 (org.eclipse.cdt.sdk.feature.group 8.0.2.201202111925)
    Only one of the following can be installed at once:
    GDB DSF Debugger Integration Core 4.0.0.201106081058 (org.eclipse.cdt.dsf.gdb 4.0.0.201106081058)
    GDB DSF Debugger Integration Core 4.0.2.201202111925 (org.eclipse.cdt.dsf.gdb 4.0.2.201202111925)
    GDB DSF Debugger Integration Core 4.0.1.201109151620 (org.eclipse.cdt.dsf.gdb 4.0.1.201109151620)
    Cannot satisfy dependency: From: C/C++ Development Tools 8.0.2.201202111925 (org.eclipse.cdt.feature.group 8.0.2.201202111925) To: org.eclipse.cdt.gnu.dsf.feature.group [4.0.1.201202111925]
    Cannot satisfy dependency: From: C/C++ DSF GDB Debugger Integration 4.0.0.201106081058 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.0.201106081058) To: org.eclipse.cdt.dsf.gdb [4.0.0.201106081058]
    Cannot satisfy dependency: From: C/C++ DSF GDB Debugger Integration 4.0.1.201202111925 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.1.201202111925) To: org.eclipse.cdt.dsf.gdb [4.0.2.201202111925]
    Cannot satisfy dependency: From: C/C++ Development Tools SDK 8.0.2.201202111925 (org.eclipse.cdt.sdk.feature.group 8.0.2.201202111925) To: org.eclipse.cdt.feature.group [8.0.2.201202111925]
    I have googled for a long time, but still cannot find a working solution. Can anyone give me a hand? Thanks a lot!

    Read the article

  • Framework not found CoreServices

    - by Matthew Brindley
    I'm trying to use CoreServices on the iPhone Simulator (I don't need it to run on a device), but my project won't build because of the error:
    ld: framework not found CoreServices
    Command /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1
    I've added the framework to my project, under Frameworks. I also read that you need to include CFNetwork.framework, so I've added that too. If I remove CoreServices.framework, the compiler complains about undeclared symbols, as you'd expect. So it seems to use the framework to compile those classes, but then complains it can't find the framework.

    Read the article

  • iPhone problem - UIKit precompile error

    - by tigermain
    I've been working on a project for a few weeks now and today is the last day before it goes to the client (typical). For some reason I'm suddenly getting the following error when building my project (simulator or device), even though other projects build fine: /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk/System/Library/Frameworks/UIKit.framework/Headers/UIKit.h:61:42: error: UIKit/UIVideoEditorController.h: No such file or directory The last thing I was trying to do was use the MessageUI framework for email, but I have stripped that out!?!? N.B. I have checked, and the UIVideoEditorController.h file does appear to be in the same folder as the UIKit.h file.

    Read the article

  • Advantages of compilers for functional languages over compilers for imperative languages

    - by Onorio Catenacci
    As a follow-up to the question "What are the advantages of built-in immutability of F# over C#?", am I correct in assuming that the F# compiler can make certain optimizations knowing that it's dealing with largely immutable code? I mean, even if a developer writes "functional C#", the compiler wouldn't know all of the immutability that the developer had tried to code in, so it couldn't make the same optimizations, right? In general, would the compiler of a functional language be able to make optimizations that would not be possible with an imperative language, even one written with as much immutability as possible?

    Read the article

  • Pointing non-www to a specific sub-directory

    - by Ben Sinclair
    I might be going about this all wrong, so let me know if I am. I am creating software that allows people to sign up and have their own sub-domain on my website. So say my website is ben.com; they could have their own sub-domain called juice.ben.com. When they type their sub-domain juice.ben.com in their address bar, it will load the contents of the root directory. I have also set up a .htaccess redirect to redirect www.ben.com to ben.com. Not sure if this matters for my question, but I thought I'd mention it. OK, so basically what I think I need to do is put the software they've signed up to in the root directory. So when someone goes to juice.ben.com, they will be pointed to the root directory (I believe I can set up wildcard sub-domains with my host) and the software will then analyse their sub-domain and display their account. Now, if someone just types ben.com into their browser, I want it to show the contents of the ben.com/_website/ folder but still show in the address bar that they are in the root directory. Hopefully I am making sense :) Is this possible with htaccess? If so, what do I need to do?

    Read the article

  • Clickable widgets in Android

    - by Leif Andersen
    The developer documentation seems to have failed me here. I can create a static widget without thinking, and I can even create a widget like the analogue clock widget that updates itself; however, I cannot for the life of me figure out how to create a widget that reacts when a user clicks on it. Here is the best code sample the developer documentation gives for what a widget provider should contain (the only other hint being the API demos, which only create a static widget):
    public class ExampleAppWidgetProvider extends AppWidgetProvider {
        public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
            final int N = appWidgetIds.length;
            // Perform this loop procedure for each App Widget that belongs to this provider
            for (int i=0; i<N; i++) {
                int appWidgetId = appWidgetIds[i];
                // Create an Intent to launch ExampleActivity
                Intent intent = new Intent(context, ExampleActivity.class);
                PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0);
                // Get the layout for the App Widget and attach an on-click listener to the button
                RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.appwidget_provider_layout);
                views.setOnClickPendingIntent(R.id.button, pendingIntent);
                // Tell the AppWidgetManager to perform an update on the current App Widget
                appWidgetManager.updateAppWidget(appWidgetId, views);
            }
        }
    }
    (from the Android Developer Documentation's widget page)
    So, it looks like the PendingIntent is fired when the widget is clicked, and it is based on an Intent (I'm not quite sure what the difference between an Intent and a PendingIntent is), and the Intent is for the ExampleActivity class. So I made my sample activity class a simple activity that, when created, creates a MediaPlayer object and starts it (it never releases the object, so it would eventually crash); here is its code:
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        MediaPlayer mp = MediaPlayer.create(getApplicationContext(), R.raw.sound);
        mp.start();
    }
    However, when I added the widget to the home screen and clicked on it, nothing played. In fact, nothing played even when I set the update timer to just a few hundred milliseconds (in the appwidget provider XML file). Furthermore, I set breakpoints and found out that not only was it never reaching the activity, but no breakpoints I set would ever get triggered (I still haven't figured out why that is). However, logcat seemed to indicate that the activity class file was being run. So, is there anything I can do to get an app widget to respond to a click? The setOnClickPendingIntent() method is the closest I have found to an onClick type of method. Thank you very much.
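    For comparison, here is a minimal, self-contained sketch of a click-through widget along the same lines as the documentation sample above. The layout id (R.layout.example_appwidget), view id (R.id.widget_root) and target activity (ExampleActivity) are placeholders for illustration; the key points are that the click handler must be attached to a view id that actually exists in the widget layout, and that the provider must be registered in the manifest as a receiver with the android.appwidget.action.APPWIDGET_UPDATE intent filter.
    import android.app.PendingIntent;
    import android.appwidget.AppWidgetManager;
    import android.appwidget.AppWidgetProvider;
    import android.content.Context;
    import android.content.Intent;
    import android.widget.RemoteViews;

    public class ClickableWidgetProvider extends AppWidgetProvider {
        @Override
        public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
            for (int appWidgetId : appWidgetIds) {
                // Intent that will be fired when the widget is tapped.
                Intent intent = new Intent(context, ExampleActivity.class);
                intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);

                // Wrap it in a PendingIntent so the home-screen process can fire it on our behalf.
                PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);

                // Attach the click handler to the root view of the widget layout
                // (the id must match a view declared in R.layout.example_appwidget).
                RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.example_appwidget);
                views.setOnClickPendingIntent(R.id.widget_root, pendingIntent);

                appWidgetManager.updateAppWidget(appWidgetId, views);
            }
        }
    }
    One detail that often trips people up when debugging this: the widget views themselves are rendered inside the home-screen (launcher) process, so breakpoints in your own application will not be hit until the PendingIntent actually launches your activity.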

    Read the article

  • Object Oriented database development jobs

    - by GigaPr
    Hi, I am a software engineering student currently looking for a job as a developer. I have been offered a position at a company which implements software using object-oriented databases. These are something completely new to me, as at university we never worked with them, just covered some theory. My questions are: do you think this is a good way to start my career as a developer? What is the job market like for this type of development? Are these skills in demand? Which markets does this technology touch? Thanks

    Read the article

  • Scrapy issue with iTunes' AppStore

    - by Eric
    I am using Scrapy to fetch some data from iTunes' App Store database. I start with this list of apps: http://itunes.apple.com/us/genre/mobile-software-applications/id36?mt=8 In the following code I have used the simplest regex, which targets all apps in the US store.
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.contrib.spiders import CrawlSpider, Rule

    class AppStoreSpider(CrawlSpider):
        domain_name = 'itunes.apple.com'
        start_urls = ['http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8']

        rules = (
            Rule(SgmlLinkExtractor(allow='itunes\.apple\.com/us/app'),
                 'parse_app',
                 follow=True,
            ),
        )

        def parse_app(self, response):
            ....

    SPIDER = AppStoreSpider()
    When I run it I receive the following:
    [itunes.apple.com] DEBUG: Crawled (200) <GET http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8> (referer: None)
    [itunes.apple.com] DEBUG: Filtered offsite request to 'itunes.apple.com': <GET http://itunes.apple.com/us/app/bloomberg/id281941097?mt=8>
    As you can see, when it starts crawling the first page it says "Filtered offsite request to 'itunes.apple.com'", and then the spider stops. It also returns this message:
    [ScrapyHTTPPageGetter,client] /usr/lib/python2.5/cookielib.py:1577: exceptions.UserWarning: cookielib bug!
    Traceback (most recent call last):
      File "/usr/lib/python2.5/cookielib.py", line 1575, in make_cookies
        parse_ns_headers(ns_hdrs), request)
      File "/usr/lib/python2.5/cookielib.py", line 1532, in _cookies_from_attrs_set
        cookie = self._cookie_from_cookie_tuple(tup, request)
      File "/usr/lib/python2.5/cookielib.py", line 1451, in _cookie_from_cookie_tuple
        if version is not None: version = int(version)
    ValueError: invalid literal for int() with base 10: '"1"'
    I have used the same script for other websites and I didn't have this problem. Any suggestions?
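    For reference, a rough sketch of how the same spider might look against a current Scrapy release. The "Filtered offsite request" line is printed by Scrapy's offsite middleware, which consults the spider's allowed_domains list (subdomains of an allowed domain are accepted), and disabling cookie handling sidesteps cookie-parsing failures like the cookielib traceback above. The spider name, the parse_app body, and the choice to turn cookies off are illustrative assumptions, not taken from the original post, and whether they resolve this exact crawl is not guaranteed.
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule

    class AppStoreSpider(CrawlSpider):
        name = 'appstore'                       # illustrative spider name
        allowed_domains = ['apple.com']         # offsite middleware checks this; itunes.apple.com passes as a subdomain
        start_urls = ['http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8']
        custom_settings = {'COOKIES_ENABLED': False}  # avoid cookie-parsing errors during the crawl

        rules = (
            Rule(LinkExtractor(allow=r'itunes\.apple\.com/us/app'), callback='parse_app', follow=True),
        )

        def parse_app(self, response):
            # Illustrative extraction only; the original post elides the parse body.
            yield {'url': response.url, 'title': response.css('title::text').get()}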

    Read the article

  • Error while dynamically loading mapi32.dll

    - by The_Fox
    Our application uses Simple MAPI to send e-mails. One of our clients has problems sending e-mail from a session on his terminal server. The mapi32.dll is loaded with a call to LoadLibrary, which succeeds, but then our application tries to get the addresses of the functions MAPILogon, MAPILogoff, MAPISendMail, MAPIFreeBuffer and MAPIResolveName. The problem is that GetProcAddress fails for those functions with ERROR_ACCESS_DENIED (code: 5), except for MAPIFreeBuffer. It looks like some sort of security thing. How can I fix this, or should I use another method to send mail? FYI, here is some more information about the OS and the contents of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Messaging Subsystem:
    OS info: 5.2.3790 VER_PLATFORM_WIN32_NT Service Pack 2
    Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem:
    InstallCmd: rundll32 setupapi,InstallHinfSection MSMAIL 132 msmail.inf
    MAPI: 1
    CMCDLLNAME: mapi.dll
    CMCDLLNAME32: mapi32.dll
    CMC: 1
    MAPIX: 1
    MAPIXVER: 1.0.0.1
    OLEMessaging: 1
    Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem\MSMapiApps:
    inetsw95.exe:
    choosusr.dll:
    msab32.dll:
    nwab32.dll:
    outstore.dll: Microsoft Outlook
    CDOEXM.DLL:
    EMSMDB32.DLL:
    EMSABP32.DLL:
    newprof.exe: Microsoft Outlook
    outlook.exe:
    wfxmsrvr.exe: Microsoft Outlook
    msexcimc.exe:
    exchng32.exe:
    schdmapi.dll: Microsoft Outlook
    pilotcfg.exe: Microsoft Outlook
    mailmig.exe: Microsoft Outlook
    admin.exe:
    msspc32.dll: Microsoft Outlook
    cnfnot32.exe: Microsoft Outlook
    ilpilot.exe: Microsoft Outlook
    events.exe:
    I'm on Delphi 7.0, but that shouldn't matter.
    Edit - added version information:
    Fileversion info of C:\WINDOWS\system32\mapi32.dll
    Fileversion: 6.5.7226.0
    FileDescription=Extended MAPI 1.0 for Windows NT
    CompanyName=Microsoft Corporation
    InternalName=MAPI32
    Comments=Service Pack 1
    LegalCopyRight=Copyright (C) 1986-2003 Microsoft Corp. All rights reserved.
    LegalTradeMarks=Microsoft(R) and Windows(R) are registered trademarks of Microsoft Corporation.
    OriginalFileName=MAPI32.DLL
    ProductName=Microsoft Exchange
    ProductVersion=6.5
    Fileversion info of C:\Program Files\Common Files\SYSTEM\MSMAPI\1043\msmapi32.dll
    Fileversion: 11.0.5601.0
    FileDescription=Extended MAPI 1.0 for Windows NT
    CompanyName=Microsoft Corporation
    InternalName=MAPI32.DLL
    LegalCopyRight=Copyright © 1995-2003 Microsoft Corporation. All rights reserved.
    OriginalFileName=MAPI32.DLL
    ProductName=MAPI32
    ProductVersion=11.0

    Read the article
