Search Results

Search found 35692 results on 1428 pages for 'build process'.

Page 183 of 1428

  • Restart mysql keeping the data

    - by sitonico
    Hi all, I'm quite new to MySQL, so let me know if I'm missing something. I took some holidays, and when I got back to work and tried to log in to phpMyAdmin I got: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). I had never had this problem before, so I browsed around for a solution. I tried some things, and I'm afraid I touched too much without solving it. Then I realized that I had some pending updates to install, and I thought they might help MySQL. I also remembered that when I first ran those updates they stopped because I had run out of disk space, so I restarted them. This time, when the system was configuring MySQL, it made no progress; I waited a long time, then stopped it and restarted the computer. After that I tried to uninstall MySQL with sudo apt-get remove mysql-server-5.1 and install it again, but it didn't work. Now I have two questions: What do you think is happening? Should I remove MySQL completely, and what should I do then? I'm afraid of losing my databases; is there any way to recover the data? Thank you very much in advance.

    -----------EDIT-------
    These are the messages:

        alfonso@alfonso-laptop:/$ tail -F /var/log/syslog | grep
        Feb 15 15:08:01 alfonso-laptop init: mysql post-start process (15192) terminated with status
        Feb 15 15:08:01 alfonso-laptop init: mysql main process (15263) terminated with status
        Feb 15 15:08:01 alfonso-laptop init: mysql main process ended,
        Feb 15 15:08:31 alfonso-laptop init: mysql post-start process (15264) terminated with status
        Feb 15 15:08:31 alfonso-laptop init: mysql main process (15358) terminated with status
        Feb 15 15:08:31 alfonso-laptop init: mysql main process ended,
        Feb 15 15:09:01 alfonso-laptop init: mysql post-start process (15359) terminated with status
        Feb 15 15:09:01 alfonso-laptop init: mysql main process (15447) terminated with status
        Feb 15 15:09:01 alfonso-laptop init: mysql main process ended,
        Feb 15 15:09:32 alfonso-laptop init: mysql post-start process (15448) terminated with status 1

    This is the content of error.log-old:

        110128 13:17:20 [Note] /usr/sbin/mysqld: Normal shutdown
        110128 13:17:20 [Note] Event Scheduler: Purging the queue. 0 events
        110128 13:17:20 InnoDB: Starting shutdown...
        110128 13:17:22 InnoDB: Shutdown completed; log sequence number 0 590872
        110128 13:17:22 [Note] /usr/sbin/mysqld: Shutdown complete
        110214  2:08:18 [Note] Plugin 'FEDERATED' is disabled.
        110214  2:08:19 InnoDB: Started; log sequence number 0 590872
        110214  2:08:19 [Note] Event Scheduler: Loaded 0 events
        110214  2:08:19 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.41-3ubuntu12.8'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
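    Before removing or reinstalling anything, it is usually worth taking a copy of the raw data directory and checking the MySQL error log and free disk space. A minimal sketch, assuming a default Ubuntu/Debian layout (the paths and log names may differ on your machine):

        # stop any half-running instance first
        sudo service mysql stop

        # keep a copy of the InnoDB/MyISAM files before touching packages
        sudo cp -a /var/lib/mysql /var/lib/mysql.backup-$(date +%F)

        # look for the real reason mysqld exits with status 1
        sudo tail -n 50 /var/log/mysql/error.log

        # a full disk is a common cause of a hung "configuring mysql" step
        df -h /var

    The backed-up directory can often be copied back over a fresh install of the same MySQL version to recover the databases.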

    Read the article

  • Crash with CoUninitialize() API of COM

    - by jbp117
    My main process (main.exe) initialized the COM library and created a thread which creates a new process (p1.exe). This new process also initializes the COM library and, after releasing all references, uninitializes COM there; then COM is uninitialized in the main process (i.e. main.exe) as well. When I run p1.exe on its own it works fine, but it crashes when I create the p1.exe process from main.exe.
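    COM initialization is strictly per process and per thread, so a crash like this often comes down to an unbalanced CoInitializeEx/CoUninitialize pair in one of the two executables. A minimal sketch of the pairing each process (and each thread that uses COM) is expected to keep to; the apartment model and error handling here are illustrative assumptions, not taken from the original post:

        #include <windows.h>
        #include <objbase.h>

        int main()
        {
            // Each process/thread pairs its own CoInitializeEx with its own CoUninitialize.
            HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
            if (FAILED(hr))
                return 1;          // do not call CoUninitialize if initialization failed

            // ... create and release COM objects here; every interface pointer
            // must be Release()d before the matching CoUninitialize below ...

            CoUninitialize();      // balances the successful CoInitializeEx above
            return 0;
        }

    A child process never inherits the parent's COM initialization, and neither process should try to balance the other's CoInitialize/CoUninitialize calls; each executable owns its own pairing.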

    Read the article

  • Giving a Zone "More Power"

    - by Brian Leonard
    In addition to the traditional virtualization benefits that Solaris zones offer, applications running in zones are also running in a more secure environment. One way to quantify this is to compare the privileges available to the global zone with those of a local zone. For example, there are 82 distinct privileges available to the global zone:

        bleonard@solaris:~$ ppriv -l | wc -l
        82

    You can view the descriptions for each of those privileges as follows:

        bleonard@solaris:~$ ppriv -lv
        contract_event
                Allows a process to request critical events without limitation.
                Allows a process to request reliable delivery of all events on
                any event queue.
        contract_identity
                Allows a process to set the service FMRI value of a process
                contract template.
        ...

    Or for just one or more privileges:

        bleonard@solaris:~$ ppriv -lv file_dac_read file_dac_write
        file_dac_read
                Allows a process to read a file or directory whose permission
                bits or ACL do not allow the process read permission.
        file_dac_write
                Allows a process to write a file or directory whose permission
                bits or ACL do not allow the process write permission.
                In order to write files owned by uid 0 in the absence of an
                effective uid of 0 ALL privileges are required.

    However, in a non-global zone, only 43 of the 82 privileges are available by default:

        root@myzone:~# ppriv -l zone | wc -l
        43

    The missing privileges are: cpc_cpu, dtrace_kernel, dtrace_proc, dtrace_user, file_downgrade_sl, file_flag_set, file_upgrade_sl, graphics_access, graphics_map, net_mac_implicit, proc_clock_highres, proc_priocntl, proc_zone, sys_config, sys_devices, sys_ipc_config, sys_linkdir, sys_dl_config, sys_net_config, sys_res_bind, sys_res_config, sys_smb, sys_suser_compat, sys_time, sys_trans_label, virt_manage, win_colormap, win_config, win_dac_read, win_dac_write, win_devices, win_dga, win_downgrade_sl, win_fontpath, win_mac_read, win_mac_write, win_selection, win_upgrade_sl and xvm_control.

    However, just like Tim Taylor, it is possible to give your zones more power. For example, a zone by default doesn't have the privileges to support DTrace:

        root@myzone:~# dtrace -l
          ID   PROVIDER            MODULE                          FUNCTION NAME

    The DTrace privileges can be added, however, as follows:

        bleonard@solaris:~$ sudo zonecfg -z myzone
        Password:
        zonecfg:myzone> set limitpriv="default,dtrace_proc,dtrace_user"
        zonecfg:myzone> verify
        zonecfg:myzone> exit
        bleonard@solaris:~$ sudo zoneadm -z myzone reboot

    Now I can run DTrace from within the zone:

        root@myzone:~# dtrace -l | more
          ID   PROVIDER            MODULE                          FUNCTION NAME
           1     dtrace                                                     BEGIN
           2     dtrace                                                     END
           3     dtrace                                                     ERROR
        7115    syscall                                               nosys entry
        7116    syscall                                               nosys return
        ...

    Note, certain privileges are never allowed to be assigned to a zone. You'll be notified on boot if you attempt to assign a prohibited privilege to a zone:

        bleonard@solaris:~$ sudo zoneadm -z myzone reboot
        privilege "dtrace_kernel" is not permitted within the zone's privilege set
        zoneadm: zone myzone failed to verify

    Here's a nice listing of all the privileges and their zone status (default, optional, prohibited): Privileges in a Non-Global Zone.

    Read the article

  • Essbase 11.1.2 - AgtSvrConnections Essbase Configuration Setting

    - by Ann Donahue
    AgtSvrConnections is a documented Essbase configuration setting used in conjunction with the AgentThreads and ServerThreads settings. Basically, when a user logs in to Essbase, an AgentThread handles the connection to the ESSBASE process, AgtSvrConnections governs the connections from the ESSBASE process to the ESSSVR application process, and ServerThreads are then used for end-user activities. In Essbase 11.1.2, the default value of the AgtSvrConnections setting was changed to 5; in previous Essbase releases the default value is 1.

    It is recommended that tuning of the AgtSvrConnections setting be done incrementally, by 1 or 2 at most, based on the number of concurrent Set Active/Clear Active calls. The Essbase DBA Guide and Technical Reference recommend not exceeding the value set for AgentThreads; however, we have found that most customers do not need to exceed a setting of 10. In general, it is OK to set AgtSvrConnections close to the AgentThreads setting. That said, there have been customers that needed an AgentThreads setting greater than 10, and we have found that an AgtSvrConnections setting higher than 5-10 can have a negative impact on Essbase because too many TCP ports are used unnecessarily. As with all Essbase.cfg settings, it is best to set values to what is needed based on process load, not arbitrarily high values.

    In order to monitor and tune the AgtSvrConnections setting, monitor the application log for logins and Set Active/Clear Active messages. If there are a lot of logins and Set Active/Clear Active messages happening in a short period of time, making it appear that logins are taking longer, incrementally increase the AgtSvrConnections setting by 1 or 2, which can then help with login speed. The login performance tolerance differs from one customer environment to another, since other factors can impact this performance, e.g. network latency.

    What happens in Essbase when a user logs in:
    1. ESSBASE issues a Set Active to the ESSSVR process. Each application has its own ESSSVR process.
    2. Set Active then calls MultipleAsyncLogout and waits on the pipe connection.
    3. MultipleAsyncLogout goes back to ESSBASE.
    4. ESSBASE then needs to send the logout back to the ESSSVR process.

    When the AgtSvrConnections setting needs to be increased from the default of 5, it is because Essbase cannot find a free connection, since the previous connections are in use between ESSBASE and ESSSVR. In this example, we may want to increase AgtSvrConnections from 5 to 7 to improve login performance. Again, it is best to set Essbase settings to what is needed based on process load and not arbitrarily set them to high values. In general, stress or performance testing environments using automated tools may need higher-than-normal settings, because automated processes log in and out at high speed. Typically, in a real-life production environment, the settings are much closer to the default values.
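    These settings all live together in essbase.cfg. A rough sketch of what the example tuning above (AgtSvrConnections raised from 5 to 7) might look like; the thread values shown are illustrative placeholders, not recommendations from the article:

        ; essbase.cfg - illustrative values only
        AGENTTHREADS       10
        SERVERTHREADS      20
        AGTSVRCONNECTIONS  7

    A restart of the Essbase agent is normally needed before changes to essbase.cfg take effect.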

    Read the article

  • Gradle for NetBeans RCP

    - by Geertjan
    Start with the NetBeans Paint Application and do the following to build it via Gradle (i.e., no Gradle/NetBeans plugin is needed for the following steps), assuming you've set up Gradle. Do everything below in the Files or Favorites window, not in the Projects window.

    In the application directory "Paint Application", create a file named "settings.gradle", with this content:

        include 'ColorChooser', 'Paint'

    Create another file in the same location, named "build.gradle", with this content:

        subprojects {
            apply plugin: "announce"
            apply plugin: "java"
            sourceSets {
                main {
                    java { srcDir 'src' }
                    resources { srcDir 'src' }
                }
            }
        }

    In the module directory "Paint", create a file named "build.gradle", with this content:

        dependencies {
            compile fileTree("$rootDir/build/public-package-jars").matching { include '**/*.jar' }
        }

        task show << {
            configurations.compile.each { dep -> println "$dep ${dep.isFile()}" }
        }

    Note: the above is a temporary solution. As you can see, the expectation is that the JARs are in the 'build/public-package-jars' folder, which assumes an Ant build has been done prior to the Gradle build. Now run 'gradle classes' in the "Paint Application" folder and everything will compile correctly. So, this is how the Paint Application now looks:

    Preferable to the second 'build.gradle' would be this, which uses the JARs found in the NetBeans Platform:

        netbeansHome = '/home/geertjan/netbeans-dev-201111110600'

        dependencies {
            compile files("$rootDir/ColorChooser/release/modules/ext/ColorChooser.jar")
            def projectXml = new XmlParser().parse("nbproject/project.xml")
            projectXml.configuration.data."module-dependencies".dependency."code-name-base".each {
                if (it.text().equals('org.openide.filesystems')) {
                    def dep = "$netbeansHome/platform/core/" + it.text().replace('.','-') + '.jar'
                    compile files(dep)
                } else if (it.text().equals('org.openide.util.lookup') || it.text().equals('org.openide.util')) {
                    def dep = "$netbeansHome/platform/lib/" + it.text().replace('.','-') + '.jar'
                    compile files(dep)
                } else {
                    def dep = "$netbeansHome/platform/modules/" + it.text().replace('.','-') + '.jar'
                    compile files(dep)
                }
            }
        }

        task show << {
            configurations.compile.each { dep -> println "$dep ${dep.isFile()}" }
        }

    However, when you run 'gradle classes' with the above, you get an error like this:

        geertjan@geertjan:~/NetBeansProjects/PaintApp1/Paint$ gradle classes
        :Paint:compileJava
        [ant:javac] Note: Attempting to workaround javac bug #6512707
        [ant:javac]
        [ant:javac]
        [ant:javac] An annotation processor threw an uncaught exception.
        [ant:javac] Consult the following stack trace for details.
        [ant:javac] java.lang.NullPointerException
        [ant:javac] at com.sun.tools.javac.util.DefaultFileManager.getFileForOutput(DefaultFileManager.java:1058)

    No idea why the above happens, still trying to figure it out. Once the above works, we can start figuring out how to use the NetBeans Maven repo instead, and then the user of the plugin will be able to select whether to use local JARs or JARs from the NetBeans Maven repo. Many thanks to Hans Dockter, who put the above together with me today via Skype!

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application.

    Building

    Set up the library with automatic versioning and a nuspec:
    - Set the library assembly version to auto-increment build and revision: in AssemblyInfo, [assembly: AssemblyVersion("1.0.*")]. This auto-increments build and revision based on the time of the build.
    - Major & Minor: Major should be changed when you have breaking changes, Minor once you have a solid new release. During development I don't increment these.
    - Create a nuspec and version it with the code: in the nuspec, set the version to <version>$version$</version>. This uses the assembly's version, which is auto-incrementing.

    Make changes to code, then run the automated build (ruby/rake): run "rake nuget". The nuget task builds the nuget package and copies it to a local nuget feed. I use an environment variable to point at this so I can change it on a machine level! The nuget command below assumes a nuspec is checked in called Library.nuspec next to the csproj file.

        $projectSolution = 'src\\Library.sln'
        $nugetFeedPath = ENV["NuGetDevFeed"]

        msbuild :build => [:clean] do |msb|
          msb.properties :configuration => :Release
          msb.targets :Build
          msb.solution = $projectSolution
        end

        task :nuget => [:build] do
          sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
        end

    Set up the local nuget feed as a nuget package source (this is only required once per machine). Then go to the consuming project and update the package: Update-Package Library or Install-Package.

    TLDR: change library code, run "rake nuget", run "Update-Package Library" in the consuming application, build/test! If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    Once you have a set of changes that you want to release, consider versioning and possibly increment the minor version if needed. Pick the package out of your local feed, and copy it to a public / shared feed! I have a script to do this where I can drop the package on a batch file. Replace apikey with your nuget feed's apikey, and take out the confirm(s) if you don't want them.

        @ECHO off
        echo Upload %1?
        set /P anykey="Hit enter to continue "
        nuget push %1 apikey
        set /P anykey="Done "

    Note: it helps to prune all the unnecessary versions created during testing from your local feed once you are done and ready to publish.

    TLDR: consider the version number, then run the command to copy to the public feed.

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance
    If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But, if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type, you'll see that only those databases that are in Full Recovery will be listed. This makes it very easy to be sure you have a log backup set up for all the databases that need one, and none of the databases where you won't be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks. You can see two in the screen above and more in each of the screens in other topics below this one. Clicking on these will open a window with additional information about the topic in question, which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs. Here's an example:

    Backup Copies
    As a part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup, after completing the backup so it doesn't cause any additional blocking or resource use within the backup process, to the network location you define. Creating a copy acts as a mechanism of protection for your backups. You can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get it done within the native Microsoft backup engine.

    Offsite Storage
    Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard, or even the command line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but it's built right into the product to get this done. You'll be directed to the web site for our hosted storage where you can set up an account.

    Compression
    If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater and you have a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can just sacrifice some CPU time and get even more compression. You decide. For just a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53 MB. Our file was 33 MB. That's a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system. So for this test, if you wanted maximum compression with minimum CPU use you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but will use fewer resources. And that compression is still better than the native one by 10%.

    Restore Testing
    Backups are vital. But a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't do a complete test of the backup. The only complete test is a restore. So, what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule, all done through our wizard which gives you as much guidance as you get when running backups, you get the option of creating a reminder to create a job to test your restores. You can enable this or disable it as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide if you're going to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice) you can just overwrite the original database if it's there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run, and which error messages to include, limit or add to the checks being run. With this you could offload some DBCC checks from your production system, so that you only run the physical checks on your production box but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of the tests passing. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don't just have backups, but that you have tested backups.

    Single Point of Management
    If you have more than one server to maintain, getting backups set up could be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases and all your servers' backups from a single location. You'll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard
    If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary
    You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.
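    For comparison, the native checks mentioned above (CHECKSUM during backup, VERIFYONLY afterwards, and built-in compression on editions that support it) look roughly like this in plain T-SQL; the file path and database name are placeholders:

        BACKUP DATABASE AdventureWorks2012
        TO DISK = N'D:\Backups\AdventureWorks2012.bak'
        WITH CHECKSUM, COMPRESSION, INIT;   -- COMPRESSION requires an edition that supports it

        RESTORE VERIFYONLY
        FROM DISK = N'D:\Backups\AdventureWorks2012.bak'
        WITH CHECKSUM;

    As the article points out, even a clean VERIFYONLY is not a full test; only an actual RESTORE, ideally followed by DBCC CHECKDB, proves the backup is usable.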

    Read the article

  • Partnering with a web designer to build and launch websites for fun. Where should I look for someone? [closed]

    - by FastCoder
    I have been working with web sites and web development for a long time, and as a result I am able to build and launch complex websites (a social network, a dating site, a Stack Overflow clone, etc.) in little time (1-3 weeks). The problem is: I know very little about CSS, page layout, Photoshop and graphic design. Of course I know HTML, but when it comes to putting together something that looks good I am horrible. I just don't have the artistic skills. I want to launch some websites without any silly or naive intent to take over the world, just for fun and to improve my portfolio. How do you recommend I approach this problem of finding the right partner with the same mindset? Where should I look for this person? I have no idea... :(

    Read the article

  • CodePlex Daily Summary for Tuesday, July 16, 2013

    CodePlex Daily Summary for Tuesday, July 16, 2013Popular ReleasesOsmSharp: OsmSharp v3.1.4840.2: Release notes: - Fixed issue: http://osmsharp.codeplex.com/workitem/1246 'Optimized route has zero TotalDistance' - Fixed lazy loading strategy for datasource routing. - Fixed precision bug in View. - Added unittests for instruction generation. - Added daily build nuget packages: Daily build packages at http://5.9.56.3:8080/guestAuth/app/nuget/v1/FeedService.svc/ - Fixed default direction on roundabout: roundabouts are always oneway. - Take average left+right turn. - Fixed UTF8 encoding issu...PMU Connection Tester: PMU Connection Tester v4.4.0: This is the current release build of the PMU Connection Tester, version 4.4.0 This version of the connection tester was released with openPDC 1.5 SP1 and openPDC 2.0 BETA. This application requires that .NET 4.0 already be installed on your system. Note this is the last release of the PMU Connection Tester that will built on .NET 4.0 using the TVA Code Library and the Time-series Framework. Future releases of the PMU Connection Tester will be built on .NET 4.5 (or later) using the Grid Sol...WatchersNET.SiteMap: WatchersNE​T.SiteMap 01.03.08: changes Localized Tab Name is now usedStackWalker - Walking the callstack: StackWalker - 2013-07-15: This project describes and implements the (documented) way to walk a callstack for any thread (own, other and remote). It has an abstraction layer, so the calling app does not need to know the internals. This release has some bugfixes for VC5/6.GoAgent GUI: GoAgent GUI 1.1.0 ???: ???????,??????????????,?????????????? ???????? ????: 1.1.0? ??? ?????????? ????????? ??:??cn/hk????,?????????????????????。Hosts??????google_hk。Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crash if 'ShowPendingUpdates' is start with wrong credentials. Fix a bug where WPP crash if ArrivalDateAfter and ArrivalDateBefore is equal in the ComputerView. Add a filter in the ComputerView. (Thanks to NorbertFe for this feature request) Add an option, when right-clicking on a computer, you can ask for display the current logon user of the remote computer. Add an option in settings to choose if WPP ping remote computers using IPv4, IPv6 or IPv6 and, if fail, IP...Lab Of Things: vBeta1: Initial release of LoTVidCoder: 1.4.23: New in 1.4.23 Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4 Updated HandBrake core to 0.9.9 Blu-ray subtitle (PGS) support Additional framerates: 30, 50, 59.94, 60 Additional sample rates: 8, 11.025, 12 and 16 kHz Additional higher bitrates for audio Same as Source Constant Framerate 24-bit FLAC encoding Added Windows Phone 8 and Apple TV 3 presets Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. 
Add a few more known global names for improved ES6 compatibility update Nuget package to version 2.5 and automatically add the AjaxMin.targets to your project when you update the package...Outlook 2013 Add-In: Categories and Colors: This new version has a major change in the drawing of the list items: - Using owner drawn code to format the appointments using GDI (some flickering may occur, but it looks a little bit better IMHO, with separate sections). - Added category color support (if more than one category, only one color will be shown). Here, the colors Outlook uses are slightly different than the ones available in System.Drawing, so I did a first approach matching. ;-) - Added appointment status support (to show fr...TypePipe: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of the TypePipe for .NET 4.5. Find the complete release notes for the build here: Release Notes.re-linq: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of re-linq for .NET 4.5. Find the complete release notes for the build here: Release Notes To use re-linq with .NET 3.5, use a 1.13.x build.Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings Added update notifications Added ability to disable GPU acceleration Fixed connection bugsDataDevelop: Beta 0.6.5: Hotfix bug in Python Table.ImportAll method Updated External Libraries Fixes in Excel Exportation Modify ConnectionString refreshes the Properties Window correctlyUser Group Labs: User Group Data: 01.00.00: This release has the following updates and new features: Initial release with a minimal feature set Easy to use (just add to the social group details page) Edit common user group properties System Requirements DNN v07.00.02 or newer .Net Framework v4.0 or newerCarrotCake, an ASP.Net WebForms CMS: Binaries and PDFs - Zip Archive (v. 4.3 20130713): Product documentation and additional templates for this version is included in the zip archive, or if you want individual files, visit the http://www.carrotware.com website. Templates, in addition to those found in the download, can be downloaded individually from the website as well. If you are coming from earlier versions, make a precautionary backup of your existing website files and database. When installing the update, the database update engine will create the new schema items (if you...Dalmatian Build Script: Dalmatian Build 0.1.3.0: -Minor bug fixes -Added Choose<T> and ChooseYesNo to Console objectPushover.NET: Pushover.NET - Stable Release 10 July 2013: This is the first stable release of Pushover.NET. It includes 14 overloads of the SendNotification method, giving you total flexibility of sending Pushover notifications from your client. Assembly is built to .NET 2.x so it can be called from .NET 2.x, 3.x and 4.x projects. Also available is the Test Harness. This is a small GUI that you can use to test calls to Pushover.NET's main DLL. It's almost fully functional--the sound effects haven't been fully configured so no matter what you pick ...MCEBuddy 2.x: MCEBuddy 2.3.14: 2.3.14 BETA is available through the Early Access Program.Click here https://mcebuddy2x.codeplex.com/discussions/439439 for details and to get access to Early Access Program to download latest releases. Changelog for 2.3.14 (32bit and 64bit) NEW FEATURES: 1. ENHANCEMENTS: 2. Improved eMail notifications 3. Improved metrics details 4. Support for larger history (INI) file (about 45,000 sections, each section can have about 1500 entries) BUG FIXES: 5. 
Fix for extracting Movie release year from...Azure Depot: Flask: Flask Version 01New Projects(??:wwqgtxx-goagent)???goagent/??wallproxy???????????????: =*greatagent*(??:wwqgtxx-goagent)= = [downloads downloads??] = =[WhyMove ???????]= ==???goagent/??wallproxy???????????????,?????goagent?wallproxy??????????,??Azure Anonymous SAS URL Generator: The Azure Anonymous SAS URL Generator allows you to generate anonymous Shared Access Signature URLs for individual Blobs stored within a given storage containerBizTalk ESB Toolkit Enterprise Library machine.config Toggler: This tool provides an instant on/off switch for the enterpriseLibrary.ConfigurationSource changes that the BizTalk ESB Toolkit makes to machine.config.Common .NET Utilities: Just some common utilities I've developed over the years that are shared among my various projects.DIY Online System: To create a Philippine Standard Online Payroll SystemDynamics Ax CipherLib: The Dynamics Ax CipherLib is a simple X.509 certificate based cipher implementation that allows to encrypt/decrypt S/MIME or XML messages with Dynamics Ax 4.0,GpsBox: We wanted to create a device which is sending the current position and any user is able to look up the current position of it.Ingenious Framework: Ingenious Framework is an easy to use, lightweight framework that hides semantic web technologies from you as a developer, but harnesses its benefits.Lab Of Things: Lab of Things is a flexible platform for experimental research that uses connected devices in homes. Orchard Deployment: Deployment and replication of content between one or more Orchard CMS instances. Move content individually, using workflows or custom subscriptions.PluginManagement.Net: .NET Plugin Loader Framework using MEF(Microsoft Extensibility Framework). It has a generic loader which loads plugin that implements generic interface IPlugin.QlikView Management API in vb.net: For more info see: http://community.qlikview.com/docs/DOC-3647 a conversion from c# to vb.net Recover Deleted Emails Microsoft Exchange Server: Get an effective way to recover deleted emails from Exchange server and browse it with MS Outlook or MS Exchange server depending upon the users requirement.Shindo's Race System: Special Edition: Shindo's Race System: Special EditionSP Solution Builder: Simple summary of project will be added latertestfailure0716: testTFS Build Custom Activities: TFS Build Custom extensions offers you a list of TFS build activities and a WorkFlow to use them.Upida.net: If you use Asp.net MVC and any NHibernate, than Upida is for you. Upida favors knockout.js. You can download example code, implemented with knockout.js.

    Read the article

  • Page Load Time - "Waiting on..." taking ages. What part of page request process is hung?

    - by James
    I have a new clustered Magento site on a development server that is made up of 2 web servers and 1 database server. I have optimized the site in all the areas I know (gzip, increasing PHP memory limits, increasing database memory limits, etc.), but sometimes page loading gets stuck on 'waiting for xxx.xx.xx.xxx' (in Chrome and other browsers; Chrome just shows it that way). It can sit there for 40+ seconds, and sometimes it just never loads and I close it in frustration. What part of the page loading process is hung here? Is it a server issue, a database issue, a platform issue? I need to know where to start, or whether to push the hosting provider about it.
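    One way to narrow down which phase is stalling is to time the individual steps of a single request from the command line. A minimal sketch using curl's built-in timing variables (the URL is a placeholder); a long gap before the first byte usually points at server-side work such as PHP or database time rather than the network:

        curl -o /dev/null -s -w "dns: %{time_namelookup}s\nconnect: %{time_connect}s\nfirst byte: %{time_starttransfer}s\ntotal: %{time_total}s\n" \
             http://dev.example.com/

    Running this from both inside and outside the hosting network can also help separate a slow application from a slow connection to the provider.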

    Read the article

  • Twitter's new approach to third-party applications? How do you see this move as a developer, especially if you plan to build a Twitter client?

    - by MobileDev123
    Just this morning I read news that Twitter has issued a warning to developers not to build any new third-party clients; the official announcement can be read here. As a programmer, how do you see this move by Twitter? Does it seem that they want to standardize the behavior of third-party clients, or that they don't want any new clients at all, in favor of the default clients they have made? What if somebody wants to create a new client? Are there any guidelines that, if followed, ensure that we can create a new mobile client? Or should we stop thinking about it? What are the options for developers who want to build clients for Twitter? I realize that I have asked too many questions, but I still think there can be one common answer.

    Read the article

  • The binary you uploaded was invalid. A pre-release beta version of the SDK was used to build the app

    - by Nick
    I am having a problem submitting my new app to the App Store. iTunes Connect gives me the error: "The binary you uploaded was invalid. A pre-release beta version of the SDK was used to build the application." I haven't changed anything; I can compile against an ad hoc certificate and that works fine. I uploaded another app yesterday and that worked fine too. All the targets and the project info are set to compile against the base SDK iPhone OS 3.0. I even upgraded to the latest SDK, but I get the same result. Any ideas?

    Read the article

  • Implementing Unit Testing on the iPhone

    - by james.ingham
    Hi, so I've followed this tutorial to set up unit testing on my app, but I got a little stuck. At bullet point 8 in that tutorial it shows this image, which is what I should be expecting when I build: However, this isn't what I get when I build. I get this error message: Command /bin/sh failed with exit code 1, as well as the error message the unit test has created. Then, when I expand the first error I get this:

        PhaseScriptExecution "Run Script" "build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh"
        cd "/Users/james/Desktop/FYP/3D Pool"
        setenv ACTION build
        setenv ALTERNATE_GROUP staff
        ...
        setenv XCODE_VERSION_MAJOR 0300
        setenv XCODE_VERSION_MINOR 0320
        setenv YACC /Developer/usr/bin/yacc
        /bin/sh -c "\"/Users/james/Desktop/FYP/3D Pool/build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh\""
        /Developer/Tools/RunPlatformUnitTests.include:412: note: Started tests for architectures 'i386'
        /Developer/Tools/RunPlatformUnitTests.include:419: note: Running tests for architecture 'i386' (GC OFF)
        objc[12589]: GC: forcing GC OFF because OBJC_DISABLE_GC is set
        Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' started at 2010-01-04 21:05:06 +0000
        Test Suite 'LogicTests' started at 2010-01-04 21:05:06 +0000
        Test Case '-[LogicTests testFail]' started.
        /Users/james/Desktop/FYP/3D Pool/LogicTests.m:17: error: -[LogicTests testFail] : Must fail to succeed.
        Test Case '-[LogicTests testFail]' failed (0.000 seconds).
        Test Suite 'LogicTests' finished at 2010-01-04 21:05:06 +0000.
        Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.000) seconds
        Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' finished at 2010-01-04 21:05:06 +0000.
        Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.002) seconds
        /Developer/Tools/RunPlatformUnitTests.include:448: error: Failed tests for architecture 'i386' (GC OFF)
        /Developer/Tools/RunPlatformUnitTests.include:462: note: Completed tests for architectures 'i386'
        Command /bin/sh failed with exit code 1

    Now this is very odd, as it is running the tests (and succeeding, as you can see from my STFail firing), because if I add a different test which passes I get no errors, so the tests are working fine. But why am I getting this extra build failure? It may also be of note that when downloading solutions/templates which should work out of the box, I get the same error. I'm guessing I've set something up wrong here, but I've just followed a tutorial 100% correctly!! If anyone could help I'd be very grateful! Thanks

    EDIT: According to this blog, this post and a few other websites, I'm not the only one getting this problem. It has been like this since the release of Xcode 3.2, assuming the Apple dev center documents and tutorials etc. are pre-3.2 as well. However, some say it's a known issue, whereas others seem to think this was intentional. I for one would like both the extended console output and the in-code messages, and I certainly do not like the "Command /bin/sh..." error, and I really think they would have documented such a change. Hopefully it will be fixed soon anyway.

    UPDATE: Here's confirmation that something changed with the release of Xcode 3.2.1. This image: is from my test build using 3.2.1. This one is from an older version (3.1.4): . (The project for both was unchanged.)

    Read the article

  • Silverlight 4 application failing to build in Blend 4 Beta

    - by loyalpenguin
    Hi, I'm creating a simple Silverlight application using Silverlight 4 and Expression Blend 4 Beta, but when I run the application I get the following error:

        C:\Users\WebDevelop1\Documents\Expression\Blend 4 Beta\Projects\SilverlightApplication1\SilverlightApplication1.sln.metaproj : error MSB4126: The specified solution configuration "Debug|HPD" is invalid. Please specify a valid solution configuration using the Configuration and Platform properties (e.g. MSBuild.exe Solution.sln /p:Configuration=Debug /p:Platform="Any CPU") or leave those properties blank to use the default solution configuration.
        Done building project "SilverlightApplication1.sln" -- FAILED.
        Build failed.

    I'm relatively new to Silverlight and not really sure where to start. This has happened with every project I have created in Blend, but when I run it in VS2010 it runs perfectly. Is this a known issue with the Beta release, or am I possibly missing something? Thanks in advance.

    Read the article

  • How to compress CSS/JS in VS2010 Web Deployment Build Template?

    - by RPM1984
    Hi all, we've recently upgraded from VS2008 to VS2010 (and hence from a Web Deployment Project to a proper deployment project). What's new in VS2010 web deployments is the introduction of Workflow as the build process template. Previously, we used an MSBuild task in the WDP to execute the Yahoo YUI JavaScript/CSS compressor to minify/compress JavaScript and CSS files. Has anyone managed to accomplish this with Visual Studio 2010? I have seen the new "SquishIt" compressor created by Justin Etheridge, but it's not ideal as it "squishes" on the fly (e.g. on Application_Start in Global.asax), which means you still have to push all the uncompressed files out to your web server before squishing. In the Workflow designer I can see a toolbox item called "MSBuild", but I just don't know how to use it to accomplish what I want. I've been searching high and wide, and no one seems to know how. Surely someone out there has done this.
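    One approach that does not depend on the Workflow build template at all is to hook a plain MSBuild target into the web project itself and shell out to the YUI Compressor jar. A rough sketch, assuming Java and a yuicompressor jar are available at the paths shown (the jar location, version and target name are placeholders):

        <!-- added to the web project's .csproj (or a .wpp.targets file) -->
        <Target Name="MinifyStaticAssets" AfterTargets="Build">
          <ItemGroup>
            <JsFiles Include="$(MSBuildProjectDirectory)\Scripts\**\*.js"
                     Exclude="$(MSBuildProjectDirectory)\Scripts\**\*.min.js" />
          </ItemGroup>
          <!-- one java invocation per file via MSBuild item batching -->
          <Exec Command="java -jar &quot;$(MSBuildProjectDirectory)\tools\yuicompressor-2.4.2.jar&quot; --type js -o &quot;%(JsFiles.RootDir)%(JsFiles.Directory)%(JsFiles.Filename).min.js&quot; &quot;%(JsFiles.FullPath)&quot;" />
        </Target>

    A similar item group with --type css covers stylesheets, and the same target could be attached to the packaging step instead of Build if only the deployed output should contain minified files.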

    Read the article

  • Will an ASP.NET MVC 2 app build with the .NET 4.0 RC and VS2010 RC run on a prod server with the .NE

    - by Chris
    I have an MVC2 app I developed with the VS2010 RC and the .NET 4.0 RC. My production server and my client's production server have the .NET 4.0 RC. Can the RTM of .NET 4.0 on a server support an app developed with the RC technologies? What about the other way around? Can I use VS2010 RTM and deploy an app to production if the prod server is still on the .NET 4.0 RC? Obviously it would be ideal to sync everything up to RTM, but I don't have that option right now because the client doesn't have access to VS2010 RTM yet, and they would like to be able to open and build the project.

    Read the article

  • Can Silverlight (SLOOB) start a process even with full trust?

    - by Jamey McElveen
    I have been tasked with writing an installer as a Silverlight out-of-browser application. I need to:
    - get the version of a local EXE
    - check a web service to see whether it is the most recent version
    - download a zip if not
    - unpack the zip
    - overwrite the old EXE
    - start the EXE
    This installer app is written in .NET WinForms now, but the .NET Framework is an obstacle for people to download. The recommended solution is to use a SLOOB; however, I am not sure how to assign full trust. If I assign full trust, can I start a process? Thanks
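    In a trusted (elevated) Silverlight 4 out-of-browser application running on Windows, the usual way to launch an external program is COM automation via WScript.Shell, since the Silverlight BCL has no Process class. A minimal sketch; the EXE path is a placeholder and error handling is omitted:

        using System.Runtime.InteropServices.Automation;
        using System.Windows;

        public static class Launcher
        {
            public static void StartInstalledExe()
            {
                // Only available out-of-browser, with elevated trust, on Windows.
                if (Application.Current.HasElevatedPermissions && AutomationFactory.IsAvailable)
                {
                    dynamic shell = AutomationFactory.CreateObject("WScript.Shell");
                    shell.Run(@"C:\Program Files\MyApp\MyApp.exe");   // placeholder path
                }
            }
        }

    Downloading and unpacking the zip would still have to go through Silverlight's own networking and file APIs, so whether the full installer workflow fits in a SLOOB is a separate question.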

    Read the article

  • VS 2010 Test Runner error "The agent process was stopped while the test was running."

    - by driis
    In Visual Studio 2010, I have a number of unit tests. When I run multiple tests at one time using test lists, I sometimes receive the following error for one or more of the tests: "The agent process was stopped while the test was running." It is never the same test failing, and if I try to run the test again, it succeeds. I found this bug report on Connect, which seems to be the same problem, but it does not offer a solution. Has anyone else seen this behaviour? How can I avoid it?

    Read the article

  • Problem reading from the StandardOutput of ftp.exe. Possible System.Diagnostics.Process Framework b

    - by SoMoS
    Hello, I was trying some things with executing console applications when I found this problem handling the I/O of the ftp.exe command that everybody has on their computer. Just try this code:

        m_process = New Diagnostics.Process()
        m_process.StartInfo.FileName = "ftp.exe"
        m_process.StartInfo.CreateNoWindow = True
        m_process.StartInfo.RedirectStandardInput = True
        m_process.StartInfo.RedirectStandardOutput = True
        m_process.StartInfo.UseShellExecute = False
        m_process.Start()

        m_process.StandardInput.AutoFlush = True
        m_process.StandardInput.WriteLine("help")

        MsgBox(m_process.StandardOutput.ReadLine())
        MsgBox(m_process.StandardOutput.ReadLine())
        MsgBox(m_process.StandardOutput.ReadLine())
        MsgBox(m_process.StandardOutput.ReadLine())

    This should show you the text that ftp sends you when you do that from the command line:

        Los comandos se pueden abreviar.  Comandos:

        !          delete       literal      prompt       send
        ?          debug        ls           put          status
        append     dir          mdelete      pwd          trace
        ascii      disconnect   mdir         quit         type
        bell       get          mget         quote        user
        binary     glob         mkdir        recv         verbose
        bye        hash         mls          remotehelp
        cd         help         mput         rename
        close      lcd          open         rmdir

    Instead of that, I'm getting the first line and three more with garbage; after that, the call to ReadLine blocks as if there were no data available. Any hints about that?
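    One thing worth trying is reading the output asynchronously instead of with blocking ReadLine calls, so the program keeps receiving whatever ftp.exe does write even if a line never arrives. This is only a sketch of the standard Process async-read pattern, not a confirmed fix; ftp.exe may simply not write its help text to the redirected pipe the way other console programs do:

        Imports System.Diagnostics

        Module FtpOutputDemo
            Sub Main()
                Dim p As New Process()
                p.StartInfo.FileName = "ftp.exe"
                p.StartInfo.CreateNoWindow = True
                p.StartInfo.RedirectStandardInput = True
                p.StartInfo.RedirectStandardOutput = True
                p.StartInfo.UseShellExecute = False

                ' Receive output line by line instead of blocking on ReadLine.
                AddHandler p.OutputDataReceived, AddressOf OnOutputLine

                p.Start()
                p.BeginOutputReadLine()
                p.StandardInput.AutoFlush = True
                p.StandardInput.WriteLine("help")
                p.StandardInput.WriteLine("bye")

                p.WaitForExit()
            End Sub

            Private Sub OnOutputLine(ByVal sender As Object, ByVal e As DataReceivedEventArgs)
                If e.Data IsNot Nothing Then   ' Nothing marks the end of the stream
                    Console.WriteLine(e.Data)
                End If
            End Sub
        End Module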

    Read the article

  • Validate a belongs_to association in a build situation.

    - by Victor Martins
    I have a Mission model that has_many Tasks, and Task belongs_to Mission. For security I've added this validation to the Task model:

        validates_presence_of :mission_id
        validates_numericality_of :mission_id

    But the problem is that when I create a Mission and add tasks like this:

        @mission.tasks.build

    the validation returns an error, because the mission id on the task is null (the mission hasn't been created yet). If I delete the validation, the Mission and Task are created successfully, but how can I keep the validation and still have this work? I could do a callback after the save, but I don't think that's right, because I don't want to save Tasks without a mission_id.

    P.S. I'm hiding my mission field on the form. If I have it visible, it shows the current mission and everything is OK. But if I hide it, the error happens:

        <%= f.hidden_field :mission, :label => "Missão" %>

    Is the form resetting the attributes given by the controller in the new action?
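    One commonly suggested way around this (an assumption on my part, not something from the question) is to validate the association object rather than the raw foreign key, and declare :inverse_of on both sides so that tasks built through @mission.tasks.build already point at the in-memory mission before anything is saved. Roughly, on Rails versions where :inverse_of is available (2.3.6+/3.x):

        class Mission < ActiveRecord::Base
          has_many :tasks, :inverse_of => :mission
        end

        class Task < ActiveRecord::Base
          belongs_to :mission, :inverse_of => :tasks

          # validate the association instead of mission_id, so tasks built
          # against an unsaved in-memory mission still pass validation
          validates_presence_of :mission
        end

    With that in place, @mission.save saves the mission first and fills in mission_id on each task, while a Task created with no mission at all still fails validation.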

    Read the article

  • Better to build or buy a compute grid platform?

    - by James B
    I am looking to do some quite processor-intensive brute force processing for string matching. I have run my prototype in a multi-threaded environment and compared the performance to an implementation using GridGain with a couple of nodes (also multithreaded). The performance I observed was that my GridGain implementation performed slower than my multithreaded implementation. It could be the case that there was a flaw in my GridGain implementation, but it was only a prototype, and I thought the results were indicative. So my question is this: what are the advantages of having to learn and then build an implementation for a particular grid platform (Hadoop, GridGain, or EC2 if going hosted; other suggestions welcome), when one could fairly easily put together a lightweight compute grid platform with a much shallower learning curve? In other words, what do we get for free with these cloud/grid platforms that is worth having or tricky to implement? (Please note, I don't have any need for a data grid.) Cheers, -James (p.s. Happy to make this community wiki if need be)

    Read the article

  • Microsoft Dynamics CRM -- Do people build websites with it?

    - by Marcy Sutton
    Forgive my ignorance, but do people build websites with Microsoft Dynamics CRM? I have a potential client who says that is the technology they will use for a new web project, for which I would be doing the HTML templating. I want to learn all I can as I am new to this particular system, but I can't seem to find anything related to web building and CRM. Is it more likely the client is developing another piece of technology to work with the CRM that they are neglecting to tell us about? Any experience or insight about this process is greatly appreciated!

    Read the article

  • Can iptables allow Squid to process a request, then redirect the response packets to another port?

    - by Dan H
    I'm trying to test a fancy traffic analyzer app, which I have running on port 8890. My current plan is to let any HTTP request come into Squid, on port 3128, and let it process the request, and then just before it sends the response back, use iptables to redirect the response packets (leaving port 3128) to port 8890. I've researched this all night, and tried many iptables commands, but I'm missing something and my hair is falling out. Is this even possible? If so, what iptables incantation could do it? If not, any idea what might work on a single host, given multiple remote browser clients?

    Read the article

  • How do you build a Request-Response service using Asyncore in Python?

    - by Casey
    I have a third-party protocol module (SNMP) that is built on top of asyncore. The asyncore interface is used to process response messages. What is the proper technique for designing a client that generates the request side of the protocol while the asyncore main loop is running? I can think of two options right now:
    1. Use the loop/timeout parameters of asyncore.loop() to allow my client program time to send the appropriate request.
    2. Create a client asyncore dispatcher that will be executed in the same asyncore processing loop as the receiver.
    Which is the better option? I'm working on the second solution, because the protocol API does not give me direct access to the asyncore parameters. Please correct me if I've misunderstood the proper technique for utilizing asyncore.
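    For the second option, the request side can itself be an asyncore.dispatcher registered in the same socket map, so sending happens from within the one asyncore.loop() call. A minimal, generic sketch (the host, port and payload are placeholders; the real SNMP module would build the actual request bytes):

        import asyncore
        import socket

        class RequestClient(asyncore.dispatcher):
            """Sends one request and hands any response data to a callback."""

            def __init__(self, host, port, payload, on_response):
                asyncore.dispatcher.__init__(self)
                self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
                self.connect((host, port))
                self.buffer = payload
                self.on_response = on_response

            def writable(self):
                # Only ask for write events while there is something left to send.
                return len(self.buffer) > 0

            def handle_write(self):
                sent = self.send(self.buffer)
                self.buffer = self.buffer[sent:]

            def handle_read(self):
                data = self.recv(4096)
                if data:
                    self.on_response(data)

            def handle_close(self):
                self.close()

        def show(data):
            print('got %d bytes back' % len(data))

        # The protocol module's dispatchers share the same default socket map,
        # so a single asyncore.loop() call drives both the receiver and this client.
        RequestClient('127.0.0.1', 8080, b'hello', show)
        asyncore.loop(timeout=1)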

    Read the article

  • Implementing Unit Testing with the iPhone SDK

    - by ing0
    Hi, so I've followed this tutorial to set up unit testing on my app, but I got a little stuck. At bullet point 8 in that tutorial it shows this image, which is what I should be expecting when I build: However, this isn't what I get when I build. I get this error message: Command /bin/sh failed with exit code 1, as well as the error message the unit test has created. Then, when I expand the first error I get this:

        PhaseScriptExecution "Run Script" "build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh"
        cd "/Users/james/Desktop/FYP/3D Pool"
        setenv ACTION build
        setenv ALTERNATE_GROUP staff
        ...
        setenv XCODE_VERSION_MAJOR 0300
        setenv XCODE_VERSION_MINOR 0320
        setenv YACC /Developer/usr/bin/yacc
        /bin/sh -c "\"/Users/james/Desktop/FYP/3D Pool/build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh\""
        /Developer/Tools/RunPlatformUnitTests.include:412: note: Started tests for architectures 'i386'
        /Developer/Tools/RunPlatformUnitTests.include:419: note: Running tests for architecture 'i386' (GC OFF)
        objc[12589]: GC: forcing GC OFF because OBJC_DISABLE_GC is set
        Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' started at 2010-01-04 21:05:06 +0000
        Test Suite 'LogicTests' started at 2010-01-04 21:05:06 +0000
        Test Case '-[LogicTests testFail]' started.
        /Users/james/Desktop/FYP/3D Pool/LogicTests.m:17: error: -[LogicTests testFail] : Must fail to succeed.
        Test Case '-[LogicTests testFail]' failed (0.000 seconds).
        Test Suite 'LogicTests' finished at 2010-01-04 21:05:06 +0000.
        Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.000) seconds
        Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' finished at 2010-01-04 21:05:06 +0000.
        Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.002) seconds
        /Developer/Tools/RunPlatformUnitTests.include:448: error: Failed tests for architecture 'i386' (GC OFF)
        /Developer/Tools/RunPlatformUnitTests.include:462: note: Completed tests for architectures 'i386'
        Command /bin/sh failed with exit code 1

    Now this is very odd, as it is running the tests (and succeeding, as you can see from my STFail firing), because if I add a different test which passes I get no errors, so the tests are working fine. But why am I getting this extra build failure? It may also be of note that when downloading solutions/templates which should work out of the box, I get the same error. I'm guessing I've set something up wrong here, but I've just followed a tutorial 100% correctly!! If anyone could help I'd be very grateful! Thanks

    EDIT: According to this blog, this post and a few other websites, I'm not the only one getting this problem. It has been like this since the release of Xcode 3.2, assuming the Apple dev center documents and tutorials etc. are pre-3.2 as well. However, some say it's a known issue, whereas others seem to think this was intentional. I for one would like both the extended console output and the in-code messages, and I certainly do not like the "Command /bin/sh..." error, and I really think they would have documented such a change. Hopefully it will be fixed soon anyway.

    UPDATE: Here's confirmation that something changed with the release of Xcode 3.2.1. This image: is from my test build using 3.2.1. This one is from an older version (3.1.4): . (The project for both was unchanged.) Edit: Image URLs updated.

    Read the article
