Search Results

Search found 21319 results on 853 pages for 'state management'.

Page 359 of 853

  • Ubuntu on Pandaboard giving me troubles

    - by Jeroen Jacobs
    I'm trying to install the OMAP4 extras for Ubuntu on my Pandaboard. For some reason, a few packages can't seem to agree with each other. This is what I did so far: installed Ubuntu 11.10 on an SD card; powered on the Pandaboard and let it finish its initial install; did an "apt-get update" and "apt-get upgrade" to install updates. So far, everything went fine, and I was quite happy with my Pandaboard, but then I made the mistake of typing this: apt-get install ubuntu-omap4-extras At first, everything seemed OK, and it started downloading and installing. But then after a while it just crashed. I tried it again, but then it gave me this: Reading package lists... Done Building dependency tree Reading state information... Done ubuntu-omap4-extras is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: gstreamer0.10-openmax : Depends: gstreamer0.10-plugins-bad but it is not going to be installed gstreamer0.10-plugin-ducati : Depends: gstreamer0.10-plugins-bad but it is not going to be installed ubuntu-omap4-extras-multimedia : Depends: gstreamer0.10-plugins-bad (>= 0.10.22-2ubuntu4+ti1.5) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). So I tried the suggestion, apt-get -f install: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: gstreamer0.10-plugins-bad The following NEW packages will be installed: gstreamer0.10-plugins-bad 0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded. 88 not fully installed or removed. Need to get 0 B/1,794 kB of archives. After this operation, 4,571 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 143575 files and directories currently installed.) Unpacking gstreamer0.10-plugins-bad (from .../gstreamer0.10-plugins bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb) ... dpkg: error processing /var/cache/apt/archives/gstreamer0.10-plugins-bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb (--unpack): trying to overwrite '/usr/lib/libgstbasecamerabinsrc-0.10.so.0.0.0', which is also in package gstreamer0.10-plugins-good 0.10.30-1ubuntu7.1 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/gstreamer0.10-plugins-bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb E: Sub-process /usr/bin/dpkg returned an error code (1) It seems like two packages (plugins-good and plugins-bad) are fighting over the same library. Any idea how to fix this?
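
    A common way past a dpkg file conflict like this one (a general apt/dpkg technique rather than anything specific to the OMAP4 packages, so treat it as a sketch and use it at your own risk) is to unpack the already-downloaded .deb with --force-overwrite and then let apt finish configuring the half-installed packages:

    sudo dpkg -i --force-overwrite /var/cache/apt/archives/gstreamer0.10-plugins-bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb
    sudo apt-get -f install

    The first command lets dpkg replace libgstbasecamerabinsrc-0.10.so.0.0.0 even though gstreamer0.10-plugins-good also ships it; the second asks apt to finish off the 88 packages left half-installed.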

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practice Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job. Select Steps and then edit the appropriate entries. Backup BizTalk Server (BizTalkMgmtDb) This job consists of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory. BackupFull exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */ The frequency here is set/left as daily, the name is left as BTS, and you must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup; the default is midnight UTC. For example: exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22 MarkAndBackupLog exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */ You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time: exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1 ClearBackupHistory exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7 This will clear out the instances in the MarkLog table older than 7 days. DTA Purge and Archive (BizTalkDTADb) This job consists of a single step, Archive and Purge: exec dtasp_BackupAndPurgeTrackingDatabase 0, --@nLiveHours tinyint, 1, --@nLiveDays tinyint = 0, 30, --@nHardDeleteDays tinyint = 0, null, --@nvcFolder nvarchar(1024) = null, null, --@nvcValidatingServer sysname = null, 0 --@fForceBackup int = 0 Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than HardDeleteDays will also be deleted - this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down while the instance is allowed to continue, thus preventing the DTA database from growing indefinitely. HardDeleteDays should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error "DTA Purge and Archive (BizTalkDTADb) Job failed. The @nvcFolder parameter cannot be null." (visible in SQL Server Management Studio under Job Activity Monitor > View History, against the Archive and Purge step). How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as: exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0 On a live server you may want to adjust these figures: exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • Outstanding SQL Saturday

    - by merrillaldrich
    I had the privilege to attend the SQL Saturday held in Redmond today, and it was really outstanding. Among the many sessions, I especially enjoyed and took a lot of useful information away from Greg Larsen’s Dynamic Management Views session, Kalen Delaney’s Compression Session – I am planning to implement 2008 Enterprise compression on my company’s data warehouse later this year – Remus Rusanu’s session on Service Broker to process NAP data, and Matt Masson’s presentation on high performance SSIS...(read more)

    Read the article

  • Week in Geek: Study finds Men more Likely to Fall for Facebook Scams

    - by Asian Angel
    This week we learned how to “read Blue Screen codes, clean your computer, & get started with scripting”, upgrade or install Mac OS X Lion on a Hackintosh using UniBeast, use Amazon’s barcode scanner to easily buy anything from your phone, had fun with a great set of geeky do-it-yourself projects for pets, got introduced to How-To Geek’s new Google+ account, and more. Photo by mac_filko. Use Amazon’s Barcode Scanner to Easily Buy Anything from Your Phone How To Migrate Windows 7 to a Solid State Drive Follow How-To Geek on Google+

    Read the article

  • I have permanent connections to Canonical servers, what are they for?

    - by Dan Dman
    After the recent upgrade to 12, I notice permanent connections to Canonical servers. Running netstat -tp gives: Foreign Address State PID/Program name mulberry.canonical:http CLOSE_WAIT 6537/ubuntu-geoip-p alkes.canonical.co:http CLOSE_WAIT 6667/python alkes.canonical.co:http CLOSE_WAIT 6667/python Why are there permanent connections, and how can I stop this behavior? And if this is intentional, who is responsible? I would like to understand why this was done, because to me it seems like a bad idea.
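
    Before deciding whether to stop it, it may help to see exactly what owns those sockets. The commands below are ordinary inspection tools; the package name in the last line is an assumption based on the ubuntu-geoip-p process shown in the netstat output above, so verify it with dpkg -S before removing anything:

    ps -fp 6537 6667                     # full command lines behind the PIDs netstat reported
    dpkg -S ubuntu-geoip                 # confirm which package ships the geoip helper
    sudo apt-get remove ubuntu-geoip     # optional: drop it if you don't use the automatic time zone feature

    The python PIDs belong to a separate process and are worth identifying with ps before taking any action on them.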

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here’s a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception - in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate. There are three reasons for this. Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It’s better to terminate the entire application than continue and have the application perform actions based on a broken state, for example take customer orders, or write corrupt files to disk. Secondly, during software development, it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its developers. Subtle errors are easy to miss if you are not actually doing real work using the application; loud errors are obvious. Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could and probably will introduce critical errors. This applies to more than just exceptions causing threads to exit: any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way. This is all simple as long as the call stack only contains your code, but when your functions start to be called by third party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples. 1. Windows forms drag drop events Usually if you throw an exception from a winforms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior - the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop events are different - throw an exception from one of these and it will just be silently swallowed with no explanation. By the way, it's not just drag and drop events. Timer events do it too. You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post. 2. SSMS integration for SQL Tab Magic A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering what documents are opened and closed. 
I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them. Handling exceptions Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried 3 options: log them, alert the user, and automatically send them home. There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog, which has a compatible interface, with a greater emphasis on fluent rather than XML configuration. Alerting the user serves two purposes. Firstly, it means they understand their action has failed, so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix it. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars. This leads us to the third option, automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so. We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces including the values of all local variables and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system. The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings on how our software works, and overall is a key piece in delivering higher quality software. However, it is no substitute for having motivated, cunning testers in the building - and we're looking to hire more of those too. If you found this post interesting you should follow me on Twitter.

    Read the article

  • Where could one go to suggest new packages for Ubuntu

    - by Luis Alvarado
    Sometimes one finds a package that looks very good and can help in some ways, but the package is not found in the Ubuntu repositories or on any PPA site. Apart from creating a PPA, how and where can one suggest a package to be included in Ubuntu in a present or future version? Some quick examples might be: Komodo (another alternative to Notepad++): http://www.liberiangeek.net/2012/01/komodo-edit-the-best-notepad-alternative-in-ubuntu/ mkahawa (Cafe Management System): http://sourceforge.net/projects/mkahawa/ And many others that could help make Ubuntu more useful to its users.

    Read the article

  • Asus N550JV audio problem: no sound from notebook's speakers

    - by skywalker
    Ubuntu 13.10. The problem is: the internal speakers don't work. I have no problem when I'm using the headphones. There is no hardware issue since in windows 8 everything works perfectly(external subwoofer included). I'm trying to modify /etc/modprobe.d/alsa-base.conf but I can't find the correct model to put into: options snd-hda-intel model= The file HD-Audio-Models.txt doesn't contain the model for ALC668. Some info: :~sudo aplay -l **** List of PLAYBACK Hardware Devices **** card 0: MID [HDA Intel MID], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: MID [HDA Intel MID], device 7: HDMI 1 [HDMI 1] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: MID [HDA Intel MID], device 8: HDMI 2 [HDMI 2] Subdevices: 1/1 Subdevice #0: subdevice #0 card 1: PCH [HDA Intel PCH], device 0: ALC668 Analog [ALC668 Analog] Subdevices: 0/1 Subdevice #0: subdevice #0 :~$ sudo lspci -v | grep -A7 -i "audio" 00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06) Subsystem: Intel Corporation Device 2010 Flags: bus master, fast devsel, latency 0, IRQ 52 Memory at f7a14000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00 Kernel driver in use: snd_hda_intel -- 00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04) Subsystem: ASUSTeK Computer Inc. Device 11cd Flags: bus master, fast devsel, latency 0, IRQ 53 Memory at f7a10000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [100] Virtual Channel PS info :~$ amixer -c 0 Simple mixer control 'IEC958',0 Capabilities: pswitch pswitch-joined Playback channels: Mono Mono: Playback [on] Simple mixer control 'IEC958',1 Capabilities: pswitch pswitch-joined Playback channels: Mono Mono: Playback [on] Simple mixer control 'IEC958',2 Capabilities: pswitch pswitch-joined Playback channels: Mono Mono: Playback [on] :~$ pacmd dump-volumes Welcome to PulseAudio! Use "help" for usage information. Sink 0: reference = 0: 76% 1: 76%, real = 0: 76% 1: 76%, soft = 0: 100% 1: 100%, current_hw = 0: 76% 1: 76%, save = yes Input 8: volume = 0: 100% 1: 100%, reference_ratio = 0: 100% 1: 100%, real_ratio = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, volume_factor = 0: 100% 1: 100%, volume_factor_sink = 0: 100% 1: 100%, save = no Source 0: reference = 0: 100% 1: 100%, real = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, current_hw = 0: 100% 1: 100%, save = no Source 1: reference = 0: 16% 1: 16%, real = 0: 16% 1: 16%, soft = 0: 100% 1: 100%, current_hw = 0: 16% 1: 16%, save = yes
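
    If you still want to experiment with a model override even though ALC668 has no entry in HD-Audio-Models.txt, a rough, easily reversible way to try a candidate is sketched below; asus-mode4 is only a hypothetical guess borrowed from the related ALC662/663 family, not a documented value for this codec:

    echo 'options snd-hda-intel model=asus-mode4' | sudo tee -a /etc/modprobe.d/alsa-base.conf
    sudo alsa force-reload    # or reboot, so snd-hda-intel re-reads the option
    # if it doesn't help, remove the test line again:
    sudo sed -i '/snd-hda-intel model=asus-mode4/d' /etc/modprobe.d/alsa-base.conf

    For a codec without a documented model string, filing a bug with your alsa-info output attached is probably the more reliable route.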

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview. About Azure Connect Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility on how you want to build your virtual network across various environments. So Azure Connect makes sense when you want to: Connect machines located in different cloud providers Connect on-premise machines running in different locations Connect Azure VMs with on-premise (if you do not have a VPN device, or if your device is not supported) Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or in other cloud providers The diagram below shows you a high level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself. The other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal. High Level Network Topology With Azure Connect Azure Connect Agent Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that’s because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed. To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. 
You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent. Activating Roles for Azure Connect As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx. Firewall Rules Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx. CA Certificates You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more. Groups To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows you that I created a group called LocalGroup and assigned two machines (both on-premise) to that group. Groups and Computers in Azure Connect As I mentioned previously you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows you the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication. Defining a Group in Azure Connect Testing the Connection Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows you that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the time is in the hundreds of milliseconds on average. That is to be expected because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests. Sending a PING Request Through The Relay Conclusion As you can see, creating a network topology between machines using the Azure Connect service is simple. 
It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx. About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specializing in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net. Special Thanks I would like to thank those who helped me figure out how Azure Connect works: Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/ Michael Wood - Http://www.mvwood.com Glenn Block - http://www.codebetter.com/glennblock Yves Goeleven - http://cloudshaper.wordpress.com/ Sandrino Di Mattia - http://fabriccontroller.net/ Mike Martin - http://techmike2kx.wordpress.com

    Read the article

  • Unable to access other Volume in Vaio E series

    - by Rahul Ravi Kumar Shah
    Error mounting /dev/sda6 at /media/ravi/New Volume: mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda6" "/media/ravi/New Volume" Exited with: non-zero exit status 14: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda6': Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.
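
    The error text itself suggests the two usual ways out: do a full Windows shutdown (fast startup/hibernation leaves the NTFS volume marked dirty), or mount it read-only for now. A hedged sketch of the read-only workaround, plus the ntfs-3g utility that clears the dirty flag if you are sure you can discard the metadata Windows cached:

    sudo mkdir -p "/media/ravi/New Volume"
    sudo mount -t ntfs-3g -o ro /dev/sda6 "/media/ravi/New Volume"
    # or, accepting that the metadata Windows cached will be dropped:
    sudo ntfsfix /dev/sda6

    Turning off Fast Startup in Windows (Power Options > Choose what the power buttons do) and shutting down fully is the cleaner long-term fix.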

    Read the article

  • Webcast: What's New in Microsoft Visual Studio 2010, organized by SOS Developers on Tuesday, June 22

    Webcast: What's New in Microsoft Visual Studio 2010. Comsoft / SOS Developer's invites you to discover what's new in Microsoft Visual Studio 2010 on Tuesday, June 22, 2010, from 11:00 to 12:00, through a web presentation. Microsoft will walk you through the evolution of the new Microsoft Visual Studio 2010 release with an expert at your disposal. You will have the opportunity to get familiar with the testing features: IntelliTrace, Lab Management, and more. Microsoft Visual Studio 2010 also invites you to discover Team Foundation Server and collaboration across your development teams, including Parallel, Cloud and SharePoint development.

    Read the article

  • Learn more about SPARC by listening to our newly recorded podcasts

    - by Cinzia Mascanzoni
    Please listen to our newly recorded series of four podcasts focused on SPARC. The topics are: How SPARC T4 Servers Open New Opportunities SPARC Roadmap and SPARC T4 Architecture Highlights SPARC T4 For Installed Base Refresh and Consolidation SPARC T4 – How Does it Stack up Against the Competition? Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant, are your hosts. The intent is to continue to help you understand how to position and sell SPARC/T4 into your customer architecture.Details on how to access these podcasts can be found here.

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 13-19, 2013

    - by OTN ArchBeat
    The list below represents that Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 13-19, 2013, as determined by the clicks, likes, and other activities among the 4,425 fans of that page. Going Mobile with ADF – Implementing Data Caching and Syncing for Working Offline | Steven Davelaar Oracle Fusion Middleware A-Team solution architect Steven Davelaar takes you on a deep dive into how to use ADF Mobile to create an on-device application that supports working in offline mode. OOW 2013 Summary for Fusion Middleware Architects & Administrators | Simon Haslam Oracle ACE Director Simon Haslam shares a very thorough and detailed summary of the most interesting news coming out of Oracle OpenWorld 2013 for Fusion Middleware architects and administrators. Coherence Special Interest Group (SIG) – Sydney, October 24th If you're in the neighborhood... The Coherence Special Interest Group (SIG) in Sydney, Australia will be held on Thursday October 24th at the Park Hyatt Sydney, in The Rocks, between 9am and 5pm. The event will include presentations from customers, partners, and Coherence engineering team members and product managers. Click the link for more info. Free eBook: Oracle Multitenant for Dummies Oracle Multitenant for Dummies is a new e-book that provides a clear overview of the Oracle Database 12c multitenant architecture. It's free (registration required). Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration - Part 1: Introduction | Michael Rainey Michael Rainey launches a series of posts that guide you through "the architecture and setup for using GoldenGate with OBIA 11.1.1.7.1." Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema Another detailed technical post from the always prolific Oracle ACE Director Lucas Jellema. Webgate Reverse Proxy Farm | Vinay Kalra Vinay Kalra's blog post discusses architecture and recommendations for centralizing Webgate deployments onto a server farm. Free Poster: Adaptive Case Management in Practice Thanks to Masons of SOA member Danilo Schmiedel for providing a hi-res copy of the Adaptive Case Management poster, now available for download from the OTN ArchBeat Blog. Should your team use a framework? | Sten Vesterli "Some developers have an aversion to frameworks, feeling that it will be faster to just write everything themselves," observes Oracle ACE Director Sten Vesterli. He explains why that's a very bad idea in this short post. Integrating Custom BPM Worklist into WebCenter Portal | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis shares a sample application configured to run a custom BPM Worklist, and shares steps describing how to configure and access it from the WebCenter Portal. Thought for the Day "Morning comes whether you set the alarm or not." — Ursula K. Le Guin (Born October 21, 1929) Source: brainyquote.com

    Read the article

  • John Hitchcock of Pace Describes the Oracle Agile PLM Customer Experience

    John Hitchcock, Senior Manager of Configuration Management at Pace (formerly 2Wire, Inc.), sat down for an interview during Oracle's Innovation Summit with Kerrie Foy, Manager of PLM Product Marketing at Oracle. Learn why his organization upgraded to the latest version of Agile and expanded the footprint to achieve impressive savings and productivity gains across the global, networked product value-chain.

    Read the article

  • PMI South Florida Job Fair 2010

    - by Sam Abraham
    The South Florida Chapter of the Project Management Institute is planning a Job Fair slated for September 2010. This year has seen a significant improvement in the job market with many surveyed companies indicating their intention to add temporary or permanent staff to their workforce in the near future.   The Job Fair Initiative fits well within the chapter's message and goal for this year: "Exercising Social Responsibility" - Our responsibility as PMI volunteers at all levels towards our members and surrounding community.   Our Free-to-members Annual Job Fair will play an important role in connecting Recruiters, Exhibitors and Job Seekers together thereby helping hiring companies gain access to a large talent pool at an affordable cost (Totally free in certain cases, details to be revealed once finalized) while giving job seekers centralized access to many reputable hiring companies in the South Florida area.   My involvement in the 2010 Job Fair started with a good conversation I had with Bernie Saenz, President and CEO of the South Florida PMI Chapter, in a networking event a few months ago. I had approached him with a few ideas in line with his goal to serve the community and our members given today's difficult economic climate. Bernie indicated that the Project Manager for the 2010 Job Fair had just been appointed and invited me to participate in this important initiative as a member of her team. I simply couldn't resist and gladly accepted the invitation.   I chose an initial role as Recruiter Relations Lead which entails developing documentation and timelines for our project plan with regards to Recruiter Engagement as well as reaching out to recruiting companies to meet target representation at the Job Fair.   Being heavily involved in the local Technical community has afforded me the privilege of coming in contact with many reputable Technology Recruiting companies. (As a matter of fact, I already have 2 interested very reputable IT recruiting firms willing to join us at the fair)   The excitement for me however will be finding and reaching out to recruiters in areas of Project Management and Leadership that I might not have been exposed to before including Finance, Healthcare and Marketing, to name a few.   Keep an eye in the upcoming few weeks for official announcements on the PMI South Florida Job Fair 2010.   Environment.Exit(0);   -Sam Abraham Site Director - West Palm Beach .Net User Group Recruiter Relations Lead - PMI South Florida Job Fair 2010 Project Lead - Mentoring Programs- PMI South Florida

    Read the article

  • Is it a bug or a task when something doesn't work, yet, in development process

    - by Patkos Csaba
    We usually have this dilemma in our team. Sometimes, in order to implement a task or a story, we find out that the system must be in a specific state. For example, a specific system configuration has to be made beforehand. The task/story can be completed and works as specified with the proper configuration in place. Note that the configuration is not directly related to the task. Next, we have to create a new ... ??? ... something for the process of generating that configuration file. This is where the problems appear. Some say that it is a bug, others say it is a task or an extra feature. So, where is the limit between bugs and tasks in the development phase? Should we even consider something a bug if all the tasks are working as stated in their definitions? Can a thing be considered a bug because one compares it to the current (unstable) state of the system? Short example: a feature requires configuring a communication service for a specific operation. In the process of the implementation the team discovers that the service requires the hostnames of the peers to be resolvable to an IP address. The team adds the hostnames to the DNS server (or hosts files) and continues implementing the required feature. After the initial feature is working, a question is raised: should the sysadmin configure the DNS or hosts file, or should our application do it automatically? An automatic solution is possible, so a decision is made to implement it. ... and here the discussions start ... is this a bug or an extra feature/task? PS: I know that I mixed feature / task / story in the question. It is intentional. I am interested in separating bugs from the rest; it doesn't matter what the rest means in a particular case.

    Read the article

  • Help, I can't reference my vars!

    - by SystemNetworks
    I have a sub-class (let's call it sub) and it contains all the functions of an object in my game. In my main class (let's call it main), I connect my sub to main (example: Code: s = new sub();). Then I call my sub's function in the update method. Code: s.myFunc(); Because in my sub, I have booleans, integers, floats and more. The problem is that I don't want to connect my sub class back to my main class just to use main's ints, booleans and the rest; if I connect it, it will cause a stack overflow. This is what I put in my sub: Code: package javagame; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.Input; import org.newdawn.slick.state.StateBasedGame; public class Store { public Integer wood; public Float probePositionX; public Float probePositionY; public Boolean StoreOn; public Boolean darkBought; public Integer money; public Integer darkEnergy; public Integer lifeLeft; public Integer powerLeft; public void darkStores(GameContainer gc, StateBasedGame sbg, GameContainer gc2) { Input input1 = gc.getInput(); // Player needs wood (200) to enter; otherwise there will be an error if(wood>=200) { // Enter the store! if(input1.isKeyDown(Input.KEY_Q)) { // Player must be at these coordinates! if((probePositionX>393 && probePositionX<555) && (probePositionY< 271 && probePositionY>171)) { // The store is on StoreOn=true; } } } } } In my main (update function) I put: Code: s.darkBought = darkBought; s.darkEnergy = darkEnergy; s.lifeLeft = lifeLeft; s.money = money; s.powerLeft = powerLeft; s.probePositionX = probePositionX; s.probePositionY = probePositionY; s.StoreOn = StoreOn; s.wood = wood; s.darkStores(gc, sbg, gc); The problem is that when I go to the place and press Q, nothing shows up. It should show another image. Is there anything wrong?

    Read the article

  • Evaluating Scrum - is it okay to have people with multiple roles in a Scrum team?

    - by Wayne M
    I'm evaluating some Agile-style methodologies for possible introduction to my team. With Scrum, is it allowable to have the same person perform multiple roles? We have a small team of four developers and a web designer; we don't really have a lead (I fulfill this role), QA testers or business analysts, and all of our development tasks come from the CIO. Automated testing is seen as a total waste of time, and everything focuses on speed and not quality. What will happen is the CIO will come up with a development task (whether a feature or a bug) and give it to a developer (not to the whole team, to an individual, often in private or out of the blue) who is then expected to get it completed. The CIO doesn't gather requirements beyond the initial idea (and this has bitten us before as we'll implement something only to find out that none of the end users can use the feature, because they weren't consulted or even informed about it before we developed it, and in a panic we'll be told to revert the change) but requires say in/approval of everything that we do. First things first, is a Scrum style something to consider to introduce some standards and practices? From reading, Scrum seems to rely on a bit more trust and communication and focuses more on project management than on development, which is something we are completely devoid of as we don't have any semblance of project management at present. Second, if it can work is it unreasonable for someone, let's say myself, to act as both ScrumMaster and a developer? Or for a developer to also be the Product Owner (although chances are this will be the CIO, who isn't a developer)? I realize the Scrum Master and the Product Owner should be different people but at the same time I don't think we have anyone who has the qualities of a Product Owner (chances are it would turn into a "I need all these stories, I don't care how but get it done" type of deal and/or any freeze would be unfrozen on a whim). It seems to me that I might need to pick and choose pieces of Scrum/XP/Lean to compensate for how things are done currently, as it's highly unlikely that the mentality can be changed; for instance Pair Programming would never fly (seen as a waste, you get half the tasks done if you need two people for everything), TDD would be a hard sell, but short cycles would be welcomed.

    Read the article

  • Ubuntu Install 11.10 doesn't recognize Windows 7 installation with new HDD

    - by arlendo
    Replaced my crashed HDD with a Seagate 2TB Sata (bought from a company who pulled it from a working computer, OS unknown) and did a fresh install of Windows 7. Windows shows 100MB boot partition (bootable NTFS) and 200GB Windows partition (NTFS), the rest is unallocated. Win7 Disk Management says the partitioning type is Master Boot Record. Win7 boots and runs fine. Ubuntu 11.10 Install procedes to Allocate Drive Space screen and should say This computer currently has Windows 7 on it. What would you like to do? Instead, it says something like Install doesn't detect any existing OS on this computer. When I click on Something else, the partition table shows only the unallocated space of 1.8TB. Ubuntu Disk Utility says Partitioning: Master Boot Record, but GParted Live says Partition Table: gpt. It was my original intention to have the Windows boot partition and application partition, then install Ubuntu 11.10 using boot, root, swap, and home partitions, and maybe another partition just for data (mostly photos). Currently, I would be happy if I could just get Ubuntu installed along with Win7. I am aware of the MBR limits of 3 Primary partitions and 1 Extended partition. I suspect that my new HDD is partitioned for GPT and that is why Ubuntu can't see the Win7 installation. Am I on the right track? I was going to use Windows Disk Management to convert GPT to MBR but I only have the one drive on my AMD-64 mini-computer and it says I have to empty the drive of all partitions before I can access the Convert command. And I can't find any bootable software that would allow me to do that conversion. Here is the result of sudo fdisk -l: ubuntu@ubuntu:~$ sudo fdisk -l WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 2000.4 GB, 2000398934016 bytes 224 heads, 19 sectors/track, 918004 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xd4a68c18 Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 419637247 209715200 7 HPFS/NTFS/exFAT ubuntu@ubuntu:~$ Keep in mind that I'm a definite newbie to screwing around with the inner workings of Ubuntu. I previously had Ubuntu 10.04 running with Vista and I don't remember even having to partition anything that wasn't automatic in the install. Thanks for taking a look here. My Win7 is running fine but I miss my Ubuntu.
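
    Your suspicion looks right: fdisk's GPT warning plus GParted reporting a gpt partition table suggests the drive still carries leftover GPT data from its previous life, while Windows wrote a normal MBR, and the 11.10 installer is reading the stale GPT instead of the MBR. One hedged way to clean this up from the live CD, without touching the Windows partitions, is FixParts from the gdisk package (back up first and read the prompts carefully, since any partition-table tool can destroy data if misused):

    sudo apt-get install gdisk
    sudo fixparts /dev/sda
    # fixparts should report the stray GPT signatures and offer to delete them;
    # check that sda1 and sda2 are still listed before writing changes with 'w'

    Afterwards, sudo parted -l should report the partition table as msdos, and restarting the installer should let it detect Windows 7.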

    Read the article
