Search Results

Search found 3052 results on 123 pages for 'jawahar sync'.


  • Technology vs. Antiquated Methods

    - by AreYouSerious
    So here I am talking with my program lead about technology, and about how my father, the VP of a major company, still doesn't have a BlackBerry or a smartphone. I think it's funny. Most people would say it's a generational thing: because he's older, he doesn't accept technology. I have trouble swallowing that, because this is the same man who bought a satellite radio for his car and made sure the printer for the house was networked so that his and my mom's laptops could print wirelessly from the living room. I think it has more to do with necessity, and partially with financial responsibility. My father is very financially conscientious. Think about it yourself: you pay for internet at your home, you have internet access at your office, but if you get a smartphone you're going to pay almost the same amount again just for that access. A lot of people take it as just another fixed cost... I'm one of those. I don't even think about it as I check my Facebook from the bus, the train, or even while sitting in traffic. The convenience of having a connection everywhere outweighs the financially responsible person screaming in the back of my mind. This conversation led us to another venue of discussion: what happens when the power dies? If you left your charger at home, or your phone or nav unit just stops working, are you going to be able to continue on as you did when it was working? Take the nav as an example: if it stops working, how many of you know how to read a map and navigate? Can you even find where you are on a map using the cross streets you're stopped at? This is a skill that, unfortunately, is overlooked these days in child rearing. Most people don't see the value, while some others can't do it themselves, so how can they teach their offspring? Take another example: what if your phone gets lost, or stolen, or you drive over it? Do you have the numbers in it memorized? Are they recorded somewhere? I know that if it weren't for Google Sync I wouldn't have them backed up, not sufficiently anyway. And what good does that do if you're in Timbuktu and your phone dies? Think you can get on the internet to look up those numbers? Don't get me wrong: I'm the first to see the value in technology, and I'm willing to pay extra to have the newest thing right now instead of waiting for prices to come down. But let me tell you what: should I ever procreate, my offspring will be required to learn how to do something manually before I let them use the technology that does it for them. Food for thought? Let everyone else know what you think... just sayin'

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features available only in Enterprise Edition. And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's something like four times as expensive! It almost makes you want to go with Oracle. That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions. Alas… they don't… officially, anyway. Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement:

    IF CAST(SERVERPROPERTY('Edition') AS nvarchar(max)) NOT LIKE '%e%e%e% Edition%'
        PRINT 'Lame'
    ELSE
        PRINT 'Cool'

    If that check returns true, the statement fails. (The PRINT statements are just placeholders.) Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror. Surprisingly, it's not actually implemented in SQL Server! Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard and symbolic, so that you can create two entries for the same file in different directories and/or under different names. You can create them using the MKLINK command in a command prompt:

    mklink /D D:\SkyDrive\Data D:\Data
    mklink /D D:\SkyDrive\Log D:\Log

    This creates symbolic links from my data and log folders to my SkyDrive folder. Any file saved in either location will instantly appear in the other. And since my SkyDrive is automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth, of course). So what does this have to do with database mirroring? Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link. Although it doesn't actually use SkyDrive, it performs the same function. So in effect, the following statement:

    ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022'

    is turned into:

    mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$"

    The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers. I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality.
    To wrap this up, all you have to do is the following:

    1. Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples).
    2. On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data
    3. On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data
    4. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log).

    Voila! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either. So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline. It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • Where's My Windows Azure Subscriptions

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/03/wheres-my-windows-azure-subscriptions.aspx Yesterday when I opened the Windows Azure management portal I found some resources were missing. I checked the websites for those missing cloud services and they were still live. Then I checked my billing history but didn't find any problem. When I went back to the portal I found that all of those resources were under my MSDN subscription. So I wondered whether this was related to the recent Windows Azure platform update. This feature, named "Enterprise Management", provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution. By default, every existing Windows Azure account has a default Windows Azure Active Directory (a.k.a. WAAD) associated with it. In the address bar I can find the default login WAAD of my account, which is "microsoft.onmicrosoft.com". To change the WAAD we can click "subscriptions" at the top of the management portal, select the active directory from the "filter by directory" list, select the subscription we want to see, then press "apply". As you can see, the subscription under my MSDN was located in a WAAD named "beijingtelecom.onmicrosoft.com". This is because when Microsoft rolled out this feature, they checked whether you had an existing WAAD in your subscription. If not, they created a new one; otherwise they used your WAAD and moved your subscription into that directory. Since I created a WAAD for testing several months ago, this subscription was moved to that directory. Changing a subscription's directory is simple. First we need to create a new WAAD with the name we prefer. Below, I created a new directory named "shaunxu". Then select "settings" from the left navigation bar, select the subscription we want to change and click "edit directory". You don't have permission to edit/change the directory unless your Microsoft Account is the service administrator of the subscription. Then in the popup window, select the WAAD you want to change to and press "next". All done. You need to log off and log in to the portal again, and your subscription will be in the directory you wanted. After these steps I can view my resources in this subscription. Summary: In this post I described how to move subscriptions into a new directory. With this new feature we can manage our Windows Azure subscriptions more flexibly. But there are some things we need to keep in mind. 1. Only the service administrator is able to move a subscription. 2. Currently there's no way for us to see our Windows Azure services in more than one directory at the same time. In my case, I can see my services under "shaunxu.onmicrosoft.com" and I must change the filter directory from the "subscriptions" menu to see the other services under "microsoft.onmicrosoft.com". 3. Currently we cannot delete an existing WAAD. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or that modify each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now minus 1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to transaction marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in the different databases. Let’s see how this works with an example. The code comments say what’s going on.

    USE master
    GO
    CREATE DATABASE TestTxMark1
    GO
    USE TestTxMark1
    GO
    CREATE TABLE TestTable1 (ID INT, VALUE UNIQUEIDENTIFIER)
    -- insert some data into the table so we can have a starting point
    INSERT INTO TestTable1
    SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
    FROM master..spt_values
    ORDER BY RN
    SELECT * FROM TestTable1
    GO
    -- TAKE A FULL BACKUP of the database
    BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
    GO

    USE master
    GO
    CREATE DATABASE TestTxMark2
    GO
    USE TestTxMark2
    GO
    CREATE TABLE TestTable2 (ID INT, VALUE UNIQUEIDENTIFIER)
    -- insert some data into the table so we can have a starting point
    INSERT INTO TestTable2
    SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
    FROM master..spt_values
    ORDER BY RN
    SELECT * FROM TestTable2
    GO
    -- TAKE A FULL BACKUP of our database
    BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
    GO

    -- start a marked transaction that modifies both databases
    BEGIN TRAN TxDb WITH MARK
        -- update values from NULL to a random value
        UPDATE TestTable1 SET VALUE = NEWID();
        -- update the first 100 values from a random value
        -- to NULL in a different DB
        UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;
    COMMIT
    GO

    -- some time goes by here
    -- with various database activity...

    -- We see two entries for marks in each database.
    -- This is just informational and has no bearing on the restore itself.
    SELECT * FROM msdb..logmarkhistory

    USE master
    GO
    -- create a log backup to restore to the mark point
    BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
    GO
    -- drop the database so we can restore it back
    DROP DATABASE TestTxMark1
    GO

    USE master
    GO
    -- create a log backup to restore to the mark point
    BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
    GO
    -- drop the database so we can restore it back
    DROP DATABASE TestTxMark2
    GO

    -- RESTORE THE DATABASE BACK TO BEFORE OUR TRANSACTION
    -- restore the full backup
    RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
    -- restore the log backup to the transaction mark
    RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn'
        WITH RECOVERY,
        -- recover to the state before the transaction
        STOPBEFOREMARK = 'TxDb';
        -- recover to the state after the transaction
        -- STOPATMARK = 'TxDb';
    GO

    -- RESTORE THE DATABASE BACK TO BEFORE OUR TRANSACTION
    -- restore the full backup
    RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
    -- restore the log backup to the transaction mark
    RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn'
        WITH RECOVERY,
        -- recover to the state before the transaction
        STOPBEFOREMARK = 'TxDb';
        -- recover to the state after the transaction
        -- STOPATMARK = 'TxDb';
    GO

    USE TestTxMark1
    -- we restored to a time before the transaction,
    -- so we have NULL values in our table
    SELECT * FROM TestTable1

    USE TestTxMark2
    -- we restored to a time before the transaction,
    -- so we DON'T have NULL values in our table
    SELECT * FROM TestTable2

    Transaction marks can be used as a crude sync mechanism for cross-database operations. With them we can mark our databases with a common “restore to” point, so we know we have a valid state between all databases to restore to.

    Read the article

  • Oracle OpenWorld Update: Demo Pods and Hands-on Labs

    - by Doug Reid
    Less than one week away until the start of Oracle OpenWorld 2012, and the Data Integration Solutions team is ready to go! We have an exciting lineup for you this year, which we have summarized in the Oracle OpenWorld Focus on Data Integration Solutions document. In past posts we have discussed session themes and our customer panel, but today I would like to summarize the Hands-on Labs and Demo Pods that we have available for attendees. For Oracle GoldenGate Hands-on Labs, we are running two labs this year. Deep Dive into Oracle GoldenGate - Thursday, October 4th at 11:15 AM in the Marriott Marquis Salon 1/2: Oracle GoldenGate provides real-time log-based change data capture and delivery between heterogeneous systems. It enables cost-effective, low-impact, real-time data integration and continuous availability solutions. This session covers Oracle GoldenGate 11g's internal product architecture and includes a hands-on lab that covers configuration examples for target database instantiation and real-time change data capture and delivery. The participants will configure Oracle GoldenGate to instantiate a secondary database that can be used for disaster recovery or a reporting instance. Come learn how easy it is to use and how this can be a very valuable and easy technology solution for your organization. Introduction to Oracle GoldenGate Veridata - Wednesday, October 3rd at 10:15 AM in the Marriott Marquis Salon 1/2: Oracle GoldenGate Veridata compares one set of data with another and identifies data that is out of synchronization. In this hands-on lab, you will be introduced to the key features of this product. Using the Oracle GoldenGate Veridata Web client, you will have the opportunity to configure comparison objects and rules, initiate a comparison, review the status and output of a comparison, and review out-of-sync data. As a bonus this year, we have recorded the labs and made them available on youtube.com/oraclegoldengate. These will be available the day of the labs. Our demo pods are an opportunity for attendees to see our products, but more so to meet the product management and development teams. I would like to point out that we have two Oracle GoldenGate 11gR2 demo pods, one in the database camp and the other in the middleware camp. The one in the middleware camp will cover all platforms, while the one in the database camp will focus on the Oracle platform. The other two I would like to point out are the Monitoring Oracle GoldenGate and the Oracle Enterprise Manager demo pods; both of these pods will focus on methods to monitor GoldenGate, but the OEM demo pod will have a specific focus on the Oracle GoldenGate Management Pack plug-in for OEM. Below is a list of our demo pods and their locations.
    Monitoring Oracle GoldenGate for End-to-End Visibility: Moscone South, Right - S-241
    Oracle Data Integrator and Oracle GoldenGate for Oracle Applications: Moscone South, Right - S-240
    Oracle GoldenGate 11gR2 New Features: Moscone South, Right - S-239
    Oracle GoldenGate 11gR2: Real-Time, Transactional Database Replication: Moscone South, Left - S-027
    Oracle GoldenGate Veridata and Adapters: Moscone South, Right - S-242
    Oracle Enterprise Manager: Moscone South, Left - S-040

    Keep tuned to our blog during the show for news and highlights from the Data Integration Solutions team. See you there.

    Read the article

  • Impressions of my ASUS eee slate EP121 - Dual core 4GB, 64GB SSD

    - by tonyrogerson
    This thing is lovely. It has a very nice Bluetooth keyboard with good feedback on each keypress. There is no mouse, but you can use the stylus or get yourself a Bluetooth mouse; me, I've opted for a Microsoft Arc mouse, which is a delight to use. The USB doors are a pain to open for the first time if, like me, you don't have any fingernails. It came as a surprise that the slate shows four processors (dual core with multi-threading); I didn't really look at the processor, as I was more interested in the amount of memory and the SSD. You don't get the full 4GB even with the 64-bit version of Windows 7 installed (which I immediately upgraded to Ultimate through my MSDN subscription). The box is extremely responsive - extremely. It loads Winword in literally a second. I've got Office 2010 and OneNote 2010 on there now. One problem: on applying all 43 Windows updates since the upgrade, the machine is still sat on step 3 of 3 of the "configuring updates" start-up screen after about an hour; you can't turn this machine off without using a paper clip to reset it, and as I have just found, you need a paper clip :). Installing Windows 7 SP1 was effortless. One of the first things I did on it was to reduce the font size; by default it's set at 125%, but my eyesight is OK :) so I've set that back down. Amazon Kindle for the PC works really well, with plenty of text on the screen when viewed in portrait. The case it comes with also allows the slate to stand up in various positions - portrait, horizontal - and seems stable enough. The wireless works well and seems to have a better signal than my other two laptops, which is good news. The gadget passed the pose test at work :). I use offline files to keep a copy of all my work stuff locally. I'm not sure what the issue is - well, it's probably my server - but whenever I try to sync it runs for a couple of minutes then fails with "network name no longer contactable"; funnily enough it's fine from my big laptop, so I can only guess this may be a driver-type issue on the EP121 itself - very odd and very annoying. I do a lot of presenting and need to plug into a VGA projector, because at most sites that's all that is offered. The EP121 has a mini-HDMI output, which is great except for this scenario: HDMI is digital, VGA is analogue, and you will struggle to find a cost-effective converter. I found HDFury and also a device HP make; however, a better solution appears to be a USB graphics adapter, for instance the one I've ordered, the ClimaxDigital USB 2.0 to DVI, VGA or HDMI Adaptor, which gives everything I need - VGA and DVI output at a great resolution as well. OK, so fingers crossed, because I'm presenting next Wednesday in Edinburgh and not taking my 300kg Lenovo W700 (I'm sure my back just sighed in relief) - it certainly works really well on my LED TV, and the install was simple - it just works!
    One of the several reasons for buying this piece of kit was to use it on my LED TV to remote into my main machine to check stuff whilst sat in my living room, and also to watch webcasts and lecture videos in comfort away from my office. Because of the wireless speed limitations I'm opting for a USB network adapter from Belkin - that will also allow me to take advantage of my home gigabit network. There are only 2 USB ports on the slate, so I'm going to knock up a hub so connecting it is straightforward and simple. I'm also going to purchase a second power supply so I don't have to faff about with that either. I now have the developer x64 edition of SQL Server 2008 R2 on it - yes, everything :) - with about 16GB left to play with on the machine, but that will be fine. I'll put AdventureWorks on there so I can play and demo stuff, which is all I'm after from this; my development machine is significantly more powerful and meets my storage needs too. Travel test this weekend and next week: I'm in Dundee for my final exam for the master's degree.

    Read the article

  • Can't connect to VPN on Ubuntu 12.04

    - by 12rad
    I'm having a lot of trouble connecting to VPN. This used to work on my machine, but I recently did an update and it stopped working. I'm not sure what the problem is. My question is: how do I debug this? I'm not able to narrow it down to a specific problem. This is what I get when I tail the syslog. Would appreciate any help!

    Nov 6 23:42:52 meera NetworkManager[1137]: <info> Starting VPN service 'pptp'...
    Nov 6 23:42:52 meera NetworkManager[1137]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 6132
    Nov 6 23:42:52 meera NetworkManager[1137]: <info> VPN service 'pptp' appeared; activating connections
    Nov 6 23:42:52 meera NetworkManager[1137]: <info> VPN plugin state changed: starting (3)
    Nov 6 23:42:52 meera NetworkManager[1137]: <info> VPN connection 'NAME VPN' (Connect) reply received.
    Nov 6 23:42:52 meera pppd[6136]: Plugin /usr/lib/pppd/2.4.5/nm-pptp-pppd-plugin.so loaded.
    Nov 6 23:42:52 meera pppd[6136]: pppd 2.4.5 started by root, uid 0
    Nov 6 23:42:52 meera chat[6139]: timeout set to 15 seconds
    Nov 6 23:42:52 meera chat[6139]: abort on (NO CARRIER)
    Nov 6 23:42:52 meera chat[6139]: abort on (NO DIALTONE)
    Nov 6 23:42:52 meera chat[6139]: abort on (ERROR)
    Nov 6 23:42:52 meera chat[6139]: abort on (NO ANSWER)
    Nov 6 23:42:52 meera chat[6139]: abort on (BUSY)
    Nov 6 23:42:52 meera chat[6139]: abort on (Username/Password Incorrect)
    Nov 6 23:42:52 meera chat[6139]: send (AT^M)
    Nov 6 23:42:52 meera pptp[6138]: nm-pptp-service-6132 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
    Nov 6 23:42:52 meera chat[6139]: expect (OK)
    Nov 6 23:42:52 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
    Nov 6 23:42:53 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
    Nov 6 23:42:53 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
    Nov 6 23:42:53 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
    Nov 6 23:42:54 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
    Nov 6 23:42:54 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 13077).
    Nov 6 23:42:54 meera pptp[6138]: nm-pptp-service-6132 warn[decaps_hdlc:pptp_gre.c:231]: The ppp mode is synchronous, yet no pptp --sync option is specified!
    Nov 6 23:43:07 meera chat[6139]: alarm
    Nov 6 23:43:07 meera chat[6139]: Failed
    Nov 6 23:43:07 meera pppd[6136]: Script chat -v -f /etc/ppp/chat-ztisp finished (pid 6139), status = 0x3
    Nov 6 23:43:07 meera pppd[6136]: Connect script failed
    Nov 6 23:43:07 meera pppd[6136]: Waiting for 1 child processes...
    Nov 6 23:43:07 meera pppd[6136]: script /usr/sbin/pptp 204.197.218.90 --nolaunchpppd --loglevel 0 --logstring nm-pptp-service-6132, pid 6138
    Nov 6 23:43:07 meera pptp[6138]: nm-pptp-service-6132 warn[decaps_hdlc:pptp_gre.c:204]: short read (-1): Input/output error
    Nov 6 23:43:07 meera pptp[6138]: nm-pptp-service-6132 warn[decaps_hdlc:pptp_gre.c:216]: pppd may have shutdown, see pppd log
    Nov 6 23:43:07 meera pptp[6143]: nm-pptp-service-6132 log[callmgr_main:pptp_callmgr.c:234]: Closing connection (unhandled)
    Nov 6 23:43:07 meera pppd[6136]: Script /usr/sbin/pptp 204.197.218.90 --nolaunchpppd --loglevel 0 --logstring nm-pptp-service-6132 finished (pid 6138), status = 0x0
    Nov 6 23:43:07 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
    Nov 6 23:43:07 meera pptp[6143]: nm-pptp-service-6132 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state)
    Nov 6 23:43:07 meera pppd[6136]: Exit.
    Nov 6 23:43:07 meera NetworkManager[1137]: <warn> VPN plugin failed: 1
    Nov 6 23:43:07 meera NetworkManager[1137]: <info> VPN plugin state changed: stopped (6)
    Nov 6 23:43:07 meera NetworkManager[1137]: <info> VPN plugin state change reason: 0
    Nov 6 23:43:07 meera NetworkManager[1137]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.

    Read the article

  • Use your iPhone or iPod Touch as a Boxee Remote

    - by DigitalGeekery
    Are you a Boxee user looking for a remote control solution? Well, you might not need to look any further than your pocket. The free Boxee Remote App turns your iPhone or iPod Touch into a simple and easy-to-use Boxee remote. The Boxee Remote App works over WiFi, so there is no need to buy or install additional hardware on your PC. Plus, you don't even need to be within line of sight for it to work. Using the Boxee Remote App: Download the free Boxee Remote App from the App Store and install it on your iPhone or iPod Touch (see the download link below). Next, make sure you have Boxee running on your PC. Select the Boxee icon to open the app. The first time you log in you'll be greeted by an introduction screen that explains the two modes. Click Continue. When opened in "Button" mode, you'll be presented with 4 directional buttons, an "OK" button, and a back-arrow button that works like the Esc key does in Boxee. Button mode performs just like a normal remote: touching the directional buttons moves your on-screen selection right, left, up, and down, and tapping the OK button opens or selects an item. To enter "Gesture" mode, tap the Gesture button along the top of the screen. Gesture mode works like a touchpad or trackball on a laptop: you drag the Boxee icon with your thumb or finger across the screen to move around within Boxee. The icon turns red while being dragged or touched. Simply tap the icon to select. The Settings button allows you to manually add or delete a host computer, or adjust the sensitivity of the controls. If you need to enter text, such as logon credentials for an app, the on-screen keyboard will pop up. While watching a video you'll have on-screen Stop and Pause buttons along with a volume slider. The Boxee Remote App is simple and easy to use. As long as you can connect via WiFi, you can use it to control any instance of Boxee running on any computer on your network. Download the Boxee Remote App

    Read the article

  • Eclipse no longer useful

    - by dgood1
    When I got Eclipse from the Ubuntu Software Center, it was good and worked fine. I could work on Java projects without problems. This week I was required to add ADT and tried the ADT bundle, assuming it had everything I needed, seeing that the SDK route had more steps. So now I can create Android apps using the ADT bundle. I tried to work on my Java projects again and discovered: I can't run my Java projects: I get "The selection cannot be launched. And there are no recent launches." I also believe Eclipse doesn't know it's a Java program, because it's all in black and white - not the usual green/blue/red/black colouring for comments, variables and Strings. I can't make new projects of ANYTHING unless I use the ADT bundle; New Project only offers CVS (whatever that is). My perspectives seem limited: I remember having more choices, and now I'm limited to [Java], Resource, CVS Repository, Debug, Team Sync. I was told I should be able to use perspectives to swap between Android and Java development. Even after the ADT installation using "Install New Software", nothing. I can't uninstall/purge/remove Eclipse via the terminal. I tried removing it then reinstalling it via the Ubuntu Software Center - no results other than its temporary removal. (Possibly unrelated) A large number of repositories are not found when updating Eclipse. (See Step 8 in the summary of what I did.) Although, on checking the versions and installation history, I confirmed Android and Java are installed; it probably just doesn't know they're there. Eclipse Indigo: Version 3.7.2, Build id: I20110613-1736. Summary of what I did before and during the problem: Downloaded the ADT bundle. Attempted instructions from my teacher (Install New Software) (failed, but other than an annoying "can't find repository" during each update, no damage to report) (fixed). Ran the "eclipse" executable from the ADT bundle. Updated Eclipse. (After the restart, I noticed the problem.) NOTE: other than window arrangement, I had no customizations. Played around with Window > Preferences and Project > Properties; restored the default settings after no results. Tried "apt-get purge eclipse"; it couldn't find Eclipse, so nothing happened. Used the Software Center. No results. Tried swapping workspaces: a different folder, a deeper folder, renaming - all return the same problem. Deleted the ADT bundle (browsed folders, then deleted). Got the ADT SDK only and installed it; can't find any changes other than some disk-space usage. Of course, I can't make Android apps until I unzip the bundle again. Window > Preferences > Install/Update > Available Software Sites: checked as many repositories as possible, then updated. Still nothing. I'm about to make a second attempt at uninstalling it, because I think my last action is just taking up space. But I'll wait for tomorrow, in case an answer helps. Any thoughts?

    Read the article

  • ArchBeat Link-o-Rama Top 10 for December 9-15, 2012

    - by Bob Rhubart
    You click, we listen. The following list reflects the Top 10 most popular items posted on the OTN ArchBeat Facebook page for the week of December 9-15, 2012.

    DevOps Basics II: What is Listening on Open Ports and Files – WebLogic Essentials | Dr. Frank Munz
    "Can you easily find out which WebLogic servers are listening to which port numbers and addresses?" asks Dr. Frank Munz. The good doctor has an answer—and a tech tip.

    Using OBIEE against Transactional Schemas Part 4: Complex Dimensions | Stewart Bryson
    "Another important entity for reporting in the Customer Tracking application is the Contact entity," says Stewart Bryson. "At first glance, it might seem that we should simply build another dimension called Dim – Contact, and use analyses to combine our Customer and Contact dimensions along with our Activity fact table to analyze Customer and Contact behavior."

    SOA 11g Technology Adapters – ECID Propagation | Greg Mally
    "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.

    Podcast: DevOps and Continuous Integration
    In Part 1 of a 3-part program, panelists Tim Hall (Senior Director of product management for Oracle Enterprise Repository and Oracle's Application Integration Architecture), Robert Wunderlich (Principal Product Manager for Oracle's Application Integration Architecture Foundation Pack) and Peter Belknap (Director of product management for Oracle SOA Integration) discuss why DevOps matters and how it changes development methodologies and organizational structure.

    Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis shares his thoughts -- and a sample application -- dealing with an "interesting ADF behavior" encountered over the weekend.

    Cloud Deployment Models | B. R. Clouse
    Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."

    Service governance morphs into cloud API management | David Linthicum
    "When building and using clouds, the ability to manage APIs or services is the single most important item you can provide to ensure the success of the project," says David Linthicum. "But most organizations driving a cloud project for the first time have no experience handling a service-based architecture and don't see the need for API management until after deployment. By then, it's too late."

    Oracle Fusion Middleware Security: Password Policy in OAM 11g R2 | Rob Otto
    Rob Otto continues the Oracle Fusion Middleware A-Team "Oracle Access Manager Academy" series with a detailed look at OAM's ability to support "a subset of password management processes without the need to use Oracle Identity Manager and LDAP Sync."

    Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar
    Could you call that a surprise ending?
    Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be successful with ADF."

    Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad
    "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post.

    Thought for the Day
    "There is no code so big, twisted, or complex that maintenance can't make it worse." — Gerald Weinberg
    Source: SoftwareQuotes.com

    Read the article

  • Is Cloud Security Holding Back Social SaaS?

    - by Mike Stiles
    The true promise of social data co-mingling with enterprise data to influence and inform social marketing (all marketing really) lives in cloud computing. The cloud brings processing power, services, speed and cost savings the likes of which few organizations could ever put into action on their own. So why wouldn’t anyone jump into SaaS (Software as a Service) with both feet? Cloud security. Being concerned about security is proper and healthy. That just means you’re a responsible operator. Whether it’s protecting your customers’ data or trying to stay off the radar of regulatory agencies, you have plenty of reasons to make sure you’re as protected from hacking, theft and loss as you can possibly be. But you also have plenty of reasons to not let security concerns freeze you in your tracks, preventing you from innovating, moving the socially-enabled enterprise forward, and keeping up with competitors who may not be as skittish regarding SaaS technology adoption. Over half of organizations are transferring sensitive or confidential data to the cloud, an increase of 10% over last year. With the roles and responsibilities of CMO’s, CIO’s and other C’s changing, the first thing you should probably determine is who should take point on analyzing cloud software options, providers, and policies. An oft-quoted Ponemon Institute study found 36% of businesses don’t have a cloud security policy at all. So that’s as good a place to start as any. What applications and data are you comfortable housing in the cloud? Do you have a classification system for data that clearly spells out where data types can go and how they can be used? Who, both internally and at the cloud provider, will function as admins? What are the different levels of admin clearance? Will your security policies and procedures sync up with those of your cloud provider? The key is verifiable trust. Trust in cloud security is actually going up. 1/3 of organizations polled say it’s the cloud provider who should be responsible for data protection. And when you look specifically at SaaS providers, that expectation goes up to 60%. 57% “strongly agree” or “agree” there’s more confidence in cloud providers’ ability to protect data. In fact, some businesses bypass the “verifiable” part of verifiable trust. Just over half have no idea what their cloud provider does to protect data. And yet, according to the “Private Cloud Vision vs. Reality” InformationWeek Report, 82% of organizations say security/data privacy are one of the main reasons they’re still holding the public cloud at arm’s length. That’s going to be a tough position to maintain, because just as social is rapidly changing the face of marketing, big data is rapidly changing the face of enterprise IT. Netflix, who’s particularly big on the benefits of the cloud, says, "We're systematically disassembling the corporate IT components." An enterprise can never realize the full power of big data, nor get the full potential value out of it, if it’s unwilling to enable the integrations and dataset connections necessary in the cloud. Because integration is called for to reduce fragmentation, a standardized platform makes a lot of sense. With multiple components crafted to work together, you’re maximizing scalability, optimization, cost effectiveness, and yes security and identity management benefits. You can see how the incentive is there for cloud companies to develop and add ever-improving security features, making cloud computing an eventual far safer bet than traditional IT. 
    @mikestiles
    Photo: stock.xchng

    Read the article

  • Introducing the First Global Web Experience Management Content Management System

    - by kellsey.ruppel
    By Calvin Scharffs, VP of Marketing and Product Development, Lingotek. Globalizing online content is more important than ever. The total spending power of online consumers around the world is nearly $50 trillion, a recent Common Sense Advisory report found. Three years ago, enterprises would have had to translate content into 37 languages to reach 98 percent of Internet users. This year, it takes 48 languages to reach the same share of users. For companies seeking to increase global market share, "translate frequently and fast" is the name of the game. Today's content is dynamic and ever-changing, covering the gamut from social media sites to company forums to press releases. With high-quality translation and localization, enterprises can tailor content to consumers around the world. Speed and Efficiency in Translation: When it comes to the "frequently and fast" part of the equation, enterprises run into problems. Professional service providers deliver translated content in files, which company workers then have to insert into their CMS manually. When companies update or edit source documents, they have to hunt down all the translated content and change each document individually. Lingotek and Oracle have solved the problem by making the Lingotek Collaborative Translation Platform fully integrated and interoperable with Oracle WebCenter Sites Web Experience Management. Lingotek combines best-in-class machine translation solutions, real-time community/crowd translation and professional translation to enable companies to publish globalized content in an efficient and cost-effective manner. WebCenter Sites Web Experience Management simplifies the creation and management of different types of content across multiple channels, including social media. Globalization Without Interrupting the Workflow: The combination of the Lingotek platform with WebCenter Sites ensures that the process of authoring, publishing, targeting, optimizing and personalizing global Web content is automated, saving companies the time and effort of manually entering content. Users can seamlessly integrate translation into their WebCenter Sites workflows, optimizing their translation and localization across web, social and mobile channels in multiple languages. The original structure and formatting of all translated content is maintained, saving workers the time and effort involved with inserting the translated text and reformatting. In addition, Lingotek's continuous publication model addresses the dynamic nature of content, automatically updating the status of translated documents within the WebCenter Sites workflow whenever users edit or update source documents. This enables users to sync translations in real time. The translation, localization, updating and publishing of Web Experience Management content happens in a single, uninterrupted workflow. The net result of Lingotek Inside for Oracle WebCenter Sites Web Experience Management is a system that more than meets the need for frequent and fast global translation. Workflows are accelerated. The globalization of content becomes faster and more streamlined. Enterprises save time, cost and effort in translation project management, and can address the needs of each of their global markets in a timely and cost-effective manner. About Lingotek: Lingotek is an Oracle Gold Partner and is going to be one of the first Oracle Validated Integrator (OVI) partners with WebCenter Sites. Lingotek is also an OVI partner with Oracle WebCenter Content.
Watch a video about how Lingotek Inside for Oracle WebCenter Sites works! Oracle WebCenter will be hosting a webinar, “Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter," tomorrow, September 13th. To attend the webinar, please register now! For more information about Lingotek for Oracle WebCenter, please visit http://www.lingotek.com/oracle.

    Read the article

  • Graphical driver 13.10 ATI RV630

    - by Michael Cephalus
    I started updating the distro from 13.04 to 13.10. Then I got my hands on a Radeon HD 2600 and installed the RV630-compatible Catalyst driver from the official web page. After that, the X server crashed every time I opened a browser or VLC, for example. I noticed that there was no driver listed in the configuration underneath:

    michael@statubtunu:~$ lshw -c video
    WARNING: you should run this program as super-user.
      *-display UNCLAIMED
           description: VGA compatible controller
           product: RV630 PRO [Radeon HD 2600 PRO]
           vendor: Advanced Micro Devices, Inc. [AMD/ATI]
           physical id: 0
           bus info: pci@0000:01:00.0
           version: 00
           width: 64 bits
           clock: 33MHz
           capabilities: vga_controller bus_master cap_list
           configuration: latency=0
           resources: memory:d0000000-dfffffff memory:e0500000-e050ffff ioport:1000(size=256) memory:e0000000-e001ffff

    I installed additional drivers from Jockey and the Ubuntu Software Center ATI driver, though that only made the X server crash completely, and when I type:

    michael@statubtunu:~$ sudo startx

    X.Org X Server 1.14.3
    Release Date: 2013-09-12
    X Protocol Version 11, Revision 0
    Build Operating System: Linux 3.2.0-37-generic i686 Ubuntu
    Current Operating System: Linux statubtunu 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 17:26:33 UTC 2013 i686
    Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.11.0-13-generic root=UUID=8fb2e395-0ea2-4f45-ac66-225696b7ce2c ro quiet splash vt.handoff=7
    Build Date: 15 October 2013 09:23:29AM
    xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support)
    Current version of pixman: 0.30.2
    Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
    Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
    (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 12 18:50:02 2013
    (==) Using system config directory "/usr/share/X11/xorg.conf.d"
    Initializing built-in extension Generic Event Extension
    Initializing built-in extension SHAPE
    Initializing built-in extension MIT-SHM
    Initializing built-in extension XInputExtension
    Initializing built-in extension XTEST
    Initializing built-in extension BIG-REQUESTS
    Initializing built-in extension SYNC
    Initializing built-in extension XKEYBOARD
    Initializing built-in extension XC-MISC
    Initializing built-in extension SECURITY
    Initializing built-in extension XINERAMA
    Initializing built-in extension XFIXES
    Initializing built-in extension RENDER
    Initializing built-in extension RANDR
    Initializing built-in extension COMPOSITE
    Initializing built-in extension DAMAGE
    Initializing built-in extension MIT-SCREEN-SAVER
    Initializing built-in extension DOUBLE-BUFFER
    Initializing built-in extension RECORD
    Initializing built-in extension DPMS
    Initializing built-in extension X-Resource
    Initializing built-in extension XVideo
    Initializing built-in extension XVideo-MotionCompensation
    Initializing built-in extension SELinux
    Initializing built-in extension XFree86-VidModeExtension
    Initializing built-in extension XFree86-DGA
    Initializing built-in extension XFree86-DRI
    Initializing built-in extension DRI2
    Loading extension GLX
    ERROR: could not insert 'fglrx': No such device
    (II) [KMS] drm report modesetting isn't supported.
    (EE)
    (EE) Backtrace:
    (EE) 0: /usr/bin/X (xorg_backtrace+0x49) [0xb77780b9]
    (EE) 1: /usr/bin/X (0xb75d8000+0x1a3e24) [0xb777be24]
    (EE) 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb75b540c]
    (EE) 3: /usr/bin/X (xf86findOption+0x2a) [0xb7681daa]
    (EE) 4: /usr/bin/X (xf86findOptionValue+0x23) [0xb7681f43]
    (EE) 5: /usr/bin/X (0xb75d8000+0x7ebfd) [0xb7656bfd]
    (EE) 6: /usr/bin/X (xf86ProcessOptions+0x37) [0xb7657507]
    (EE) 7: /usr/lib/xorg/modules/libvbe.so (vbeDoEDID+0xe7) [0xb5eb8647]
    (EE) 8: /usr/lib/xorg/modules/drivers/vesa_drv.so (0xb5ee7000+0x287c) [0xb5ee987c]
    (EE) 9: /usr/bin/X (InitOutput+0xb23) [0xb7659c33]
    (EE) 10: /usr/bin/X (0xb75d8000+0x2a30b) [0xb760230b]
    (EE) 11: /lib/i386-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0xb71ba905]
    (EE) 12: /usr/bin/X (0xb75d8000+0x2a908) [0xb7602908]
    (EE)
    (EE) Segmentation fault at address 0x5
    (EE) Fatal server error:
    (EE) Caught signal 11 (Segmentation fault). Server aborting
    (EE)
    (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help.
    (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
    (EE)
    (EE) Server terminated with error (1). Closing log file.

    This is what comes up, but no GUI. Is there any way to deal with this?

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle's enthusiasm in evangelizing its latest innovations may leave some to wonder if we've lost sight of the fact that not all database applications are created equal. After all, many databases don't have the business requirements for high availability and data protection that require all of Oracle's 'stuff'. For many real-world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service-level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost. Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is 'HA enough'. Bronze is based upon a single-instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure, as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business-critical applications that can't accept vulnerability to single points of failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real-time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero-downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement.
    But they deliver substantial value for your most critical applications, where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection - the "one size does not fit all" of this blog post's title. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. The MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial in the level of HA appropriate for different service-level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another - whether it's your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast; Maximize Availability with Oracle Database 12c - Technical White Paper.

    Read the article

  • Customizing the Test Status on the TFS 2010 SSRS Stories Overview Report

    - by Bob Hardister
    This post shows how to customize the SQL query used by the Team Foundation Server 2010 SQL Server Reporting Services (SSRS) Stories Overview report. The objective is to show test status for the current version while including user story status for both the current and prior versions. Why? Because we don't copy completed user stories into the next release. We only want one instance of a user story for the product, because we believe copies can get out of sync when they are supposed to be the same. In the example below, work items for the current version are on the area path root and prior versions are not on the area path root. However, you can use area path or iteration path criteria in the query as suits your needs. In any case, here's how you do it:

    1. Download a copy of the report RDL file as a backup.
    2. Open the report by clicking the edit down arrow and selecting "Edit in Report Builder".
    3. Right-click on the dsOverview dataset and select Dataset Properties.
    4. Update the following SQL per the comments in the code.

    Customization 1 of 3:

    ...
    -- Get the list of deliverable work items that have Test Cases linked
    DECLARE @TestCases Table (DeliverableID int, TestCaseID int);
    INSERT @TestCases
        SELECT h.ID, flh.TargetWorkItemID
        FROM @Hierarchy h
            JOIN FactWorkItemLinkHistory flh
                ON flh.SourceWorkItemID = h.ID
                    AND flh.WorkItemLinkTypeSK = @TestedByLinkTypeSK
                    AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                    AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
            JOIN [CurrentWorkItemView] wi ON flh.TargetWorkItemID = wi.[System_ID]
                AND wi.[System_WorkItemType] = @TestCase
                AND wi.ProjectNodeGUID = @ProjectGuid
                -- Customization 1 of 3: only include test status information when
                -- the test case area path = root. Added the following 2 statements.
                AND wi.AreaPath = '{the root area path of the team project}'
    ...

    Customization 2 of 3:

    ...
    -- Get the Bugs linked to the deliverable work items directly
    DECLARE @Bugs Table (ID int, ActiveBugs int, ResolvedBugs int, ClosedBugs int, ProposedBugs int)
    INSERT @Bugs
        SELECT h.ID,
            SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
            SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
            SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
            SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
        FROM @Hierarchy h
            JOIN FactWorkItemLinkHistory flh
                ON flh.SourceWorkItemID = h.ID
                AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
            JOIN [CurrentWorkItemView] wi
                ON wi.[System_WorkItemType] = @Bug
                AND wi.[System_Id] = flh.TargetWorkItemID
                AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                AND wi.[ProjectNodeGUID] = @ProjectGuid
                -- Customization 2 of 3: only include test status information when
                -- the test case area path = root. Added the following statement.
                AND wi.AreaPath = '{the root area path of the team project}'
        GROUP BY h.ID
    ...

    Customization 3 of 3:

    ...
    -- Add the Bugs linked to the Test Cases which are linked to the deliverable work items.
    -- Walks the links from the user stories to test cases (via the tested-by link), and then to
    -- bugs that are linked to the test case. We don't need to join to the test case in the work
    -- item history view.
    --
    --    [WIT:User Story/Requirement] --> [Link:Tested By] --> [Link:any type] --> [WIT:Bug]
    INSERT @Bugs
        SELECT tc.DeliverableID,
            SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
            SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
            SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
            SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
        FROM @TestCases tc
            JOIN FactWorkItemLinkHistory flh
                ON flh.SourceWorkItemID = tc.TestCaseID
                AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
            JOIN [CurrentWorkItemView] wi
                ON wi.[System_Id] = flh.TargetWorkItemID
                AND wi.[System_WorkItemType] = @Bug
                AND wi.[ProjectNodeGUID] = @ProjectGuid
                -- Customization 3 of 3: only include test status information when
                -- the test case area path = root. Added the following statement.
                AND wi.AreaPath = '{the root area path of the team project}'
        GROUP BY tc.DeliverableID
    ...

    5. Save the report and you're all set.

    Note: you may need to re-apply custom parameter changes like pre-selected sprints.

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. The series so far:

    Asynchronous functions
    TPL Dataflow
    Task-based Asynchronous Pattern

    Part II: TPL Dataflow

    Definition (quoting the Async CTP documentation): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.” This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications”. Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? Etc.

    Dataflow Blocks

    TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let’s review the data processing blocks available. Blocks are categorized into three groups:

    Buffering Blocks
    Executor Blocks
    Joining Blocks

    Think of them as electronic circuitry components :)

    1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers, but only one will actually process each item).

    2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to any number of consumers). The developer can provide a function to make a copy of the data if necessary.

    3. WriteOnceBlock<T>: stores only one value, and once it’s been set, it can never be replaced or overwritten (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.

    4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (the degree of parallelism).

    5. TransformBlock<TInput, TOutput>: an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order.

    6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock, but produces one or more outputs from each input.

    7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).

    8. JoinBlock<T1, T2, …>: generates tuples from all inputs (it aggregates inputs). Inputs can be of any types you want (T1, T2, etc.).

    9. BatchedJoinBlock<T1, T2, …>: aggregates tuples of collections. It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>).

    Next time I will show some examples of usage for each TDF block.

    * Images taken from Microsoft’s Async CTP documentation.
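
    In the meantime, here is a minimal sketch of wiring two of these blocks into a pipeline (assuming a reference to the System.Threading.Tasks.Dataflow assembly from the Async CTP; the block names come from the documentation quoted above, while the pipeline itself is purely illustrative):

    using System;
    using System.Threading.Tasks.Dataflow;

    class DataflowSketch
    {
        static void Main()
        {
            // TransformBlock: squares each input, preserving input order.
            var square = new TransformBlock<int, int>(n => n * n);

            // ActionBlock: consumes each result as it arrives.
            var print = new ActionBlock<int>(n => Console.WriteLine(n));

            // Wire the pipeline: square --> print.
            square.LinkTo(print);

            for (int i = 1; i <= 5; i++)
                square.Post(i);          // push data into the head of the pipeline

            square.Complete();           // signal that no more input is coming
            square.Completion.Wait();    // wait for the transform to drain
            print.Complete();            // then complete the consumer
            print.Completion.Wait();     // and wait for printing to finish
        }
    }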

    Read the article

  • iOS 5 New Features vs Android

    - by kerry
    Browsing through the iOS 5 features list, I can’t help but notice that a lot of it is catch-up. Having owned both an iPhone and an Android for a considerable amount of time, I figured I would jot down my opinions.

    Notification Center – Completely ripped off from Android, but looks good and is a much-needed addition.

    iMessage – This is very interesting, as most people who would think it’s cool probably wouldn’t really understand the significance. Basically, Apple is adding an IM application to iOS. Now iPhone / iPad users can sit around messaging each other about how cool it is, like Crackberry users circa 2003. I guess the only real improvement over MMS is that you can easily set up groups, see when each other are typing, and don’t incur text messaging charges; at the expense of leaving your non-iOS buddies out (who wants to talk to those losers anyways?).

    Newsstand – An app update and not an OS one (Apple typically doesn’t make that distinction). It all seems like stuff my current Nook already does. Note: I did look to compare prices, but it seems that information is not available without downloading iTunes. Lame.

    Reminders – TODO lists are ho-hum, but the ability to get reminders when you arrive at or leave a location is pretty cool.

    Twitter integration – The fact that the best Apple can come up with is ‘one at a time’ online service integration is laughable at best.

    Camera – You can control it from the lock screen. Now you’ll have tons of pocket-lint photos in your iCloud to go along with the wicked shot of that cheetah that just unexpectedly ran by your apartment.

    Photos – Speaking of iCloud, all of your devices’ photos will be synced through it. That’s cool, I guess; not sure if Android will do the same.

    Safari – What? You haven’t been reading RSS feeds on your device this whole time? Something tells me you aren’t about to start.

    PC Free – Finally, Apple untethers the iPhone. What took them so long?

    Game Center – This should be an interesting service. Attention Apple fanboys: immediately forget how they are blatantly copying Microsoft achievements (at least rename them).

    Wifi Sync – Just couldn’t cut the cord completely, could they? For what it’s worth, the Zune has been doing this for 5 years now.

    All in all, a pretty big update. Mostly iCloud. Mostly keeping up the mobile status quo. As an Android user, I can’t say there is anything I am envious of.

    Read the article

  • Fun tips with Analytics

    - by user12620172
    If you read this blog, I am assuming you are at least familiar with the Analytics functions in the ZFSSA. They are basically amazing: very powerful and deep. However, you may not be aware of some great, hidden functions inside the Analytics screen. Once you open a metric, the toolbar looks like this:

    Now, I’m not going over every tool, as we have done that before, and you can hover your mouse over them and they will tell you what they do. But… check this out. Open a metric (CPU Percent Utilization works fine), and click on the “Hour” button, which is the 2nd clock icon. That’s easy; you are now looking at the last hour of data. Now, hold down your ‘Shift’ key and click it again. Now you are looking at 2 hours of data. Hold down Shift and click it again, and you are looking at 3 hours of data. Are you catching on yet? You can do this with not only the ‘Hour’ button, but also with the ‘Minute’, ‘Day’, ‘Week’, and ‘Month’ buttons. Very cool. It also works with the ‘Show Minimum’ and ‘Show Maximum’ buttons, allowing you to go to the next iteration of either of those.

    One last button you can Shift-click is the handy ‘Drill’ button. This button usually drills down on one specific aspect of your metric. If you Shift-click it, it will display a “Rainbow Highlight” of the current metric. This works best if the metric has many ‘Range Average’ items in the left-hand window. Give it a shot.

    Also, one will sometimes click on a certain second of data in the graph, like this: In this case, I clicked 4:57 and 21 seconds, and the ‘Range Average’ on the left went away, replaced by the time stamp. At this point it seems to some people that you are now stuck and cannot get back to an average for the whole chart. However, you can actually click on the actual time stamp of “4:57:21” right above the chart. Even though your mouse does not change into the typical browser finger that most links show, you can click it, and it will change your range back to the full metric.

    Another trick you may like is to save a certain view or look of a group of graphs. Most of you know you can save a worksheet, but did you know you can Sync them, Pause them, and then Save that? This will save the paused state, allowing you to view it forever the way you see it now.

    Heatmaps. Heatmaps are cool, and look like this: Some metrics use them and some don’t. If you have one and wish to zoom it vertically, try this. Open a heatmap metric like my example above (I believe every metric that deals with latency will show as a heatmap). Select one or two of the ranges on the left. Click the “Change Outlier Elimination” button. Click it again and check out what it does.

    Enjoy. Perhaps my next blog entry will be about the best Analytics metrics to keep your eyes on, and how you can use the Alerts feature to watch them for you.

    Steve

    Read the article

  • Running PowerShell from MSdeploy runcommand does not exit

    - by Peter Moberg
    I am trying to get MSDeploy to execute a PowerShell script on a remote server. This is how I execute MSDeploy:

    msdeploy \
      -verb:sync \
      -source:runCommand='C:\temp\HelloWorld.bat',waitInterval=15000,waitAttempts=1 \
      -dest:auto,computername=$WebDeployService$Credentials \
      -verbose

    HelloWorld.bat contains:

    echo "Hello world!"
    powershell.exe C:\temp\WebDeploy\Package\HelloWorld.ps1
    echo "Done"

    HelloWorld.ps1 only contains:

    Write-Host "Hello world from PowerShell!"

    However, it seems like PowerShell never terminates. This is the output from running msdeploy:

    Verbose: Performing synchronization pass #1.
    Verbose: Source runCommand (C:\temp\HelloWorld.bat) does not match destination (C:\temp\HelloWorld.bat) differing in attributes (isSource['True','False']). Update pending.
    Info: Updating runCommand (C:\temp\HelloWorld.bat).
    Info:
    Info: C:\temp>echo "Hello world!"
    "Hello world!"
    C:\temp\WebDeploy>powershell.exe C:\temp\HelloWorld.ps1
    Info: Hello world from Powershell!
    Info:
    Warning: The process 'C:\Windows\system32\cmd.exe' (command line '/c "C:\Users\peter\AppData\Local\Temp\gaskgh55.b2q.bat"') is still running. Waiting for 15000 ms (attempt 1 of 1).
    Error: The process 'C:\Windows\system32\cmd.exe' (command line '/c "C:\Users\peter\AppData\Local\Temp\gaskgh55.b2q.bat"') was terminated because it exceeded the wait time.
    Error count: 1.

    Does anyone know a solution?

    Read the article

  • Xcode: Unable to open project... cannot be opened because the project file cannot be parsed...

    - by Chris Butler
    Hi everyone. I have been working for a while to create an iPhone app. Today, when my battery was low, I was working and constantly saving my source files, and then the power went out... Now, when I plug my computer back in with good power and try to open my project file, I get an error:

    "Unable to Open Project: Project ... cannot be opened because the project file cannot be parsed."

    Does anyone know of a way I can recover from this? I tried using an older project file, re-inserting it, and then compiling. It gives me a funky error, which is probably because it isn't finding all the files it wants... I really don't want to rebuild my project from scratch if possible. Thanks in advance.

    EDIT: OK, I did a diff between this and a slightly older project file that worked, and saw that there was some corruption in the file. After merging them (the good and newest parts), it is now working. Great points about the SVN. I have one, but there has been some funkiness trying to sync Xcode with it. I'll definitely spend more time with it now... ;-) Thanks for everyone's comments and suggestions.

    Read the article

  • How to build android cts? And how to add and run your test case?

    - by Leox
    As of 2.0, the CTS is freely downloadable from Android's repository, but there is no documentation for it. Can anyone tell me: How do you build the CTS? Is there a standard procedure? How do you run the CTS? How do you add a customized test case?

    Here, I'll share my experience. After repo-syncing all the source, you can't directly run "make" to build it all; you will get some errors. Now, I'm trying to first build the Android source without the CTS, and then build the CTS alone. Also, here are some references for running the CTS:

    http://i-miss-erin.blogspot.com/2010/05/how-to-add-test-plan-package-to-android.html
    www.mentby.com/chenny/how-does-cts-work-where-can-i-get-the-test-streams.html
    www.jxva.com/?act=blog!article&articleId=157

    1st update @ 5-13 18:39 +8:00

    I did the following steps:
    1. Build the Android source without the CTS (move cts out of the $SDK_ROOT).
    2. Build the CTS (move cts back).

    Both JDK 1.5 and 1.6 give the following errors:
    1. The 1st time, "make cts" reports: "Caused by: java.io.FileNotFoundException: ...(Too many open files)"
    2. The 2nd time, "make cts" reports: "acp: file 'out/host/linux-x86/obj/EXECUTABLES/vm-tests_intermediates/tests/data' does not exist"
    3. The 3rd time, "make cts" reports: "/bin/bash: line 0: cd: out/host/linux-x86/obj/EXECUTABLES/vm-tests_intermediates/hostjunit_files/classes: No such file or directory"
    4. The last time, "make cts" reports: "zip error: Nothing to do! (try: zip -q -r ../../android.core.vm-tests.jar . -i .)"

    Read the article

  • Help with SqlCeChangeTracking

    - by MusiGenesis
    I'm trying to use a new class in SqlCe 3.5 SP2 called SqlCeChangeTracking. This class (allegedly) lets you turn on change tracking on a table without using RDA replication or Sync Services. Assuming you have an open SqlCeConnection, you enable change tracking on a table like this:

    SqlCeChangeTracking tracker = new SqlCeChangeTracking(conn);
    tracker.EnableTracking(TableName, TrackingKeyType.PrimaryKey, TrackingOptions.All);

    This appears to work, sort of. When I open the SDF file and view it in SQL Server Management Studio, the table has three additional fields: __sysChangeTxBsn, __sysInsertTxBsn and __sysTrackingContext. According to the sparse documentation, these columns (along with the __sysOCSDeletedRows system table) are used to track changes. The problem is that these three columns always contain NULL values for all rows, no matter what I do. I can add, delete, edit, etc., and those columns remain NULL no matter what (and no deleted records ever show up in __sysOCSDeletedRows). I have found virtually no documentation on this class at all, and the promised MSDN API reference appears to be non-existent. Anybody know how to use this class?
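
    For reference, a minimal self-contained version of the enabling code looks like this (a sketch only; the database file and table names are placeholders, and the SqlCeChangeTracking call is exactly the one shown above):

    using System.Data.SqlServerCe;

    class TrackingDemo
    {
        static void Main()
        {
            // "MyData.sdf" and "Customers" are placeholder names.
            using (var conn = new SqlCeConnection("Data Source=MyData.sdf"))
            {
                conn.Open();

                // Enable change tracking on the table, keyed by its
                // primary key, for all operations (insert/update/delete).
                var tracker = new SqlCeChangeTracking(conn);
                tracker.EnableTracking("Customers",
                                       TrackingKeyType.PrimaryKey,
                                       TrackingOptions.All);
            }
        }
    }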

    Read the article

  • Does Subversion have an analogue to VSS's links?

    - by bta
    I am migrating a Visual SourceSafe code repository to Subversion and I am running into a problem. Here is a simplified layout of our current source code tree (in VSS):

    project_root\
    |-libs\
    |-tools\
    |-arch_1\
    | |-include
    | |-source
    |-arch_2\
      |-include
      |-source

    My problem is in our two arch_ folders. Each arch_ folder will be built for a different hardware architecture, but the contents of the two folders are practically identical. The files in arch_2 are merely VSS links to the files in arch_1, with only a small handful of exceptions. Work is generally checked into and out of the arch_1 folder, and the VSS links make sure that any code checked in here is updated in the arch_2 folder as well.

    Moving to Subversion, is there anything that will behave like VSS's links? That is, is there a way to have two files in separate folders magically associated with one another such that they will always be in sync with each other (changes to one will affect the other as well)?

    Note: I know the correct answer here is to fix the build system. The build system on this project was pieced together roughly a decade ago, back when our compiler/build system wasn't intelligent enough to compile the same folder full of source code for two different architectures. Thanks to make and updated compilers, we can re-write the build system to eliminate this dependency on two parallel source folders. However, this will take time that we don't have at the moment (we are losing our license to our VSS server and are being forced to migrate on rather short notice). I am hoping to find a Subversion solution to this problem because at the moment, our time would be much better spent making the migration run smoothly than re-writing the build system (which is next on my to-do list!). Thank you for your help!

    Read the article

  • Opening html modal dialog

    - by Hulk
    All, I need to get the contents below into a modal window. I have been trying this for a while today. When the modal window is opened, the background body contents should not be accessible, and on page reload the modal popup should close, leaving the contents of the text areas as they are.

    <a href="" id="link">Open Popup</a>
    <table>
      <tr width="10%">
        <td>
          <textarea name="text1" id="text1" rows="15" cols="65" onscroll="sync();" spellcheck="false"></textarea>
        </td>
        <td>
          <textarea name="text2" id="text2" rows="15" cols="65" spellcheck="false"></textarea>
        </td>
      </tr>
      <tr width="90%">
        <td> </td>
      </tr>
    </table>
    <input type="button" value="Compile" onclick="stat('')"/>
    <input type="button" value="Reload Page" onclick="window.location.href=window.location.href">

    Thanks..

    Read the article

  • Faster way to transfer table data from linked server

    - by spender
    After much fiddling, I've managed to install the right ODBC driver and have successfully created a linked server on SQL Server 2008, by which I can access my PostgreSQL db from SQL Server. I'm copying all of the data from some of the tables in the PgSQL DB into SQL Server using merge statements that take the following form:

    with mbRemote as
    (
        select *
        from openquery(someLinkedDb,'select * from someTable')
    )
    merge into someTable mbLocal
    using mbRemote on mbLocal.id=mbRemote.id
    when matched
    /*edit*/
    /*clause below really speeds things up when many rows are unchanged*/
    /*can you think of anything else?*/
    and not (mbLocal.field1=mbRemote.field1
        and mbLocal.field2=mbRemote.field2
        and mbLocal.field3=mbRemote.field3
        and mbLocal.field4=mbRemote.field4)
    /*end edit*/
    then update set
        mbLocal.field1=mbRemote.field1,
        mbLocal.field2=mbRemote.field2,
        mbLocal.field3=mbRemote.field3,
        mbLocal.field4=mbRemote.field4
    when not matched then insert
    (
        id,
        field1,
        field2,
        field3,
        field4
    )
    values
    (
        mbRemote.id,
        mbRemote.field1,
        mbRemote.field2,
        mbRemote.field3,
        mbRemote.field4
    )
    when not matched by source then delete;

    After this statement completes, the local (SQL Server) copy is fully in sync with the remote (PgSQL server). A few questions about this approach: Is it sane? It strikes me that an update will be run over all fields in local rows that haven't necessarily changed. The only prerequisite is that the local and remote id fields match. Is there a more fine-grained approach, or a way of constraining the merge statement to only update rows that have actually changed?

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >