Search Results

Search found 39866 results on 1595 pages for '2012 end of the world'.


  • Unifier 10.0 released!

    - by hhata
    Unifier, which joined the Oracle Primavera product family through the 2012 acquisition of Skire, is now available in release 10.0. The highlights of 10.0 are summarized below.

    Newly supported browsers:
    Internet Explorer 9.x, 10.x and 11.x / Mozilla Firefox 24.x (ESR) / Google Chrome 30.x / Safari (Mac only) 5.1.7+

    Newly certified operating systems and platform configurations:
    MS Windows 2012, IIS8+ / Solaris 11 / Windows 8 / Oracle DB 12c / Oracle Weblogic 12c / Mac OS X 10.9 / SGC 5.0 and iPad2+ / Weblogic as Proxy / RAC, Oracle Dataguard and Hardening features

    New functionality:
    An HTML5-based user interface.
    Enhancements to the CBS (Cost Breakdown Structure) cost manager.
    Unifier Mobile, which lets users work with Unifier from a mobile device such as an iPhone.
    Bid management integration with OIM (Oracle Integration Manager), which connects the bidding process in Unifier to OIM.


  • Registration open: the "Oracle Solaris 11.1 Day" seminar

    - by OTN-J Master
    Oracle Solaris now has a history of roughly 20 years. Oracle Solaris 11.1, announced this November, adds some 300 new features and further strengthens the operating system's integration with Oracle Database. >> Related article: "Oracle Solaris 11: The First Cloud OS"

    On Friday, November 30 we will hold an "Oracle Solaris 11.1 Day" seminar. Positioned as "The First Cloud OS," Oracle Solaris 11 was designed as the operating system for cloud infrastructure and enterprise IT, and Solaris 11.1, announced at Oracle OpenWorld 2012, carries that direction forward. The seminar covers what is new in Solaris 11.1 and where the OS is heading, along with a hands-on report on Solaris 11 and information on Solaris training, so if you work with Solaris, don't miss it. >> Register now!

    Date and time: Friday, November 30, 13:00-15:30 (doors open at 12:30)
    Venue: 13F seminar room

    Agenda:
    13:30-13:40  Opening session
    13:40-14:20  [Session] Oracle Solaris 11.1: an overview of "The First Cloud OS" and of the enhancements announced at Oracle OpenWorld 2012
    14:30-15:00  [Session] A hands-on report on working with the Oracle Solaris 11 Beta
    15:00-15:20  [Session] Solaris 11 training: Oracle University, which offers classroom training courses, introduces its Solaris 11 curriculum

    >> See the seminar details


  • Enable PHP FastCGI and Get 500 Internal Server Error (Lighttpd)

    - by skycrew
    Can anyone help me? I just hit this problem today. Until now my site ran smoothly with FastCGI enabled, but now it shows a 500 Internal Server Error with the logs below. I have to disable PHP FastCGI in LxAdmin so that my visitors can access my site, but with PHP FastCGI disabled my web performance is very slow and the server load is high. I have also included a performance screenshot. What should I do? This is the error log I get:

    2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 24055
    2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 21622
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 3342 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
    2010-06-16 21:59:52: (mod_fastcgi.c.3207) child exited, pid: 3342 status: 0
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 836 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 860 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 836 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 878 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 878 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 878 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 22325
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 852 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 24032
    2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 20402
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 3336 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
    2010-06-16 21:59:52: (mod_fastcgi.c.3207) child exited, pid: 3336 status: 0
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 855 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24448 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
    2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 860 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
    2010-06-16 21:59:52: (mod_cgi.c.1231) cgi died ?
    2010-06-16 21:59:53: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24448 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
    2010-06-16 21:59:53: (mod_fastcgi.c.3254) response not received, request sent: 860 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
    2010-06-16 21:59:53: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24448 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
    2010-06-16 21:59:53: (mod_fastcgi.c.3254) response not received, request sent: 878 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
    2010-06-16 21:59:53: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24448 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
    2010-06-16 21:59:53: (mod_fastcgi.c.3254) response not received, request sent: 860 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
    2010-06-16 21:59:53: (mod_fastcgi.c.1731) connect failed: Connection refused on unix:/var/tmp/lighttpd/php.socket.lyrics-hub.com.3333-1
    2010-06-16 21:59:53: (mod_fastcgi.c.2885) backend died; we'll disable it for 5 seconds and send the request to another backend instead: reconnects: 0 load: 1
    2010-06-16 21:59:56: (server.c.1470) server stopped by UID = 0 PID = 24439
    2010-06-16 22:00:23: (log.c.75) server started

    Performance graph as below:
    http://img404.imageshack.us/img404/3498/memorylxadmin.jpg
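    For reference, a typical lighttpd + PHP FastCGI definition for this kind of setup looks roughly like the sketch below. This is illustrative only: the binary path, socket path, and process counts are assumptions rather than values taken from this server, so compare it against the configuration LxAdmin generates.

        fastcgi.server = ( ".php" =>
          (( "bin-path"  => "/usr/bin/php-cgi",      # assumed location of the PHP CGI binary
             "socket"    => "/var/tmp/lighttpd/php.socket",
             "max-procs" => 2,                       # watcher processes lighttpd spawns
             "bin-environment" => (
               "PHP_FCGI_CHILDREN"     => "16",      # PHP children per watcher
               "PHP_FCGI_MAX_REQUESTS" => "10000"    # recycle children periodically
             )
          ))
        )

    The repeated "unexpected end-of-file (perhaps the fastcgi process died)" entries in the log above generally mean the php-cgi children really are dying (crashes, memory limits, or the OOM killer are common causes), so checking the children's memory use and the PHP_FCGI_CHILDREN / max-procs values is a reasonable first step.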


  • Moving data files failing

    - by Miles Hayler
    Trying to migrate data from C: to D: via the SBS console is failing. The wizard starts running but drops out in the first few seconds. I'll post the full logs, but the important lines appear to be as follows:

        An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred.
        Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
        Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path)
               at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
        BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: GetServerBackupTaskStatus: fail to find the task --- ErrorCode:0

    I've been googling for days with no luck. I have found that mscorlib is a component of .NET, and I've discovered multiple instances of the file in %windir%, %windir%\winsxs, and %windir%\Microsoft.net. Has anyone come across and fixed this one before?

    ---------------------------------------------------------
    [1516] 110315.190856.1105: Storage: Initializing...C:\Program Files\Windows Small Business Server\Bin\MoveData.exe
    [1516] 110315.190856.2875: Storage: Data Store to be moved: Exchange
    [1516] 110315.190856.5305: TaskScheduler: Exception System.IO.FileNotFoundException:
    [1516] 110315.190856.5605: Exception: ---------------------------------------
    An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred.
    Timestamp: 03/15/2011 19:08:56
    Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
    Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path)
           at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
    [1516] 110315.190856.5625: Storage: Exception Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException:
    [1516] 110315.190856.5635: Exception: ---------------------------------------
    An exception of type 'Type: Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException, Common, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' has occurred.
    Timestamp: 03/15/2011 19:08:56
    Message: Failed to find the task path
    Stack: at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
           at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus()
    ---------------------------------------
    An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred.
    Timestamp: 03/15/2011 19:08:56
    Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
    Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path)
           at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
    [1516] 110315.190856.5665: Storage: Error Retrieving Server Backup Task Status: ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: GetServerBackupTaskStatus: fail to find the task ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException: Failed to find the task path ---> System.IO.FileNotFoundException: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
           at TaskScheduler.TaskSchedulerClass.GetFolder(String Path)
           at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
           --- End of inner exception stack trace ---
           at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
           at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus()
           --- End of inner exception stack trace ---
           at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus()
           at Microsoft.WindowsServerSolutions.Storage.MoveData.Helper.get_ServerBackupTaskState()
    [1516] 110315.190857.6216: Storage: Backup Task State: Unknown
    [1516] 110315.190857.9347: Storage: Launching the Move Data Wizard!
    [1516] 110315.190857.9397: Wizard: Admin:QueryNextPage(null) = Storage.MoveDataWizard.GettingStartedPage
    [1516] 110315.190857.9417: Wizard: TOC Storage.MoveDataWizard.GettingStartedPage is on ExpectedPath
    [1516] 110315.190857.9577: Wizard: Storage.MoveDataWizard.GettingStartedPage entered
    [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.GettingStartedPage) = Storage.MoveDataWizard.DiagnoseDataStorePage
    [1516] 110315.190857.9657: Wizard: TOC Storage.MoveDataWizard.DiagnoseDataStorePage is on ExpectedPath
    [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.DiagnoseDataStorePage) = Storage.MoveDataWizard.NewDataStoreLocationPage
    [1516] 110315.190857.9657: Wizard: TOC Storage.MoveDataWizard.NewDataStoreLocationPage is on ExpectedPath
    [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.NewDataStoreLocationPage) = null
    [1516] 110315.190857.9697: Wizard: ----------------------------------
    [1516] 110315.190857.9697: Wizard: The pages visted:
    [1516] 110315.190857.9697: Wizard: Current Page := [TOC Storage.MoveDataWizard.GettingStartedPage]
    [1516] 110315.190857.9697: Wizard: [TOC] : TOC Storage.MoveDataWizard.DiagnoseDataStorePage
    [1516] 110315.190857.9697: Wizard: [TOC] : TOC Storage.MoveDataWizard.NewDataStoreLocationPage
    [1516] 110315.190857.9697: Wizard: Step 1 of 3
    [1516] 110315.190907.0406: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.GettingStartedPage) = Storage.MoveDataWizard.DiagnoseDataStorePage
    [1516] 110315.190907.0416: Wizard: Storage.MoveDataWizard.GettingStartedPage exited with the button: Next
    [1516] 110315.190907.0416: WizardChainEngine Next Clicked: Going to page {0}.: Storage.MoveDataWizard.DiagnoseDataStorePage
    [1516] 110315.190907.0496: Wizard: Storage.MoveDataWizard.DiagnoseDataStorePage entered
    [1516] 110315.190907.0606: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.DiagnoseDataStorePage) = Storage.MoveDataWizard.NewDataStoreLocationPage
    [1516] 110315.190907.0606: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.NewDataStoreLocationPage) = null
    [1516] 110315.190907.0606: Wizard: ----------------------------------
    [1516] 110315.190907.0606: Wizard: The pages visted:
    [1516] 110315.190907.0606: Wizard: [TOC] visited: TOC Storage.MoveDataWizard.GettingStartedPage
    [1516] 110315.190907.0606: Wizard: Current Page := [TOC Storage.MoveDataWizard.DiagnoseDataStorePage]
    [1516] 110315.190907.0616: Wizard: [TOC] : TOC Storage.MoveDataWizard.NewDataStoreLocationPage
    [1516] 110315.190907.0616: Wizard: Step 2 of 3
    [19772] 110315.190907.0656: Storage: Starting System Diagnosis
    [19772] 110315.190907.0656: Storage: Getting Data Store Information
    [19772] 110315.190907.1086: Storage: Create the list of storage and DB directory path
    [19772] 110315.190907.1246: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks..ctor
    [19772] 110315.190907.1546: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.Initialize
    [19772] 110315.190907.1596: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190907.1606: Messaging: Exchange install path: C:\Program Files\Microsoft\Exchange Server\bin
    [19772] 110315.190908.4157: Messaging: E12 Monad runspace created ID: Microsoft.PowerShell
    [19772] 110315.190908.4237: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190908.4287: Messaging: Executed management shell command: get-exchangeserver
    [19772] 110315.190910.2369: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190910.2369: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190910.5699: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.GatherAdminInfo
    [19772] 110315.190910.5699: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190910.5719: Messaging: Executed management shell command: get-user -Identity "dmagroup.local\Administrator"
    [19772] 110315.190911.0870: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.0880: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.0880: Messaging: Executed management shell command: get-mailbox -Identity "d2ae2bf0-48a7-4ce9-9e72-bb3c765454ac"
    [19772] 110315.190911.1300: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.1310: Messaging: User Administrator is mail enabled and can use MessagingManagement to send mail.
    [19772] 110315.190911.1310: Messaging: Email address used for user: [email protected]
    [19772] 110315.190911.1440: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.1440: Messaging: Executed management shell command: get-group -Identity "Domain Admins"
    [19772] 110315.190911.1630: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.1640: Messaging: User Administrator is a member of Domain Admins and can use MessagingManagement to manage Exchange.
    [19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.GatherAdminInfo
    [19772] 110315.190911.1640: Messaging: MessagingManagement enabled for Exchange management: True
    [19772] 110315.190911.1640: Messaging: MessagingManagement enabled for mail submission: True
    [19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.Initialize
    [19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Tasks.TaskMoveExchangeData.CreateDataStoreDriveList
    [19772] 110315.190911.1670: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190911.1670: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.1670: Messaging: Executed management shell command: get-storagegroup -Server "SERVER01"
    [19772] 110315.190911.2990: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.3070: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190911.3070: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.3070: Messaging: Executed management shell command: get-mailboxdatabase -Server "SERVER01"
    [19772] 110315.190911.4440: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.4520: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190911.4520: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.4520: Messaging: Executed management shell command: get-publicfolderdatabase -Server "SERVER01"
    [19772] 110315.190911.5240: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.5510: Storage: Data Store Drive/s Details:Name=C:\,Size=12675712420
    [19772] 110315.190911.5510: Storage: Data Store Size Details: Current Total Size=12675712420 Required Size=12675712420
    [19772] 110315.190911.5510: Storage: MoveData Task can move the Data Store=True
    [19772] 110315.190911.8401: Storage: An error was encountered when performing system diagnosis : ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: WMI error occurred while accessing drive ---> System.Management.ManagementException: Not found
           at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
           at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
           at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
           --- End of inner exception stack trace ---
           at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
           at Microsoft.WindowsServerSolutions.Storage.Common.DataStoreInfo.LoadAvailableDrives()
           at Microsoft.WindowsServerSolutions.Storage.Common.MoveDataUtil.CanMoveData(DataStoreInfo storeInfo, MoveDataError& error)
           at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.DiagnoseDataStore(Object sender, DoWorkEventArgs args)
    [1516] 110315.190912.0331: Storage: An error occured during the execution: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: Diagnosing the Data Store failed (see the inner exception) ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: WMI error occurred while accessing drive ---> System.Management.ManagementException: Not found
           at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
           at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
           at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
           --- End of inner exception stack trace ---
           at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
           at Microsoft.WindowsServerSolutions.Storage.Common.DataStoreInfo.LoadAvailableDrives()
           at Microsoft.WindowsServerSolutions.Storage.Common.MoveDataUtil.CanMoveData(DataStoreInfo storeInfo, MoveDataError& error)
           at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.DiagnoseDataStore(Object sender, DoWorkEventArgs args)
           at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
           --- End of inner exception stack trace ---
           at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.backgroundWorker_RunWorkerCompleted(Object sender, RunWorkerCompletedEventArgs e)
           --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
           at System.Delegate.DynamicInvokeImpl(Object[] args)
           at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
           at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
           at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
           at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
           at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
           at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
           at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
           at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardFrameView.Create()
           at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch()
           at Microsoft.WindowsServerSolutions.Storage.MoveData.MainClass.LaunchMoveDataWizard()
           at Microsoft.WindowsServerSolutions.Storage.MoveData.MainClass.Main(String[] args)
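    Two different failures appear in this trace, and it helps to separate them: the FileNotFoundException at the top is the Task Scheduler API failing to find the server backup task, while the error that actually aborts the wizard is the WMI "Not found" thrown from DriveUtil.IsDriveRemovable while the wizard enumerates drives. A generic way to check whether WMI can answer drive-type queries at all (not the exact query the wizard runs, just the same class of lookup) is:

        rem List every logical disk WMI knows about together with its type
        rem (3 = local disk, 2 = removable). An error here, or a drive missing
        rem from the output, points at a WMI problem rather than at the SBS console.
        wmic logicaldisk get DeviceID,DriveType,VolumeName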


  • Exporting bind and keyframe bone poses from Blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts), or I'm missing something in the math. It's going to be long, but here's as much background as I can think of.

    First, a few notes and assumptions:

    - I'm using column-major order and multiply from right to left. So, for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means that whenever I export a matrix from Blender through Python, I export it (in text format) in 4 lines, each representing a column, so I can read them back into my engine like this:

        if (fscanf(fileHandle, "%f %f %f %f",
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[0],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[1],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[2],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) {
            if (fscanf(fileHandle, "%f %f %f %f",
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[4],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[5],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[6],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) {
                if (fscanf(fileHandle, "%f %f %f %f",
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[8],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[9],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[10],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) {
                    if (fscanf(fileHandle, "%f %f %f %f",
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[12],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[13],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[14],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) {

    - I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code.

    Having said that, if I understand correctly, the basic idea of skinning/animation is:

    - I have a mesh made up of vertices.
    - I have the mesh model-world transform W.
    - I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj, meaning the matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1.
    - I have keyframes, each containing a set of current poses Cj for each joint j. These are initially imported to my engine in TQS format, but after (S)LERPing them I compose them into Cj matrices which are equivalent to the Bj's (not the Bj^-1 ones), only for the current spatial configuration of each joint at that frame.

    Given the above, the skeletal animation algorithm is, on each frame:

    - Check how much time has elapsed and compute the resulting current time in the animation, from 0 (meaning frame 0) to 1 (meaning the end of the animation). (Oh, and I'm looping forever, so the time is mod(total duration).)
    - For each joint:
      1 - calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1
      2 - use the current animation time to LERP the components of the TQS and come up with an interpolated current pose matrix Cj, which should transform from the joint's current configuration space to world space.
          Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj.
      3 - now that I have world versions of Bj and Cj, I store this joint's world skinning matrix K_wj = Cj_w Bj_w^-1.

    The above is roughly implemented like so:

    - (void)update:(NSTimeInterval)elapsedTime {
        static double time = 0;
        time = fmod((time + elapsedTime), 1.);
        uint16_t LERPKeyframeNumber = 60 * time;
        uint16_t lkeyframeNumber = 0;
        uint16_t lkeyframeIndex = 0;
        uint16_t rkeyframeNumber = 0;
        uint16_t rkeyframeIndex = 0;
        for (int i = 0; i < aClip.keyframesCount; i++) {
            uint16_t keyframeNumber = aClip.keyframes[i].number;
            if (keyframeNumber <= LERPKeyframeNumber) {
                lkeyframeIndex = i;
                lkeyframeNumber = keyframeNumber;
            }
            else {
                rkeyframeIndex = i;
                rkeyframeNumber = keyframeNumber;
                break;
            }
        }
        double lTime = lkeyframeNumber / 60.;
        double rTime = rkeyframeNumber / 60.;
        double blendFactor = (time - lTime) / (rTime - lTime);
        GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
        GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];
        for (int i = 0; i < aSkeleton.jointsCount; i++) {
            F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i];
            F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i];
            GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
            GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
            GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);
            GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
            currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z));
            currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z));
            if (aSkeleton.joints[i].parentIndex == -1) {
                bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform;
                currentPosePalette[i] = currentTransform;
            }
            else {
                bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]);
                currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform);
            }
            aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
        }
    }

    At this point, I should have my skinning palette. So on each frame in my vertex shader, I do:

    uniform mat4 modelMatrix;
    uniform mat4 projectionMatrix;
    uniform mat3 normalMatrix;
    uniform mat4 skinningPalette[6];

    attribute vec4 position;
    attribute vec3 normal;
    attribute vec2 tCoordinates;
    attribute vec4 jointsWeights;
    attribute vec4 jointsIndices;

    varying highp vec2 tCoordinatesVarying;
    varying highp float lIntensity;

    void main() {
        vec3 eyeNormal = normalize(normalMatrix * normal);
        vec3 lightPosition = vec3(0., 0., 2.);
        lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));
        tCoordinatesVarying = tCoordinates;
        vec4 skinnedVertexPosition = vec4(0.);
        for (int i = 0; i < 4; i++) {
            skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
        }
        gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition;
    }

    The result: the mesh parts that are supposed to animate do animate and follow the expected motion; however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of the rotations seem to be off.
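    To recap the relationships the update: method above is meant to implement (same symbols as in the question; column-major matrices multiplying right to left):

        B_{j,w}^{-1} = B_j^{-1} B_{j-1}^{-1} \cdots B_0^{-1}
        C_{j,w} = C_0 C_1 \cdots C_j
        K_{j,w} = C_{j,w} B_{j,w}^{-1}

    and the shader then skins each vertex with its (up to) four influences:

        v' = \sum_{i=1}^{4} w_i K_{j_i,w} v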
    So, a few observations: in the above shader, notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen.

    As far as exporting the joints from Blender goes, my Python script exports, for each armature bone in bind pose, its matrix in this way:

    def DFSJointTraversal(file, skeleton, jointList):
        for joint in jointList:
            poseJoint = skeleton.pose.bones[joint.name]
            jointTransform = poseJoint.matrix.inverted()
            file.write('Joint ' + joint.name + ' Transform {\n')
            for col in jointTransform.col:
                file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
            DFSJointTraversal(file, skeleton, joint.children)
            file.write('}\n')

    And for current / keyframe poses (assuming I'm in the right keyframe):

    def exportAnimations(filepath):
        # Only one skeleton per scene
        objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE']
        if len(objList) == 0:
            return
        elif len(objList) > 1:
            return  # raise exception? dialog box?
        skeleton = objList[0]
        jointNames = [bone.name for bone in skeleton.data.bones]
        for action in bpy.data.actions:
            # One animation clip per action in Blender, named as the action
            animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip"
            file = open(animationClipFilePath, 'w')
            file.write('target skeleton: ' + skeleton.name + '\n')
            file.write('joints count: {:d}'.format(len(jointNames)) + '\n')
            skeleton.animation_data.action = action
            keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves])
            keyframes = []
            for fcurve in action.fcurves:
                for keyframe in fcurve.keyframe_points:
                    keyframes.append(keyframe.co[0])
            keyframes = set(keyframes)
            keyframes = [kf for kf in keyframes]
            keyframes.sort()
            file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n')
            for kfIndex in keyframes:
                bpy.context.scene.frame_set(kfIndex)
                file.write('keyframe: {:d}\n'.format(int(kfIndex)))
                for i in range(0, len(skeleton.data.bones)):
                    file.write('joint: {:d}\n'.format(i))
                    joint = skeleton.pose.bones[i]
                    jointCurrentPoseTransform = joint.matrix
                    translationV = jointCurrentPoseTransform.to_translation()
                    rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion()
                    scaleV = jointCurrentPoseTransform.to_scale()
                    file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
                    file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
                    file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
                file.write('\n')
            file.close()

    which I believe follows the theory explained at the beginning of my question. But then I checked out Blender's DirectX .x exporter for reference...
    and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used, so you can compare):

    if joint.parent:
        jointTransform = poseJoint.parent.matrix.inverted()
    else:
        jointTransform = Matrix()
    jointTransform *= poseJoint.matrix

    and exporting current keyframe poses like this:

    if joint.parent:
        jointCurrentPoseTransform = joint.parent.matrix.inverted()
    else:
        jointCurrentPoseTransform = Matrix()
    jointCurrentPoseTransform *= joint.matrix

    Why are they using the parent's transform instead of that of the joint in question? Isn't the joint transform assumed to exist in the context of a parent transform, since after all it transforms from this joint's space to its parent's? And why are they concatenating in the same order for both bind poses and keyframe poses, if these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.
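    One Blender API convention is worth double-checking here (offered as a hint, not as a verified diagnosis of the exporter): in Blender's Python API, a pose bone's matrix attribute is expressed in armature (object) space rather than relative to the parent bone, which would explain why the .x exporter multiplies by the parent's inverse. A minimal sketch of the distinction, using a hypothetical pose_bone variable:

    # pose_bone.matrix is the bone's final transform in armature (object)
    # space, i.e. it already includes all of the parents' transforms.
    armature_space = pose_bone.matrix

    # The parent-relative transform (what the question calls Bj or Cj)
    # is recovered by removing the parent's armature-space transform:
    if pose_bone.parent:
        parent_relative = pose_bone.parent.matrix.inverted() * pose_bone.matrix
    else:
        parent_relative = pose_bone.matrix.copy()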


  • Need some help on how to replay the last game of a Java maze game

    - by Marty
    Hello, I am working on creating a Java maze game for a project. The maze is displayed on the console as standard output, not in an applet. I have created most of the code I need; however, I am stuck on one problem: I need the user to be able to replay the last game, i.e. redraw the maze with the user's moves but without any input from the user. I am not sure what course of action to take. I was thinking about copying each of the user's moves, or the position of each move, into another array. As you can see, I have 2 variables which hold the position of the player, plyrX and plyrY. Do you think copying these values into a new array after each move would solve my problem, and how would I go about it? (A minimal sketch of this record-and-replay idea appears after the code below.) I have updated my code; apologies that the TextIO.java class is not present, I am not sure how to resolve that except by posting a link to TextIO.java [TextIO.java][1]. My code below is updated with a new array of type char to hold values from the original maze (read in from a text file and displayed using Unicode characters) and also two new variables, c_plyrX and c_plyrY, which I am thinking should hold the values of plyrX and plyrY and copy them into the new array. When I try to call the replayGame(); method from the menu, the maze loads for a second, then the console exits, so I am not sure what I am doing wrong. Thanks

    public class MazeGame {
        //unicode characters that will define the maze walls,
        //pathways, and in game characters.
        final static char WALL = '\u2588'; //wall
        final static char PATH = '\u2591'; //pathway
        final static char PLAYER = '\u25EF'; //player
        final static char ENTRANCE = 'E'; //entrance
        final static char EXIT = '\u2716'; //exit

        //declaring member variables which will hold the maze co-ordinates
        //X = rows, Y = columns
        static int entX = 0; //entrance X co-ordinate
        static int entY = 1; //entrance y co-ordinate
        static int plyrX = 0;
        static int plyrY = 1;
        static int exitX = 24; //exit X co-ordinate
        static int exitY = 37; //exit Y co-ordinate

        //static member variables which hold maze values
        //used so values can be accessed from different methods
        static int rows; //rows variable
        static int cols; //columns variable
        static char[][] maze; //defines 2 dimensional array to hold the maze

        //variables that hold player movement values
        static char dir; //direction
        static int spaces; //amount of spaces user can travel

        //variable to hold amount of moves the user has taken;
        static int movesTaken = 0;

        //new array to hold player moves for replaying game
        static char[][] mazeCopy;
        static int c_plyrX;
        static int c_plyrY;

        /** userMenu method for displaying the user menu which will provide various options for
         * the user to choose such as play a maze game, get instructions, etc.
         */
        public static void userMenu(){
            TextIO.putln("Maze Game");
            TextIO.putln("*********");
            TextIO.putln("Choose an option.");
            TextIO.putln("");
            TextIO.putln("1. Play the Maze Game.");
            TextIO.putln("2. View Instructions.");
            TextIO.putln("3. Replay the last game.");
            TextIO.putln("4. Exit the Maze Game.");
            TextIO.putln("");
            int option; //variable for holding users option
            TextIO.put("Type your choice: ");
            option = TextIO.getlnInt(); //gets users option
            //switch statement for processing menu options
            switch(option){
            case 1: playMazeGame();
            case 2: instructions();
            case 3: if (c_plyrX == plyrX && c_plyrY == plyrY) replayGame();
                    else {
                        TextIO.putln("Option not available yet, you need to play a game first.");
                        TextIO.putln();
                        userMenu();
                    }
            case 4: System.exit(0); //exits the user out of the console
            default: TextIO.put("Option must be 1, 2, 3 or 4");
            }
        } //end of userMenu

        /** main method, will call the userMenu and get the users choice and call
         * the relevant method to execute the users choice.
         */
        public static void main(String[] args){
            userMenu(); //calls the userMenu method
        } //end of main method

        /** instructions method, displays instructions on how to play
         * the game to the user.
         */
        public static void instructions(){
            TextIO.putln("To beat the Maze Game you have to move your character");
            TextIO.putln("through the maze and reach the exit in as few moves as possible.");
            TextIO.putln("");
            TextIO.putln("Your characer is displayed as a " + PLAYER);
            TextIO.putln("The maze exit is displayed as a " + EXIT);
            TextIO.putln("Reach the exit and you have won escaped the maze.");
            TextIO.putln("To control your character type the direction you want to go");
            TextIO.putln("and how many spaces you want to move");
            TextIO.putln("for example 'D3' will move your character");
            TextIO.putln("down 3 spaces.");
            TextIO.putln("Remember you can't walk through walls!");
            boolean insOption; //boolean variable
            TextIO.putln("");
            TextIO.put("Do you want to play the Maze Game now? (Y or N) ");
            insOption = TextIO.getlnBoolean();
            if (insOption == true) playMazeGame();
            else userMenu();
        } //end of instructions method

        /** playMazeGame method, calls the loadMaze method and the charMove method
         * to start playing the Maze Game.
         */
        public static void playMazeGame(){
            loadMaze();
            plyrMoves();
        } //end of playMazeGame method

        /** loadMaze method, loads the 39x25 maze from the MazeGame.txt text file
         * and inserts values from the text file into the maze array and
         * displays the maze on screen using the unicode block characters.
         * plyrX and plyrY variables are set at their starting co-ordinates so that when
         * a game is completed and the user selects to play a new game
         * the player character will always be at position 01.
         */
        public static void loadMaze(){
            plyrX = 0;
            plyrY = 1;
            TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file
            rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions
            cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions
            maze = new char[rows][cols]; //creates maze array of base type char with specified dimensions
            //loop to process the array and read in values from the text file.
            for (int i = 0; i < rows; i++){
                for (int j = 0; j < cols; j++){
                    maze[i][j] = TextIO.getChar();
                }
                TextIO.getln();
            } //end for loop
            TextIO.readStandardInput(); //closes MazeGame.txt file and reads from standard input.
            //loop to process the array values and display as unicode characters
            for (int i = 0; i < rows; i++){
                for (int j = 0; j < cols; j++){
                    if (i == plyrX && j == plyrY){
                        plyrX = i;
                        plyrY = j;
                        TextIO.put(PLAYER); //puts the player character at player co-ords
                    }
                    else{
                        if (maze[i][j] == '0') TextIO.putf("%c", WALL); //puts wall block
                        if (maze[i][j] == '1') TextIO.putf("%c", PATH); //puts path block
                        if (maze[i][j] == '2') {
                            entX = i;
                            entY = j;
                            TextIO.putf("%c", ENTRANCE); //puts entrance character
                        }
                        if (maze[i][j] == '3') {
                            exitX = i; //holds value of exit
                            exitY = j; //co-ordinates
                            TextIO.putf("%c", EXIT); //puts exit character
                        }
                    }
                }
                TextIO.putln();
            } //end for loop
        } //end of loadMaze method

        /** redrawMaze method, method for redrawing the maze after each move.
         */
        public static void redrawMaze(){
            TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file
            rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions
            cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions
            maze = new char[rows][cols]; //creates maze array of base type char with specified dimensions
            //loop to process the array and read in values from the text file.
            for (int i = 0; i < rows; i++){
                for (int j = 0; j < cols; j++){
                    maze[i][j] = TextIO.getChar();
                }
                TextIO.getln();
            } //end for loop
            TextIO.readStandardInput(); //closes MazeGame.txt file and reads from standard input.
            //loop to process the array values and display as unicode characters
            for (int i = 0; i < rows; i++){
                for (int j = 0; j < cols; j++){
                    if (i == plyrX && j == plyrY){
                        plyrX = i;
                        plyrY = j;
                        TextIO.put(PLAYER); //puts the player character at player co-ords
                    }
                    else{
                        if (maze[i][j] == '0') TextIO.putf("%c", WALL); //puts wall block
                        if (maze[i][j] == '1') TextIO.putf("%c", PATH); //puts path block
                        if (maze[i][j] == '2') {
                            entX = i;
                            entY = j;
                            TextIO.putf("%c", ENTRANCE); //puts entrance character
                        }
                        if (maze[i][j] == '3') {
                            exitX = i; //holds value of exit
                            exitY = j; //co-ordinates
                            TextIO.putf("%c", EXIT); //puts exit character
                        }
                    }
                }
                TextIO.putln();
            } //end for loop
        } //end redrawMaze method

        /** replayGame method
         */
        public static void replayGame(){
            c_plyrX = plyrX;
            c_plyrY = plyrY;
            TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file
            rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions
            cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions
            mazeCopy = new char[rows][cols]; //creates maze array of base type char with specified dimensions
            //loop to process the array and read in values from the text file.
            for (int i = 0; i < rows; i++){
                for (int j = 0; j < cols; j++){
                    mazeCopy[i][j] = TextIO.getChar();
                }
                TextIO.getln();
            } //end for loop
            TextIO.readStandardInput(); //closes MazeGame.txt file and reads from standard input.
            //loop to process the array values and display as unicode characters
            for (int i = 0; i < rows; i++){
                for (int j = 0; j < cols; j++){
                    if (i == c_plyrX && j == c_plyrY){
                        c_plyrX = i;
                        c_plyrY = j;
                        TextIO.put(PLAYER); //puts the player character at player co-ords
                    }
                    else{
                        if (mazeCopy[i][j] == '0') TextIO.putf("%c", WALL); //puts wall block
                        if (mazeCopy[i][j] == '1') TextIO.putf("%c", PATH); //puts path block
                        if (mazeCopy[i][j] == '2') {
                            entX = i;
                            entY = j;
                            TextIO.putf("%c", ENTRANCE); //puts entrance character
                        }
                        if (mazeCopy[i][j] == '3') {
                            exitX = i; //holds value of exit
                            exitY = j; //co-ordinates
                            TextIO.putf("%c", EXIT); //puts exit character
                        }
                    }
                }
                TextIO.putln();
            } //end for loop
        } //end replayGame method

        /** plyrMoves method, method for moving the players character
         * around the maze.
         */
        public static void plyrMoves(){
            int nplyrX = plyrX;
            int nplyrY = plyrY;
            int pMoves;
            direction();
            //UP
            if (dir == 'U' || dir == 'u'){
                nplyrX = plyrX;
                nplyrY = plyrY;
                for (pMoves = 0; pMoves <= spaces; pMoves++){
                    if (maze[nplyrX][nplyrY] == '0'){
                        TextIO.putln("Invalid move, try again.");
                    }
                    else if (pMoves != spaces){
                        nplyrX = plyrX + 1;
                    }
                    else {
                        plyrX = plyrX - spaces;
                        c_plyrX = plyrX;
                        movesTaken++;
                    }
                }
            } //end UP if
            //DOWN
            if (dir == 'D' || dir == 'd'){
                nplyrX = plyrX;
                nplyrY = plyrY;
                for (pMoves = 0; pMoves <= spaces; pMoves++){
                    if (maze[nplyrX][nplyrY] == '0'){
                        TextIO.putln("Invalid move, try again");
                    }
                    else if (pMoves != spaces){
                        nplyrX = plyrX + 1;
                    }
                    else{
                        plyrX = plyrX + spaces;
                        c_plyrX = plyrX;
                        movesTaken++;
                    }
                }
            } //end DOWN if
            //LEFT
            if (dir == 'L' || dir == 'l'){
                nplyrX = plyrX;
                nplyrY = plyrY;
                for (pMoves = 0; pMoves <= spaces; pMoves++){
                    if (maze[nplyrX][nplyrY] == '0'){
                        TextIO.putln("Invalid move, try again");
                    }
                    else if (pMoves != spaces){
                        nplyrY = plyrY + 1;
                    }
                    else{
                        plyrY = plyrY - spaces;
                        c_plyrY = plyrY;
                        movesTaken++;
                    }
                }
            } //end LEFT if
            //RIGHT
            if (dir == 'R' || dir == 'r'){
                nplyrX = plyrX;
                nplyrY = plyrY;
                for (pMoves = 0; pMoves <= spaces; pMoves++){
                    if (maze[nplyrX][nplyrY] == '0'){
                        TextIO.putln("Invalid move, try again.");
                    }
                    else if (pMoves != spaces){
                        nplyrY += 1;
                    }
                    else{
                        plyrY = plyrY + spaces;
                        c_plyrY = plyrY;
                        movesTaken++;
                    }
                }
            } //end RIGHT if
            //prints message if player escapes from the maze.
            if (maze[plyrX][plyrY] == '3'){
                TextIO.putln("****Congratulations****");
                TextIO.putln();
                TextIO.putln("You have escaped from the maze.");
                TextIO.putln();
                userMenu();
            }
            else{
                movesTaken++;
                redrawMaze();
                plyrMoves();
            }
        } //end of plyrMoves method

        /** direction method
         */
        public static char direction(){
            TextIO.putln("Enter the direction you wish to move in and the distance");
            TextIO.putln("i.e D3 = move down 3 spaces");
            TextIO.putln("U - Up, D - Down, L - Left, R - Right: ");
            dir = TextIO.getChar();
            if (dir == 'U' || dir == 'D' || dir == 'L' || dir == 'R' || dir == 'u' || dir == 'd' || dir == 'l' || dir == 'r'){
                spacesMoved();
            }
            else{
                loadMaze();
                TextIO.putln("Invalid direction!");
                TextIO.put("Direction must be one of U, D, L or R");
                direction();
            }
            return dir; //returns the value of dir (direction)
        } //end direction method

        /** spacesMoved method, gets the amount of spaces the user wants to move
         */
        public static int spacesMoved(){
            TextIO.putln(" ");
            spaces = TextIO.getlnInt();
            if (spaces <= 0){
                loadMaze();
                TextIO.put("Invalid amount of spaces, try again");
                spacesMoved();
            }
            return spaces;
        } //end spacesMoved method
    } //end of MazeGame class
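    As mentioned above, here is a minimal sketch of the record-and-replay idea. It is not a drop-in patch for the class above: Move and moveHistory are hypothetical names, and the commented lines show where your existing plyrX/plyrY and redrawMaze() would plug in. The point is simply to append the player's position after every successful move and then walk the list back without user input:

    import java.util.ArrayList;
    import java.util.List;

    public class ReplayExample {

        /** One recorded move: the player's position after the move. */
        static class Move {
            final int x, y;
            Move(int x, int y) { this.x = x; this.y = y; }
        }

        // History of positions, appended to after every successful move.
        static final List<Move> moveHistory = new ArrayList<Move>();

        /** Call right after plyrX/plyrY are updated in plyrMoves(). */
        static void recordMove(int plyrX, int plyrY) {
            moveHistory.add(new Move(plyrX, plyrY));
        }

        /** Replay the last game: revisit each recorded position in order. */
        static void replayLastGame() {
            for (Move m : moveHistory) {
                // plyrX = m.x; plyrY = m.y;  // restore the recorded position
                // redrawMaze();              // redraw with no user input
                System.out.println("player at (" + m.x + ", " + m.y + ")");
            }
        }

        public static void main(String[] args) {
            recordMove(0, 1);
            recordMove(0, 2);
            replayLastGame();
        }
    }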


  • Learning AngularJS by Example – The Customer Manager Application

    - by dwahlin
    I'm always tinkering around with different ideas, and toward the beginning of 2013 I decided to build a sample application using AngularJS that I call Customer Manager. It's not exactly the most creative name or concept, but I wanted to build something that highlighted a lot of the different features offered by AngularJS and how they could be used together to build a full-featured app. One of the goals of the application was to ensure that it was approachable by people new to Angular, since I've never found overly complex applications great for learning new concepts. The application initially started out small and was used in my AngularJS in 60-ish Minutes video on YouTube, but it has gradually had more and more features added to it and will continue to be enhanced over time. It'll be used in a new "end-to-end" training course my company is working on for AngularJS, as well as in some video courses that will be coming out. Here's a quick look at what the application home page looks like:

    In this post I'm going to provide an overview of how the application is organized, the back-end options that are available, and some of the features it demonstrates. I've already written about some of the features, so if you're interested check out the following posts:

    Building an AngularJS Modal Service
    Building a Custom AngularJS Unique Value Directive
    Using an AngularJS Factory to Interact with a RESTful Service

    Application Structure

    The structure of the application is shown to the right. The homepage is index.html and is located at the root of the application folder. It defines where application views will be loaded using the ng-view directive, and it includes script references to AngularJS, the AngularJS routing and animation scripts, plus a few others located in the Scripts folder, and to custom application scripts located in the app folder. The app folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files, but after experimenting with several of them I decided that I prefer things in folders such as controllers, views, services, etc. Doing that helps me find things a lot faster and allows me to categorize files (such as controllers) by functionality. My recommendation is to go with whatever works best for you. Anyone who says, "You're doing it wrong!" should be ignored. Contrary to what some people think, there is no "one right way" to organize scripts and other files. As long as the scripts make it down to the client properly (you'll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls), the way you organize them is completely up to you. Here's what I ended up doing for this application:

    Animation code for some custom animations is located in the animations folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd party script called GreenSock.
    Controllers are located in the controllers folder. Some of the controllers are placed in subfolders based upon their functionality, while others are placed at the root of the controllers folder since they're more generic.
    The directives folder contains the custom directives created for the application.
    The filters folder contains the custom filters created for the application that filter city/state and product information.
    The partials folder contains partial views. This includes things like modal dialogs used in the application.
    - The services folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data functionality.
    - The views folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality.

    Back-End Services

    The Customer Manager application (grab it from Github) provides two different options on the back-end: ASP.NET Web API and Node.js. The ASP.NET Web API back-end uses Entity Framework for data access and stores data in SQL Server (LocalDb). The other option on the back-end is Node.js, Express, and MongoDB.

    Using the ASP.NET Web API Back-End

    To run the application using the ASP.NET Web API/SQL Server back-end, open the .sln file at the root of the project in Visual Studio 2012 or higher (the free Express 2013 for Web version is fine). Press F5 and a browser will automatically launch and display the application.

    Using the Node.js Back-End

    To run the application using the Node.js/MongoDB back-end, follow these steps:

    1. In the CustomerManager directory execute 'npm install' to install Express, MongoDB and Mongoose (package.json).
    2. Load sample data into MongoDB by performing the following steps:
       - Execute 'mongod' to start the MongoDB daemon.
       - Navigate to the CustomerManager directory (the one that has initMongoCustData.js in it), then execute 'mongo' to start the MongoDB shell.
       - Enter the following in the mongo shell to load the seed files that handle seeding the database with initial data:
         use custmgr
         load("initMongoCustData.js")
         load("initMongoSettingsData.js")
         load("initMongoStateData.js")
    3. Start the Node/Express server by navigating to the CustomerManager/server directory and executing 'node app.js'.
    4. View the application at http://localhost:3000 in your browser.

    Key Features

    The Customer Manager application certainly doesn’t cover every feature provided by AngularJS (as mentioned, the intent was to keep it as simple as possible) but does provide insight into several key areas:

    - Using factories and services as reusable data services (see the app/services folder)
    - Creating custom directives (see the app/directives folder)
    - Custom paging (see app/views/customers/customers.html and app/controllers/customers/customersController.js)
    - Custom filters (see app/filters)
    - Showing custom modal dialogs with a reusable service (see app/services/modalService.js)
    - Making Ajax calls using a factory (see app/services/customersService.js; a minimal sketch of this pattern appears at the end of this post)
    - Using Breeze to retrieve and work with data (see app/services/customersBreezeService.js). Switch the application to use the Breeze factory by opening app/services.config.js and changing the useBreeze property to true.
    - Intercepting HTTP requests to display a custom overlay during Ajax calls (see app/directives/wcOverlay.js)
    - Custom animations using the GreenSock library (see app/animations/listAnimations.js)
    - Creating custom AngularJS animations using CSS (see Content/animations.css)
    - JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder)
    - Card View and List View display of data (see app/views/customers/customers.html and app/controllers/customers/customersController.js)
    - Using AngularJS validation functionality (see app/views/customerEdit.html, app/controllers/customerEditController.js, and app/directives/wcUnique.js)
    - More…

    Conclusion

    I’ll be enhancing the application even more over time and welcome contributions as well.
Tony Quinn contributed the initial Node.js/MongoDB code which is very cool to have as a back-end option. Access the standard application here and a version that has custom routing in it here. Additional information about the custom routing can be found in this post.
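    To make the factory-based Ajax pattern mentioned above concrete, here is a minimal sketch of the approach. The module name, URL, and function names are illustrative, not necessarily the ones used in Customer Manager:

        // Sketch of an AngularJS (1.x) factory that wraps a RESTful endpoint.
        angular.module('customersApp')
            .factory('customersService', ['$http', function ($http) {
                return {
                    // Returns a promise that resolves to the array of customers
                    getCustomers: function () {
                        return $http.get('/api/customers')
                                    .then(function (result) { return result.data; });
                    }
                };
            }]);

        // Usage in a controller: inject the factory and load data into scope
        angular.module('customersApp')
            .controller('CustomersController', ['$scope', 'customersService',
                function ($scope, customersService) {
                    customersService.getCustomers().then(function (customers) {
                        $scope.customers = customers;
                    });
                }]);

    Keeping the $http plumbing inside a factory keeps controllers thin and makes the data layer easy to swap, which is exactly what allows the app to switch between the $http-based factory and the Breeze-based one.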

    Read the article

  • The Internet of Things Is Really the Internet of People

    - by HCM-Oracle
    By Mark Hurd - Originally Posted on LinkedIn As I speak with CEOs around the world, our conversations invariably come down to this central question: Can we change our corporate cultures and the ways we train and reward our people as rapidly as new technology is changing the work we do, the products we make and how we engage with customers? It’s a critical consideration given today’s pace of disruption, which already is straining traditional management models and HR strategies. Winning companies will bring innovation and vision to their employees and partners by attracting people who will thrive in this emerging world of relentless data, predictive analytics and unlimited what-if scenarios. So, where are we going to find employees who are as familiar with complex data as I am with orderly financial statements and business plans? I’m not just talking about high-end data scientists who most certainly will sit at or near the top of the new decision-making pyramid. Global organizations will need creative and motivated people who will devote their time to manipulating, reviewing, analyzing, sorting and reshaping data to drive business and delight customers. This might seem evident, but my conversations with business people across the globe indicate that only a small number of companies get it. In the past few years, executives have been busy keeping pace with seismic upheavals, including the rise of social customer engagement, the rapid acceleration of product-development cycles and the relentless move to mobile-first. But all of that, I think, is the start of an uphill climb to the top of a roller-coaster. Today, about 10 billion devices across the globe are connected to the Internet. In a couple of years, that number will probably double, and not because we will have bought 10 billion more computers, smart phones and tablets. This unprecedented explosion of Big Data is being triggered by the Internet of Things, which is another way of saying that the numerous intelligent devices touching our everyday lives are all becoming interconnected. Home appliances, food, industrial equipment, pets, pharmaceutical products, pallets, cars, luggage, packaged goods, athletic equipment, even clothing will be streaming data. Some data will provide important information about how to run our businesses and lead healthier lives. Much of it will be extraneous. How does a CEO cope with this unimaginable volume and velocity of data, much less harness it to excite and delight customers? Here are three things CEOs must do to tackle this challenge: 1) Take care of your employees, take care of your customers. Larry Ellison recently noted that the two most important priorities for any CEO today revolve around people: Taking care of your employees and taking care of your customers. Companies in today’s hypercompetitive business environment simply won’t be able to survive unless they’ve got world-class people at all levels of the organization. CEOs must demonstrate a commitment to employees by becoming champions for HR systems that empower every employee to fully understand his or her job, how it ties into the corporate framework, what’s expected of them, what training is available, and how they can use an embedded social network to communicate, collaborate and excel. Over the next several years, many of the world’s top industrialized economies will see a turnover in the workforce on an unprecedented scale. 
    Across the United States, Europe, China and Japan, the “baby boomer” generation will be retiring and, by 2020, we’ll see turnovers in those regions ranging from 10 to 30 percent. How will companies replace all that brainpower, experience and know-how? How will CEOs perpetuate the best elements of their corporate cultures in the midst of this profound turnover? The challenge will be daunting, but it can be met with world-class HR technology. As companies begin replacing up to 30 percent of their workforce, they will need thousands of new types of data-native workers to exploit the Internet of Things in the service of the Internet of People. The shift in corporate mindset here can’t be overstated. The CEO has to be at the forefront of this new way of recruiting, training, motivating, aligning and developing truly 21st-century talent.

    2) Start thinking today about the Internet of People. Some forward-looking companies have begun pursuing the “democratization of data.” This allows more people within a company greater access to data that can help them make better decisions, move more quickly and keep pace with the changing interests and demands of their customers. As a result, we’ve seen organizations flatten out, growing numbers of well-informed people authorized to make decisions without corporate approval and a movement of engagement away from headquarters to the point of contact with the customer. These are profound changes, and I’m a huge proponent. As I think about what the next few years will bring as companies become deluged with unprecedented streams of data, I’m convinced that we’ll need dramatically different organizational structures, decision-making models, risk-management profiles and reward systems. For example, if a car company’s marketing department mines incoming data to determine that customers are shifting rapidly toward neon-green models, how many layers of approval, review, analysis and sign-off will be needed before the factory starts cranking out more neon-green cars? Will we continue to have organizations where too many people are empowered to say “No” and too few are allowed to say “Yes”? If so, how will those companies be able to compete in a world in which customers have more choices, instant access to more information and less loyalty than ever before? That’s why I think CEOs need to begin thinking about this problem right now, not in a year or two when competitors are already reshaping their organizations to match the marketplace’s new realities.

    3) Partner with universities to help create a new type of highly skilled worker. Several years ago, universities introduced new undergraduate as well as graduate-level programs in analytics and informatics as the business need for deeper insights into the booming world of data began to explode. Today, as the growth rate of data continues to soar, we know that the Internet of Things will only intensify that growth. Moreover, as Big Data fuels insights that can be shaped into products and services that generate revenue, the demand for data scientists and data specialists will go on unabated. Beyond that top-level expertise, companies are going to need data-native thinkers at all levels of the organization. Where will this new type of worker come from? I think it’s incumbent on the business community to collaborate with universities to develop new curricula designed to turn out graduates who can capitalize on the data-driven world that the Internet of Things is surely going to create.
These new workers will create opportunities to help their companies in fields as diverse as product design, customer service, marketing, manufacturing and distribution. They will become innovative leaders in fashioning an entirely new type of workforce and organizational structure optimized to fully exploit the Internet of Things so that it becomes a high-value enabler of the Internet of People. Mark Hurd is President of Oracle Corporation and a member of the company's Board of Directors. He joined Oracle in 2010, bringing more than 30 years of technology industry leadership, computer hardware expertise, and executive management experience to his role with the company. As President, Mr. Hurd oversees the corporate direction and strategy for Oracle's global field operations, including marketing, sales, consulting, alliances and channels, and support. He focuses on strategy, leadership, innovation, and customers.

    Read the article

  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class that has a property to return the normal of that PolyLine instance. I have a ConvexPolySprite class that has a List<PolyLine> LineSegments. I have a CircleSprite class that has a Center property and a Radius property. I am using a static class for the collision detection method. I am testing it on a single line segment from Vector2(200, 0) to Vector2(300, 200). The problem is it detects the collision anywhere along the path of the line out into space. I cannot figure out why. Thanks in advance;

        public class PolyLine
        {
            //-------------------------------------------------------------
            // Class Properties

            /// <summary>
            /// Property for the upper left-hand corner of the owner of this instance
            /// </summary>
            public Vector2 ParentPosition { get; set; }

            /// <summary>
            /// Relative start point of the line segment
            /// </summary>
            public Vector2 RelativeStartPoint { get; set; }

            /// <summary>
            /// Relative end point of the line segment
            /// </summary>
            public Vector2 RelativeEndPoint { get; set; }

            /// <summary>
            /// Property that gets the absolute position of the starting point of the line segment
            /// </summary>
            public Vector2 AbsoluteStartPoint
            {
                get { return ParentPosition + RelativeStartPoint; }
            }//end of AbsoluteStartPoint

            /// <summary>
            /// Gets the absolute position of the end point of the line segment
            /// </summary>
            public Vector2 AbsoluteEndPoint
            {
                get { return ParentPosition + RelativeEndPoint; }
            }//end of AbsoluteEndPoint

            public Vector2 NormalizedLeftNormal
            {
                get
                {
                    Vector2 P = AbsoluteEndPoint - AbsoluteStartPoint;
                    P.Normalize();
                    float x = P.X;
                    float y = P.Y;
                    return new Vector2(-y, x);
                }
            }//end of NormalizedLeftNormal

            //-------------------------------------------------------------
            // Class Constructors

            /// <summary>
            /// Sole ctor
            /// </summary>
            /// <param name="parentPosition"></param>
            /// <param name="relStart"></param>
            /// <param name="relEnd"></param>
            public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd)
            {
                ParentPosition = parentPosition;
                RelativeEndPoint = relEnd;
                RelativeStartPoint = relStart;
            }//end of ctor
        }//end of PolyLine class

        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            var distance = Vector2.Dot(circle.Position - poly.LineSegments[0].AbsoluteEndPoint,
                                       poly.LineSegments[0].NormalizedLeftNormal) + circle.Radius;
            if (distance <= 0)
            {
                return false;
            }
            else
            {
                return true;
            }
        }//end of collided
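    For what it’s worth, the check above measures the circle’s signed distance to the infinite line through the segment, so any point along the line’s extension registers a hit. A sketch of a segment-aware test (assuming MonoGame’s Vector2 and MathHelper, and a non-degenerate segment; not necessarily the final code you want):

        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            var seg = poly.LineSegments[0];
            Vector2 a  = seg.AbsoluteStartPoint;
            Vector2 ab = seg.AbsoluteEndPoint - a;

            // Project the circle center onto the segment and clamp to its endpoints
            float t = Vector2.Dot(circle.Position - a, ab) / ab.LengthSquared();
            t = MathHelper.Clamp(t, 0f, 1f);
            Vector2 closest = a + t * ab;

            // Collide only if the closest point on the *segment* lies inside the circle
            return Vector2.DistanceSquared(circle.Position, closest)
                   <= circle.Radius * circle.Radius;
        }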

    Read the article

  • JQuery Validation [migrated]

    - by user41354
    I'm trying to get my form to validate... so basically it's working, but a little bit too well. I have two text boxes: one is a start date, the other an end date, in the format of mm/dd/yyyy. If the start date is greater than the end date, there is an error; if the end date is less than the start date, there is an error; if the start date is less than today's date, there is an error. The only thing is, when I correct the error, the error warning is still there. Here is my code:

        dates.change(function () {
            var testDate = $(this).val();
            var otherDate = dates.not(this).val();
            var now = new Date();
            now.setHours(0, 0, 0, 0);

            // Pass Dates
            if (testDate != '' && new Date(testDate) < now) {
                addError($(this));
                $('.flightDateError').text('* Dates cannot be earlier than today.');
                isValid = false;
                return;
            }

            // Required Text
            if ($(this).hasClass("FromCal") && testDate == '') {
                addError($(this));
                $('.flightDateError').text('* Required');
                isValid = false;
                return;
            }

            // Validate Date
            if (!isValidDate(testDate)) {
                // $(this).addClass('validation_error_input');
                addError($(this));
                $('.flightDateError').text('* Invalid Date');
                isValid = false;
                return;
            } else {
                // $(this).removeClass('validation_error_input');
                removeError($(this));
                if (!dates.not(this).hasClass('validation_error_input'))
                    $('.flightDateError').text(' ');
            }

            // Validate Date Ranges
            if ($(this).val() != '' && dates.not(this).val != '') {
                if ($(this).hasClass("FromCal")) {
                    if (new Date(testDate) > new Date(otherDate)) {
                        addError($(this));
                        $('.flightDateError').text('* Start date must be earlier than end date.');
                        isValid = false;
                        return;
                    }
                } else {
                    if (new Date(testDate) < new Date(otherDate)) {
                        addError($(this));
                        $('.flightDateError').text('* End date must be later than start date.');
                        return;
                    }
                }
            }
        });

    The main issue is this part, I believe:

        // Validate Date Ranges
        if ($(this).val() != '' && dates.not(this).val != '') {
            if ($(this).hasClass("FromCal")) {
                if (new Date(testDate) > new Date(otherDate)) {
                    addError($(this));
                    $('.flightDateError').text('* Start date must be earlier than end date.');
                    isValid = false;
                    return;
                }
            } else {
                if (new Date(testDate) < new Date(otherDate)) {
                    addError($(this));
                    $('.flightDateError').text('* End date must be later than start date.');
                    return;
                }
            }
        }

    testDate is the start date, otherDate is the end date. Thanks in advance, J
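    For what it's worth, two things stand out: dates.not(this).val is missing its parentheses (so it compares the function object itself against ''), and the range branch never clears the error once the dates become valid again. A hedged sketch of that branch with both addressed, reusing the asker's addError/removeError helpers:

        // Validate Date Ranges -- note val() with parentheses, and an
        // explicit "all clear" path that removes the stale error message.
        if ($(this).val() != '' && dates.not(this).val() != '') {
            var start = new Date($(this).hasClass("FromCal") ? testDate : otherDate);
            var end   = new Date($(this).hasClass("FromCal") ? otherDate : testDate);
            if (start > end) {
                addError($(this));
                $('.flightDateError').text('* Start date must be earlier than end date.');
                isValid = false;
                return;
            }
            // Dates are in order: clear any previous range error on both boxes.
            removeError($(this));
            removeError(dates.not(this));
            $('.flightDateError').text('');
        }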

    Read the article

  • call python with system() in R to run a python script emulating the python console

    - by Yihui
    I want to pass a chunk of Python code to Python in R with something like system('python ...'), and I'm wondering if there is an easy way to emulate the python console in this case. For example, suppose the code is "print 'hello world'", how can I get the output like this in R?

        >>> print 'hello world'
        hello world

    This only shows the output:

        > system("python -c 'print \"hello world\"'")
        hello world

    Thanks! BTW, I asked in r-help but have not got a response yet (if I do, I'll post the answer here).
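    One low-tech approach, for what it's worth, is to print a fake prompt yourself before shelling out. A sketch in R (emulate_python is a made-up helper name, not an existing function):

        # Echo the code with a ">>>" prompt, then run it via python -c.
        # shQuote() handles the shell quoting of the code chunk.
        emulate_python <- function(code) {
          cat(">>>", code, "\n")
          system(paste("python -c", shQuote(code)))
        }

        emulate_python("print 'hello world'")
        # >>> print 'hello world'
        # hello world

    Multi-line chunks would need splitting on newlines (one prompt per line), but the idea is the same.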

    Read the article

  • Bundler isn't loading gems

    - by Garrett
    I have been having a problem with using Bundler and being able to access my gems without having to require them somewhere, as config.gem used to do that for me (as far as I know). In my Rails 3 app, I defined my Gemfile like so: clear_sources source "http://gemcutter.org" source "http://gems.github.com" bundle_path "vendor/bundler_gems" ## Bundle edge rails: git "git://github.com/rails/arel.git" git "git://github.com/rails/rack.git" gem "rails", :git => "git://github.com/rails/rails.git" ## Bundle gem "mongo_mapper", :git => "git://github.com/jnunemaker/mongomapper.git" gem "bluecloth", ">= 2.0.0" Then I run gem bundle, it bundles it all up like expected. Inside the environment.rb file that is included within boot.rb it looks like this: # DO NOT MODIFY THIS FILE module Bundler file = File.expand_path(__FILE__) dir = File.dirname(file) ENV["PATH"] = "#{dir}/../../../../bin:#{ENV["PATH"]}" ENV["RUBYOPT"] = "-r#{file} #{ENV["RUBYOPT"]}" $LOAD_PATH.unshift File.expand_path("#{dir}/gems/builder-2.1.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/builder-2.1.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-hyphen-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-hyphen-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/i18n-0.3.3/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/i18n-0.3.3/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/arel/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/arel/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activemodel/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activemodel/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/jnunemaker-validatable-1.8.1/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/jnunemaker-validatable-1.8.1/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/abstract-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/abstract-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/erubis-2.6.5/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/erubis-2.6.5/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mime-types-1.16/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mime-types-1.16/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mail-2.1.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mail-2.1.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rake-0.8.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rake-0.8.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/railties/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/railties/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/memcache-client-1.7.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/memcache-client-1.7.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rack/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rack/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-test-0.5.3/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-test-0.5.3/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-mount-0.4.5/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-mount-0.4.5/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionpack/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionpack/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/ext") 
$LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activerecord/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activerecord/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-format-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-format-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionmailer/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionmailer/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/tzinfo-0.3.16/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/tzinfo-0.3.16/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activesupport/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activesupport/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activeresource/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activeresource/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mongo-0.18.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mongo-0.18.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/mongomapper/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/mongomapper/lib") @gemfile = "#{dir}/../../../../Gemfile" require "rubygems" unless respond_to?(:gem) # 1.9 already has RubyGems loaded @bundled_specs = {} @bundled_specs["builder"] = eval(File.read("#{dir}/specifications/builder-2.1.2.gemspec")) @bundled_specs["builder"].loaded_from = "#{dir}/specifications/builder-2.1.2.gemspec" @bundled_specs["text-hyphen"] = eval(File.read("#{dir}/specifications/text-hyphen-1.0.0.gemspec")) @bundled_specs["text-hyphen"].loaded_from = "#{dir}/specifications/text-hyphen-1.0.0.gemspec" @bundled_specs["i18n"] = eval(File.read("#{dir}/specifications/i18n-0.3.3.gemspec")) @bundled_specs["i18n"].loaded_from = "#{dir}/specifications/i18n-0.3.3.gemspec" @bundled_specs["arel"] = eval(File.read("#{dir}/specifications/arel-0.2.pre.gemspec")) @bundled_specs["arel"].loaded_from = "#{dir}/specifications/arel-0.2.pre.gemspec" @bundled_specs["activemodel"] = eval(File.read("#{dir}/specifications/activemodel-3.0.pre.gemspec")) @bundled_specs["activemodel"].loaded_from = "#{dir}/specifications/activemodel-3.0.pre.gemspec" @bundled_specs["jnunemaker-validatable"] = eval(File.read("#{dir}/specifications/jnunemaker-validatable-1.8.1.gemspec")) @bundled_specs["jnunemaker-validatable"].loaded_from = "#{dir}/specifications/jnunemaker-validatable-1.8.1.gemspec" @bundled_specs["abstract"] = eval(File.read("#{dir}/specifications/abstract-1.0.0.gemspec")) @bundled_specs["abstract"].loaded_from = "#{dir}/specifications/abstract-1.0.0.gemspec" @bundled_specs["erubis"] = eval(File.read("#{dir}/specifications/erubis-2.6.5.gemspec")) @bundled_specs["erubis"].loaded_from = "#{dir}/specifications/erubis-2.6.5.gemspec" @bundled_specs["mime-types"] = eval(File.read("#{dir}/specifications/mime-types-1.16.gemspec")) @bundled_specs["mime-types"].loaded_from = "#{dir}/specifications/mime-types-1.16.gemspec" @bundled_specs["mail"] = eval(File.read("#{dir}/specifications/mail-2.1.2.gemspec")) @bundled_specs["mail"].loaded_from = "#{dir}/specifications/mail-2.1.2.gemspec" @bundled_specs["rake"] = eval(File.read("#{dir}/specifications/rake-0.8.7.gemspec")) @bundled_specs["rake"].loaded_from = "#{dir}/specifications/rake-0.8.7.gemspec" @bundled_specs["railties"] = eval(File.read("#{dir}/specifications/railties-3.0.pre.gemspec")) @bundled_specs["railties"].loaded_from 
= "#{dir}/specifications/railties-3.0.pre.gemspec" @bundled_specs["memcache-client"] = eval(File.read("#{dir}/specifications/memcache-client-1.7.7.gemspec")) @bundled_specs["memcache-client"].loaded_from = "#{dir}/specifications/memcache-client-1.7.7.gemspec" @bundled_specs["rack"] = eval(File.read("#{dir}/specifications/rack-1.1.0.gemspec")) @bundled_specs["rack"].loaded_from = "#{dir}/specifications/rack-1.1.0.gemspec" @bundled_specs["rack-test"] = eval(File.read("#{dir}/specifications/rack-test-0.5.3.gemspec")) @bundled_specs["rack-test"].loaded_from = "#{dir}/specifications/rack-test-0.5.3.gemspec" @bundled_specs["rack-mount"] = eval(File.read("#{dir}/specifications/rack-mount-0.4.5.gemspec")) @bundled_specs["rack-mount"].loaded_from = "#{dir}/specifications/rack-mount-0.4.5.gemspec" @bundled_specs["actionpack"] = eval(File.read("#{dir}/specifications/actionpack-3.0.pre.gemspec")) @bundled_specs["actionpack"].loaded_from = "#{dir}/specifications/actionpack-3.0.pre.gemspec" @bundled_specs["bluecloth"] = eval(File.read("#{dir}/specifications/bluecloth-2.0.7.gemspec")) @bundled_specs["bluecloth"].loaded_from = "#{dir}/specifications/bluecloth-2.0.7.gemspec" @bundled_specs["activerecord"] = eval(File.read("#{dir}/specifications/activerecord-3.0.pre.gemspec")) @bundled_specs["activerecord"].loaded_from = "#{dir}/specifications/activerecord-3.0.pre.gemspec" @bundled_specs["text-format"] = eval(File.read("#{dir}/specifications/text-format-1.0.0.gemspec")) @bundled_specs["text-format"].loaded_from = "#{dir}/specifications/text-format-1.0.0.gemspec" @bundled_specs["actionmailer"] = eval(File.read("#{dir}/specifications/actionmailer-3.0.pre.gemspec")) @bundled_specs["actionmailer"].loaded_from = "#{dir}/specifications/actionmailer-3.0.pre.gemspec" @bundled_specs["tzinfo"] = eval(File.read("#{dir}/specifications/tzinfo-0.3.16.gemspec")) @bundled_specs["tzinfo"].loaded_from = "#{dir}/specifications/tzinfo-0.3.16.gemspec" @bundled_specs["activesupport"] = eval(File.read("#{dir}/specifications/activesupport-3.0.pre.gemspec")) @bundled_specs["activesupport"].loaded_from = "#{dir}/specifications/activesupport-3.0.pre.gemspec" @bundled_specs["activeresource"] = eval(File.read("#{dir}/specifications/activeresource-3.0.pre.gemspec")) @bundled_specs["activeresource"].loaded_from = "#{dir}/specifications/activeresource-3.0.pre.gemspec" @bundled_specs["rails"] = eval(File.read("#{dir}/specifications/rails-3.0.pre.gemspec")) @bundled_specs["rails"].loaded_from = "#{dir}/specifications/rails-3.0.pre.gemspec" @bundled_specs["mongo"] = eval(File.read("#{dir}/specifications/mongo-0.18.2.gemspec")) @bundled_specs["mongo"].loaded_from = "#{dir}/specifications/mongo-0.18.2.gemspec" @bundled_specs["mongo_mapper"] = eval(File.read("#{dir}/specifications/mongo_mapper-0.6.10.gemspec")) @bundled_specs["mongo_mapper"].loaded_from = "#{dir}/specifications/mongo_mapper-0.6.10.gemspec" def self.add_specs_to_loaded_specs Gem.loaded_specs.merge! @bundled_specs end def self.add_specs_to_index @bundled_specs.each do |name, spec| Gem.source_index.add_spec spec end end add_specs_to_loaded_specs add_specs_to_index def self.require_env(env = nil) context = Class.new do def initialize(env) @env = env && env.to_s ; end def method_missing(*) ; yield if block_given? ; end def only(*env) old, @only = @only, _combine_only(env.flatten) yield @only = old end def except(*env) old, @except = @except, _combine_except(env.flatten) yield @except = old end def gem(name, *args) opt = args.last.is_a?(Hash) ? 
args.pop : {} only = _combine_only(opt[:only] || opt["only"]) except = _combine_except(opt[:except] || opt["except"]) files = opt[:require_as] || opt["require_as"] || name files = [files] unless files.respond_to?(:each) return unless !only || only.any? {|e| e == @env } return if except && except.any? {|e| e == @env } if files = opt[:require_as] || opt["require_as"] files = Array(files) files.each { |f| require f } else begin require name rescue LoadError # Do nothing end end yield if block_given? true end private def _combine_only(only) return @only unless only only = [only].flatten.compact.uniq.map { |o| o.to_s } only &= @only if @only only end def _combine_except(except) return @except unless except except = [except].flatten.compact.uniq.map { |o| o.to_s } except |= @except if @except except end end context.new(env && env.to_s).instance_eval(File.read(@gemfile), @gemfile, 1) end end module Gem @loaded_stacks = Hash.new { |h,k| h[k] = [] } def source_index.refresh! super Bundler.add_specs_to_index end end But when I try to access any of my gems, e.g. MongoMapper, Paperclip, Haml, etc. I get: NameError: uninitialized constant MongoMapper The same goes for any other gem. Does Bundler not include gems like the old Rails 2.0 did? Or is something messed up with my system? Any help would be appreciated, thank you!
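    For what it's worth, the generated environment.rb shown above only puts each bundled gem on the load path and defines a Bundler.require_env helper; nothing in it actually requires the gems. With this vintage of Bundler, the app typically had to load that file and then call the helper itself. A sketch (the preinitializer location is illustrative; adjust the path to wherever 'gem bundle' wrote environment.rb):

        # config/preinitializer.rb (illustrative location) -- load the generated
        # environment file, then actually require each gem from the Gemfile.
        require File.expand_path('../vendor/bundler_gems/environment', File.dirname(__FILE__))
        Bundler.require_env(RAILS_ENV)   # requires mongo_mapper, bluecloth, etc.

    Later Bundler releases (0.9 and up) replaced this pattern with Bundler.setup plus Bundler.require.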

    Read the article

  • Feedback on "market manipulation", a peripheral game mechanic for a satirical MMO

    - by BerndBrot
    This question asks for feedback on a specific game mechanic. Since there is no single right piece of feedback on a game mechanic, I tried to provide enough context and guidelines to still make it possible for users to rate answers and to accept an answer as the best answer (following these criteria from Writer.SE's meta website). Please comment if you have any suggestions on how I could improve the question in that regard. So, let's begin with the game itself and some of its elements which are relevant for this question.

    Context

    I'm working on a satirical, text-based multiplayer adventure and role-playing game set in modern-day London. The game revolves around the concept of sin and features a myriad of (venomous) allusions to all the things that go wrong in this world. Players can choose between character classes like bullshit artist (consultant), bankster, lawyer, mobster, celebrity, politician, etc. In order to complete the game, the player has to live so sinfully with regard to any of the seven deadly sins that a demon is willing to offer them a contract of sponsorship. On their quest to live a sinful life, characters explore more and more locations of modern-day London (on a GoogleMap), fight "monsters" like insurance sales agents or Jehovah's Witnesses, and complete quests, like building a PowerPoint presentation out of marketing buzzwords or keeping up a number of substance abuse effects in order to progress on the gluttony path. Battles are turn-based, with both combatants having a deck of cards with which they try to make their enemy give in to temptations of all sorts. Tempted enemies sometimes become contacts (an item drop mechanic), which can be exploited for various benefits, depending on their area of influence (finance, underworld, bureaucracy, etc.), level of influence, and kind of sway that the player has over them (bribed, seduced, threatened, etc.). Once a contact has been exploited, the player loses that contact. Most actions require turns. Turns are limited, but refill each day.

    Criteria

    A number of peripheral game mechanics are supposed to:

    - represent real world abuses and mischief in a humorous way
    - integrate real world data and events to strengthen the feeling of relevance of the game's humor with regard to real world problems
    - add fun ways of interacting with other players
    - add ways for players to express themselves through game-play

    Market manipulation is one such peripheral game mechanic and should fulfill all of these goals.

    Market manipulation

    This is my initial design of the mechanic:

    - Players can enter the London Stock Exchange (LSE) (without paying a turn).
    - The LSE displays the stock prices of a number of companies in industries like weapons or tobacco, as well as some derivatives based on wheat and corn. The stock prices are calculated based on the actual stock prices of these companies and derivatives (in real time), any market manipulations that were conducted by the players, and any market corrections of the system.
    - Players can buy and sell shares with cash, a resource in the game, at current in-game market value (without paying a turn).
    - Players can manipulate the market, i.e. let the price of a share either rise or fall, by some amount, over a certain period of time.
    - Manipulating the market requires 1 turn and a contact in the financial sector (see above). The higher the level of influence of the contact, the stronger the effect of the manipulation on the stock price, and/or the shorter it takes for the manipulation to manifest itself.
Market manipulation also adds a crime to the player's record. (There are a multitude of ways to take care of that, but it is still another "cost" of market manipulations.) The system continuously corrects market manipulations by letting the in-game prices converge towards their real world counterparts at a rate of 2% of the difference between the two per hour. Because of this market correction mechanism, pushing up prices (and screwing down prices) becomes increasingly difficult the higher (lower) the price already is. Whenever food prices reach a certain level, in-game stories are posted about hunger catastrophes happening somewhere far, far away (maybe with links to real world news stories). Whenever a player sells a certain number of shares with a sufficiently high margin, they are mentioned in that day's in-game financial news. Since the number of stock options is very limited, players will inevitably collide in their efforts to manipulate the market in their favor. Hopefully, it will also be a fun side-arena for guilds and covenants to fight each other. Question(s) What do you think of this mechanism given the criteria for peripheral game mechanics that I specified for my game? Do you have any ideas how the mechanic could be improved with regard to these criteria (or otherwise)? Could it be improved to allow for more expressive game-play, or involve an allusion to some other real world madness (like short selling, leveraging, or some other banking magic)? Are there any game-theoretic problems with this mechanic, like maybe certain dominant individual strategies that, collectively, lead to every player profiting and thus eliminating the idea of market manipulation PVP? Also, if you like (or dislike) this question, feel free to participate in the discussion on GDSE meta: "Should we be more lax with regard to SE's question/answer format to make game design questions possible?"
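    Incidentally, the 2%-per-hour correction described above is plain exponential convergence toward the real-world price; a quick sketch (names are illustrative, not from any actual implementation):

        # In-game price drifts toward the real-world price by 2% of the gap each hour.
        def corrected_price(in_game, real_world, hours, rate=0.02):
            for _ in range(hours):
                in_game += rate * (real_world - in_game)
            return in_game

        # A price manipulated up to 150 against a real-world price of 100
        # decays back to roughly 130.8 after 24 hours.
        print(corrected_price(150.0, 100.0, 24))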

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace. In this review, I leverage SQL Server 2012 Data Quality Services and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability.

    Crucial Questions

    For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base from the acknowledged USPS-certified address accuracy experts at Melissa Data.

    Reference Data Services

    Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend the data quality that DQS has natively by quite a bit. For my current address correction review, I first needed to sign up with a reference data provider on the Azure Data Market site. For this example, I used Melissa Data's Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area.

    Step-by-Step Guide

    That was all it took to hook the subscribed provider (Melissa Data) directly into my DQS Client. The next step was for me to attach and map in my reference data from the newly acquired reference data provider to a domain in my knowledge base. On the DQS Client home screen, I selected "New Knowledge Base" under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a name and description for my new knowledge base, then proceeded to the Domain Management screen.
    Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data:

    - Name
    - Address
    - Suite
    - City
    - State
    - Zip

    I added a Suite column in my domain because Melissa Data has the ability to return missing suites based on last name or company. And that's a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third party reference data provider (Melissa Data's Address Check) and mapped each domain that I had to the provider's schema. Now that my composite domain has been married to the reference data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes, with progress indicators showing that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional.

    Final Note

    As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it's easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • GrapeCity & ComponentOne Merger

    - by Jim Duffy
    Big news in the software component industry today… my good friends over at GrapeCity have unofficially announced (official announcement is tomorrow June 11, 2012) that they have acquired ComponentOne and will be merging the two companies. Yes, the people who bring you Spread.Net and ActiveReports have merged with one of the market’s leading developer component vendors and will continue to do business under the ComponentOne brand. I think this move will propel the company to new heights in the component market and I look forward to working with them as they continue being an industry leader. Congratulations guys! UPDATE: Additional information available here: http://our.componentone.com/2012/06/06/thenewcomponentone/. The new company will be called ComponentOne, a Division of GrapeCity.

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is a very well-explored subject; however, there are so many interesting elements that whenever we read about it, we learn something new. One such concept is the parent-child relationship in SSIS ETL architectures. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation on understanding SSIS parameters in parent-child ETL architectures.

    In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern. Simply put, this is a pattern in which packages are executed by other packages. An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages. For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic. When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package. In older versions of SSIS, this process was possible but not necessarily simple. When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages. Package configurations, while effective, were not the easiest tool to work with. Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose. In the example I will use for this demonstration, I'll create two packages: one intended for use as a child package, and the other configured to execute said child package. In the parent package I'm going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop. The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package.

    Configuring the Child and Parent Packages

    When you create a new package, you'll see the Parameters tab at the package level. Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters. Note that I've set the name, data type, and default value for each of these. Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution. In this example, I have one parameter that is required, and the other is not. Let's shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package. Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package. Note that there is no value mapped to the child package parameter named pOutputFolder.
    Since this parameter has the Required property set to False, we don't have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child package. The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it. I'm using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here). For each iteration of the loop, a different client ID value will be passed into the child package parameter. The final step is to configure the child package to actually do something meaningful with the parameter values passed into it. In this case, I've modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client's data. Additionally, I'll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client's invoices for each iteration of the loop. For the flat file connection, I'm setting the Connection String property using an expression that engages both of the parameters for this package, as shown above.

    Parting Thoughts

    There are many uses for package parameters beyond a simple parent-child design pattern. For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters. Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL (see the sketch at the end of this post). You can also have project parameters as well as package parameters. Project parameters work in much the same way as package parameters, but the parameters apply to all packages in a project, not just a single package.

    Conclusion

    Of the numerous advantages of using the catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list. Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
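    As a quick illustration of the T-SQL option mentioned under Parting Thoughts, here is a sketch of setting a package parameter for a catalog execution. The folder, project, and package names are illustrative; the pClientID parameter matches the example above:

        -- Create an execution for the child package in the SSIS catalog (2012+)
        DECLARE @execution_id BIGINT;
        EXEC SSISDB.catalog.create_execution
             @folder_name  = N'ETL',             -- illustrative folder
             @project_name = N'CustomerExports', -- illustrative project
             @package_name = N'ChildPackage.dtsx',
             @execution_id = @execution_id OUTPUT;

        -- Supply a value for the package parameter (object_type 30 = package parameter)
        EXEC SSISDB.catalog.set_execution_parameter_value
             @execution_id,
             @object_type     = 30,
             @parameter_name  = N'pClientID',
             @parameter_value = 1234;

        EXEC SSISDB.catalog.start_execution @execution_id;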

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday's blog post we answered what Big Data is. Today we will understand why and how the evolution of Big Data happened. Though the answer is very simple, I would like to tell it in the form of a history lesson.

    Data in Flat Files

    In earlier days data was stored in flat files, and there was no structure in the flat file. Retrieving any data from a flat file was a project by itself. There was no way to retrieve the data efficiently, and data integrity was just a term discussed without any modeling or structure around it. A database residing in flat files had more issues than we would like to discuss in today's world; it was more like a nightmare when any data processing was involved in the application. Though applications developed at that time were not that advanced, the need for data was always there, and there was always a need for proper data management.

    Edgar F Codd and 12 Rules

    Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of the database began to see discipline in the rules. The relational database was a promised land for all the unstructured database users. It brought relationships between data as well as improved performance of data retrieval. The database world immediately saw a major transformation, and vendors and database users alike started to adopt the relational database model.

    Relational Database Management Systems

    Since Edgar F Codd proposed 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships between data. This was indeed a learning curve for many developers who had never worked before with the modeling of a database. However, as time passed, pretty much everybody accepted the relational model and started to evolve products which perform their best within the boundaries of the RDBMS concepts. This was the best era for databases, and it gave the world deep experts as well as some of the best products. The Entity Relationship model also evolved at the same time: in software engineering, an Entity–relationship model (ER model) is a data model for describing a database in an abstract way.

    Enormous Data Growth

    Well, everything was going fine with the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal. There was a race at times to make the developer's life much easier with RDBMS management tools. Due to their extreme popularity and ease of use, pretty much all data was stored in RDBMS systems. New-age applications were built and social media took the world by storm. Every organization was feeling pressure to provide the best experience for their users based on the data they had with them. While this was all going on, at the same time data was growing in pretty much every organization and application.

    Data Warehousing

    The enormous data growth now presented a big challenge for organizations that wanted to build intelligent systems based on the data and provide near-real-time, superior user experience to their customers. Various organizations immediately started building data warehousing solutions where the data was stored and processed.
    Business intelligence became an everyday need. Data was received from the transaction system, and overnight it was processed to build intelligent reports. Though this is a great solution, it has its own set of challenges: the relational database model and data warehousing concepts are all built with traditional relational database modeling in mind, and they still face many challenges when unstructured data is present.

    Interesting Challenge

    Every organization had expertise to manage structured data, but the world had already changed to unstructured data. There was intelligence in videos, photos, SMS, text, social media messages and various other data sources. All of these now needed to be brought to a single platform to build a uniform system that does what businesses need. The way we do business has also changed. There was a time when users only got the features that technology supported; now users ask for a feature and technology is built to support it. The need for real-time intelligence from fast-paced data flow is now becoming a necessity. Large amounts (Volume) of diverse (Variety) high-speed data (Velocity) are the defining properties of this data. The traditional database system has limits in resolving the challenges this new kind of data presents. Hence the need for Big Data science. We need innovation in how we handle and manage data. We need creative ways to capture data and present it to users. Big Data is a reality!

    Tomorrow

    In tomorrow's blog post we will discuss the basics of Big Data architecture. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
    Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management, since there's a lot going on. Here's the line-up for today:

    Wednesday, October 3, 2012

    CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus
    10:15 a.m. – 11:15 a.m., Moscone West 3008
    Most customers have a broad variety of applications (internal, external, web, client server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how enterprise single sign-on can extend single sign-on to virtually any application without costly application modification, while laying a foundation that will enable integration with a broader identity management platform.

    CON9494: Sun2Oracle: Identity Management Platform Transformation
    11:45 a.m. – 12:45 p.m., Moscone West 3008
    Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.

    CON9631: Entitlement-centric Access to SOA and Cloud Services
    11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7
    How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists, as long as consent has been given or there is an emergency? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter.

    CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012
    11:45 a.m. – 12:45 p.m., Moscone West 3003
    In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

    CON9493: Identity Management and the Cloud
    1:15 p.m. – 2:15 p.m., Moscone West 3008
    Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service.

    CON9624: Real-Time External Authorization for Middleware, Applications, and Databases
    3:30 p.m. – 4:30 p.m., Moscone West 3008
    As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.

    CON9625: Taking Control of WebCenter Security
    5:00 p.m. – 6:00 p.m., Moscone West 3008
    Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users.
Leveraging LADWP’s use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions. EVENTS: Identity Management Customer Advisory Board 2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates. Identity Management Meet & Greet Networking Event 3:30 p.m. – 4:30 p.m., Meeting Session 4:30 p.m. – 5:30 p.m., Cocktail Reception Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and product management team. Do take this opportunity to network with your peers and connect with the Identity Management customers. For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us on @oracleidm on twitter and FaceBook. Use #oow and #idm to join in the conversation.

    Read the article

  • Streaming desktop with avconv - severe sound issues

    - by Tommy Brunn
I'm trying to do some live streaming in Ubuntu 12.10, but I'm having some problems with audio. More specifically, the quality is complete garbage and it's at least 10 seconds out of sync with the video. I'm using an excellent guide found here to set up my loopback devices so that I can combine the desktop audio with the microphone input. It seems to work, as I'm able to stream both audio and video to Twitch.tv. But, as I said, the audio quality is terrible. The microphone audio is very, very low, but if I increase it, I get a horrible garbled sound that is absolutely unbearable. Nothing like that is present during VoIP calls or when recording sound alone with the sound recorder, so it's not an issue with the microphone itself. The entire audio stream is also delayed about 10-15 seconds compared to the video stream. I put together an imgur album of my settings. Here is some example output from when I'm streaming:

avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
  built on Nov 6 2012 16:51:11 with gcc 4.7.2
[x11grab @ 0x162fd80] device: :0.0+570,262 -> display: :0.0 x: 570 y: 262 width: 1280 height: 720
[x11grab @ 0x162fd80] shared memory extension found
[x11grab @ 0x162fd80] Estimating duration from bitrate, this may be inaccurate
Input #0, x11grab, from ':0.0+570,262':
  Duration: N/A, start: 1353181686.735113, bitrate: 884736 kb/s
    Stream #0.0: Video: rawvideo, bgra, 1280x720, 884736 kb/s, 30 tbr, 1000k tbn, 30 tbc
[alsa @ 0x163fce0] capture with some ALSA plugins, especially dsnoop, may hang.
[alsa @ 0x163fce0] Estimating duration from bitrate, this may be inaccurate
Input #1, alsa, from 'pulse':
  Duration: N/A, start: 1353181686.773841, bitrate: N/A
    Stream #1.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p'
[buffer @ 0x1641ec0] w:1280 h:720 pixfmt:bgra
[scale @ 0x1642480] w:1280 h:720 fmt:bgra -> w:852 h:480 fmt:yuv420p flags:0x4
[libx264 @ 0x165ae80] VBV maxrate unspecified, assuming CBR
[libx264 @ 0x165ae80] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
[libx264 @ 0x165ae80] profile Main, level 3.1
[libx264 @ 0x165ae80] 264 - core 123 r2189 35cf912 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=2 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=6 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=4 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=0 b_adapt=1 b_bias=0 direct=1 weightb=0 open_gop=1 weightp=1 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=30 rc=cbr mbtree=1 bitrate=712 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=712 vbv_bufsize=512 nal_hrd=none ip_ratio=1.25 aq=1:1.00
Output #0, flv, to 'rtmp://live.justin.tv/app/live_23011330_Pt1plSRM0z5WVNJ0QmCHvTPmpUnfC4':
  Metadata:
    encoder         : Lavf53.21.0
    Stream #0.0: Video: libx264, yuv420p, 852x480, q=-1--1, 712 kb/s, 1k tbn, 30 tbc
    Stream #0.1: Audio: libmp3lame, 44100 Hz, 2 channels, s16, 712 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo -> libx264)
  Stream #1:0 -> #0:1 (pcm_s16le -> libmp3lame)
Press ctrl-c to stop encoding
frame=   17 fps=  0 q=0.0  size=       0kB time=10000000000.00 bitrate=   0.0kbit
frame=   32 fps= 31 q=0.0  size=       0kB time=10000000000.00 bitrate=   0.0kbit
frame=   40 fps= 23 q=29.0 size=      44kB time=0.03 bitrate=13786.2kbits/s dup=
frame=   47 fps= 21 q=31.0 size=      93kB time=2.73 bitrate= 277.7kbits/s dup=0
frame=   62 fps= 23 q=29.0 size=     160kB time=3.23 bitrate= 406.2kbits/s dup=0
frame=   77 fps= 24 q=23.0 size=     209kB time=3.71 bitrate= 462.5kbits/s dup=0
frame=   92 fps= 25 q=20.0 size=     267kB time=4.91 bitrate= 445.2kbits/s dup=0
frame=  107 fps= 25 q=20.0 size=     318kB time=5.41 bitrate= 482.1kbits/s dup=0
frame=  123 fps= 26 q=18.0 size=     368kB time=5.96 bitrate= 505.7kbits/s dup=0
frame=  139 fps= 26 q=16.0 size=     419kB time=6.48 bitrate= 529.7kbits/s dup=0
frame=  155 fps= 27 q=15.0 size=     473kB time=7.00 bitrate= 553.6kbits/s dup=0
frame=  170 fps= 27 q=14.0 size=     525kB time=7.52 bitrate= 571.7kbits/s dup=0
frame=  180 fps= 25 q=-1.0 Lsize=    652kB time=7.97 bitrate= 670.0kbits/s dup=0 drop=32
// Here I stop the streaming
video:531kB audio:112kB global headers:0kB muxing overhead 1.345945%
[libx264 @ 0x165ae80] frame I:1   Avg QP:30.43 size: 39748
[libx264 @ 0x165ae80] frame P:45  Avg QP:11.37 size: 11110
[libx264 @ 0x165ae80] frame B:134 Avg QP:15.93 size:    27
[libx264 @ 0x165ae80] consecutive B-frames: 0.6% 0.0% 1.7% 97.8%
[libx264 @ 0x165ae80] mb I  I16..4: 7.3% 0.0% 92.7%
[libx264 @ 0x165ae80] mb P  I16..4: 0.1% 0.0% 0.1%  P16..4: 49.1% 1.2% 2.1% 0.0% 0.0%  skip:47.4%
[libx264 @ 0x165ae80] mb B  I16..4: 0.0% 0.0% 0.0%  B16..8: 0.1% 0.0% 0.0%  direct: 0.0%  skip:99.9%  L0:42.5% L1:56.9% BI: 0.6%
[libx264 @ 0x165ae80] coded y,uvDC,uvAC intra: 82.3% 87.4% 71.9% inter: 7.1% 8.4% 7.0%
[libx264 @ 0x165ae80] i16 v,h,dc,p: 27% 29% 16% 28%
[libx264 @ 0x165ae80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 21% 14% 8% 8% 8% 7% 5% 7%
[libx264 @ 0x165ae80] i8c dc,h,v,p: 47% 22% 20% 11%
[libx264 @ 0x165ae80] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x165ae80] ref P L0: 96.4% 3.6%
[libx264 @ 0x165ae80] kb/s:474.19
Received signal 2: terminating.

Any ideas on how I can resolve this? The video delay is perfectly acceptable, so I wouldn't think that it's a network issue that's causing the delay in the audio. Any help would be appreciated.
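Edit: for reference, this is the shape of the invocation I have been experimenting with, with two audio-sync knobs added. The -itsoffset value and the 128k audio bitrate are guesses to tune, not a known fix, and the stream key is a placeholder:

# sketch only: -itsoffset shifts the (roughly 10 s late) audio input earlier,
# and -async 1 lets the encoder stretch the audio to follow the timestamps
avconv \
  -f alsa -itsoffset -10 -i pulse \
  -f x11grab -r 30 -s 1280x720 -i :0.0+570,262 \
  -vcodec libx264 -b:v 712k \
  -acodec libmp3lame -ar 44100 -b:a 128k -async 1 \
  -threads 4 -f flv rtmp://live.justin.tv/app/STREAM_KEY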

    Read the article

  • Questions related to writing your own file downloader using multiple threads java

    - by Shekhar
Hello. In my current company, I am doing a PoC on how we can write a file downloader utility. We have to use socket programming (TCP/IP) for downloading the files. One of the requirements of the client is that a file (which will be large in size) should be transferred in chunks, for example if we have a file of 5 MB size then we can have 5 threads which transfer 1 MB each. I have written a small application which downloads a file. You can download the eclipse project from http://www.fileflyer.com/view/QM1JSC0

A brief explanation of my classes:

FileSender.java: This class provides the bytes of the file. It has a method called sendBytesOfFile(long start, long end, long sequenceNo) which returns the requested range of bytes.

import java.io.File;
import java.io.IOException;
import java.util.zip.CRC32;

import org.apache.commons.io.FileUtils;

public class FileSender {

    private static final String FILE_NAME = "C:\\shared\\test.pdf";

    public ByteArrayWrapper sendBytesOfFile(long start, long end, long sequenceNo) {
        try {
            File file = new File(FILE_NAME);
            byte[] fileBytes = FileUtils.readFileToByteArray(file);
            System.out.println("Size of file is " + fileBytes.length);
            System.out.println();
            System.out.println("Start " + start + " end " + end);
            byte[] bytes = getByteArray(fileBytes, start, end);
            ByteArrayWrapper wrapper = new ByteArrayWrapper(bytes, sequenceNo);
            return wrapper;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private byte[] getByteArray(byte[] bytes, long start, long end) {
        long arrayLength = end - start;
        System.out.println("Start : " + start + " end : " + end
                + " Arraylength : " + arrayLength
                + " length of source array : " + bytes.length);
        byte[] arr = new byte[(int) arrayLength];
        for (int i = (int) start, j = 0; i < end; i++, j++) {
            arr[j] = bytes[i];
        }
        return arr;
    }

    public static long fileSize() {
        File file = new File(FILE_NAME);
        return file.length();
    }
}

The second class is FileReceiver.java: this class receives the file. A small explanation of what it does:
- It finds the size of the file to be fetched from the sender.
- Depending upon the size of the file, it finds the start and end positions up to which the bytes need to be read.
- It starts n threads, giving each thread a start, an end, a sequence number, and a list which all the threads share.
- Each thread reads its range of bytes and creates a ByteArrayWrapper.
- The ByteArrayWrapper objects are added to the list.
- Then there is a while loop which makes sure that all the threads have completed their work before processing further.
- Finally it sorts the list based on the sequence number; the bytes are joined, and a complete byte array is formed which is converted to a file.
Code of FileReceiver:

package com.filedownloader;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.zip.CRC32;

import org.apache.commons.io.FileUtils;

public class FileReceiver {

    public static void main(String[] args) {
        FileReceiver receiver = new FileReceiver();
        receiver.receiveFile();
    }

    public void receiveFile() {
        long startTime = System.currentTimeMillis();
        long numberOfThreads = 10;
        long filesize = FileSender.fileSize();
        System.out.println("File size received " + filesize);
        long start = filesize / numberOfThreads;
        List<ByteArrayWrapper> list = new ArrayList<ByteArrayWrapper>();
        for (long threadCount = 0; threadCount < numberOfThreads; threadCount++) {
            FileDownloaderTask task = new FileDownloaderTask(threadCount * start,
                    (threadCount + 1) * start, threadCount, list);
            new Thread(task).start();
        }
        while (list.size() != numberOfThreads) {
            // this is done so that all the threads should complete their work before processing further.
            //System.out.println("Waiting for threads to complete. List size " + list.size());
        }
        if (list.size() == numberOfThreads) {
            System.out.println("All bytes received " + list);
            Collections.sort(list, new Comparator<ByteArrayWrapper>() {
                @Override
                public int compare(ByteArrayWrapper o1, ByteArrayWrapper o2) {
                    long sequence1 = o1.getSequence();
                    long sequence2 = o2.getSequence();
                    if (sequence1 < sequence2) {
                        return -1;
                    } else if (sequence1 > sequence2) {
                        return 1;
                    } else {
                        return 0;
                    }
                }
            });
            byte[] totalBytes = list.get(0).getBytes();
            byte[] firstArr = null;
            byte[] secondArr = null;
            for (int i = 1; i < list.size(); i++) {
                firstArr = totalBytes;
                secondArr = list.get(i).getBytes();
                totalBytes = concat(firstArr, secondArr);
            }
            System.out.println(totalBytes.length);
            convertToFile(totalBytes, "c:\\tmp\\test.pdf");
            long endTime = System.currentTimeMillis();
            System.out.println("Total time taken with " + numberOfThreads
                    + " threads is " + (endTime - startTime) + " ms");
        }
    }

    private byte[] concat(byte[] A, byte[] B) {
        byte[] C = new byte[A.length + B.length];
        System.arraycopy(A, 0, C, 0, A.length);
        System.arraycopy(B, 0, C, A.length, B.length);
        return C;
    }

    private void convertToFile(byte[] totalBytes, String name) {
        try {
            FileUtils.writeByteArrayToFile(new File(name), totalBytes);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}

Code of ByteArrayWrapper:

package com.filedownloader;

import java.io.Serializable;

public class ByteArrayWrapper implements Serializable {

    private static final long serialVersionUID = 3499562855188457886L;

    private byte[] bytes;
    private long sequence;

    public ByteArrayWrapper(byte[] bytes, long sequenceNo) {
        this.bytes = bytes;
        this.sequence = sequenceNo;
    }

    public byte[] getBytes() {
        return bytes;
    }

    public long getSequence() {
        return sequence;
    }
}

Code of FileDownloaderTask:

import java.util.List;

public class FileDownloaderTask implements Runnable {

    private List<ByteArrayWrapper> list;
    private long start;
    private long end;
    private long sequenceNo;

    public FileDownloaderTask(long start, long end, long sequenceNo, List<ByteArrayWrapper> list) {
        this.list = list;
        this.start = start;
        this.end = end;
        this.sequenceNo = sequenceNo;
    }

    @Override
    public void run() {
        ByteArrayWrapper wrapper = new FileSender().sendBytesOfFile(start, end, sequenceNo);
        list.add(wrapper);
    }
}

Questions related to this code:
1) Does file downloading become faster when multiple threads are used? In this code I am not able to see the benefit.
2) How should I decide how many threads I should create?
3) Are there any open source libraries which do that?
4) The file which the receiver receives is valid and not corrupted, but the checksum (I used FileUtils of commons-io) does not match. What's the problem?
5) This code gives an out-of-memory error when used with a large file (above 100 MB), because of the byte arrays which are created. How can I avoid that?

I know this is very bad code, but I had to write this in one day -:). Please suggest any other good way to do this? Thanks, Shekhar
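Edit: one idea I am toying with for 4) and 5), as a rough, untested sketch. If filesize is not an exact multiple of numberOfThreads, the integer division filesize / numberOfThreads silently drops the last filesize % numberOfThreads bytes, which would explain the checksum mismatch. Writing each chunk straight to its own offset with RandomAccessFile avoids building the whole file in one byte array, and a CountDownLatch replaces the busy-wait (the plain ArrayList is not safe to mutate from several threads anyway). The class name and output path below are illustrative only; FileSender and ByteArrayWrapper are the classes from above:

import java.io.RandomAccessFile;
import java.util.concurrent.CountDownLatch;

public class ChunkedReceiverSketch {

    public static void main(String[] args) throws Exception {
        final int numberOfThreads = 10;
        final long fileSize = FileSender.fileSize();
        final long chunk = fileSize / numberOfThreads;
        final CountDownLatch done = new CountDownLatch(numberOfThreads);
        final RandomAccessFile out = new RandomAccessFile("c:\\tmp\\test.pdf", "rw");

        for (int t = 0; t < numberOfThreads; t++) {
            final long start = t * chunk;
            // the last thread picks up the remainder, so no trailing bytes are lost
            final long end = (t == numberOfThreads - 1) ? fileSize : start + chunk;
            final long seq = t;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        byte[] bytes = new FileSender().sendBytesOfFile(start, end, seq).getBytes();
                        synchronized (out) {          // serialize writes to the shared file
                            out.seek(start);          // jump to this chunk's offset
                            out.write(bytes);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();          // sketch: real code would propagate this
                    } finally {
                        done.countDown();             // release the latch even on failure
                    }
                }
            }).start();
        }

        done.await();   // block until every chunk is written, instead of spinning on list.size()
        out.close();
    }
}

Note that FileSender above would still read the whole file into memory on its side, so for really large files that end would need the same streaming treatment.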

    Read the article

  • Could not locate compojure in classpath

    - by Xian
I am trying the various Getting Started examples and I can get a basic hello world example working with basic HTML in the route, as such:

(ns hello-world
  (:use compojure.core
        ring.adapter.jetty)
  (:require [compojure.route :as route]))

(defroutes example
  (GET "/" [] "<h1>Hello World Wide Web!</h1>"))

(run-jetty example {:port 8080})

But when I attempt to use the HTML helpers like so:

(ns hello-world
  (:use compojure
        ring.adapter.jetty)
  (:require [compojure.route :as route]))

(defroutes example
  (GET "/" (html [:h1 "Hello World"])))

(run-jetty example {:port 8080})

Then I get the following error:

[null] Exception in thread "main" java.io.FileNotFoundException: Could not locate compojure__init.class or compojure.clj on classpath: (core.clj:1)
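Edit: from what I can tell, newer Compojure versions split up the old monolithic compojure namespace (which is why (:use compojure) cannot be resolved), and the html helper moved to the separate hiccup library. Something like the following might work; this is an untested sketch that assumes hiccup has been added as a dependency:

(ns hello-world
  (:use compojure.core
        ring.adapter.jetty
        hiccup.core)            ; html now lives in hiccup, not compojure
  (:require [compojure.route :as route]))

(defroutes example
  ;; note the added [] binding vector required by current Compojure
  (GET "/" [] (html [:h1 "Hello World"])))

(run-jetty example {:port 8080})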

    Read the article

  • Adventures in Windows 8: Understanding and debugging design time data in Expression Blend

    - by Laurent Bugnion
One of my favorite features in Expression Blend is the ability to attach a Visual Studio debugger to Blend. First let's start by answering the question: why exactly do you want to do that? Note: if you are familiar with the creation and usage of design time data, feel free to scroll down to the paragraph titled "When design time data fails".

Creating design time data for your app

When a designer works on an app, he needs to see something to design. For "static" UI such as buttons, backgrounds, etc., the user interface elements are going to show up in Blend just fine. If however the data is fetched dynamically from a service (web, database, etc.) or created dynamically, Blend is most probably going to show just an empty element. The classical way to design at that stage is to run the application, navigate to the screen that is under construction (which can involve delays, the need to log in, etc.), measure what is on the screen (colors, margins, width and height, etc.) using various tools, go back to Blend, edit the properties of the elements, run again, and so on. Obviously this is not ideal. The solution is to create design time data. For more information about the creation of design time data by mocking services, you can refer to two talks of mine, "Deep dive MVVM" and "MVVM Applied From Silverlight to Windows Phone to Windows 8". The source code for these talks is here and here.

Design time data in MVVM Light

One of the main reasons why I developed MVVM Light is to facilitate the creation of design time data. To illustrate this, let's create a new MVVM Light application in Visual Studio.

1. Install MVVM Light from here: http://mvvmlight.codeplex.com (use the MSI in the Download section). After installing, make sure to read the Readme that opens up in your favorite browser; you will need one more step to install the project templates.
2. Start Visual Studio 2012.
3. Create a new MvvmLight (Win8) app.
4. Run the application. You will see a string showing "Welcome to MVVM Light".
5. In the Solution Explorer, right-click on MainPage.xaml and select Open in Blend. Now you should see "Welcome to MVVM Light [Design]".

What happens here is that Expression Blend runs different code at design time than the application runs at runtime. To do this, we use design-time detection (as explained in a previous article) and use that information to initialize a different data service at design time. To understand this better, open the ViewModelLocator.cs file in the ViewModel folder and see how the DesignDataService is used at design time, while the DataService is used at runtime. In a real-life application, DataService would be used to connect to a web service, for instance; the locator wiring is sketched below.
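From memory, the generated locator does something along these lines (a sketch; the exact code in the current template may differ slightly, and the usings are omitted):

// Sketch of the MvvmLight (Win8) ViewModelLocator, quoted from memory.
// SimpleIoc and IsInDesignModeStatic ship with MVVM Light.
public class ViewModelLocator
{
    public ViewModelLocator()
    {
        ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);

        if (ViewModelBase.IsInDesignModeStatic)
        {
            // Blend and the VS designer get canned design time data
            SimpleIoc.Default.Register<IDataService, Design.DesignDataService>();
        }
        else
        {
            // the running app talks to the real data service
            SimpleIoc.Default.Register<IDataService, DataService>();
        }

        SimpleIoc.Default.Register<MainViewModel>();
    }

    public MainViewModel Main
    {
        get { return ServiceLocator.Current.GetInstance<MainViewModel>(); }
    }
}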
When design time data fails

Sometimes, however, the creation of design time data fails. It can be very difficult to understand exactly what is happening, and Expression Blend does not give a lot of information about what went wrong. Thankfully, we can use a trick: attaching a debugger to Expression Blend and debugging the design time code. In WPF and Silverlight (including Windows Phone 7), you could simply attach the debugger to Blend.exe (using the "Managed (v4.5, v4.0) code" option, even for Silverlight!!). In Windows 8, however, things are just a bit different, because the designer that renders the actual representation of the Windows 8 app runs in its own process. Let's illustrate that:

1. Open the file DesignDataService in the Design folder.
2. Modify the GetData method to look like this:

public void GetData(Action<DataItem, Exception> callback)
{
    throw new Exception();

    // Use this to create design time data
    var item = new DataItem("Welcome to MVVM Light [design]");
    callback(item, null);
}

3. Go to Blend and build the application. The build succeeds, but now the page is empty. The creation of the design time data failed, but we don't get a warning message. We need to investigate what's wrong.
4. Close MainPage.xaml.
5. Go to Visual Studio and select the menu Debug, Attach to Process. (Update: make sure that you select "Managed (v4.5, v4.0) code" in the "Attach to" field.)
6. Find the process named XDesProc.exe. You should have at least two: one for the Visual Studio 2012 designer surface, and one for Expression Blend. Unfortunately, in this screen it is not obvious which is which. Let's find out in the Task Manager.
7. Press Ctrl-Alt-Del and select Task Manager.
8. Go to the Details tab and sort the processes by name.
9. Find the one that says "Blend for Microsoft Visual Studio 2012 XAML UI Designer" and write down the process ID.
10. Go back to the Attach to Process dialog in Visual Studio.
11. Sort the processes by ID and attach the debugger to the correct instance of XDesProc.exe.
12. Open the MainViewModel (in the ViewModel folder).
13. Place a breakpoint on the first line of the MainViewModel constructor.
14. Go to Blend and open MainPage.xaml again.

At this point, the debugger breaks in Visual Studio and you can execute your code step by step. Simply step inside the data service call and find the exception that you had placed there. Visual Studio gives you additional information which helps you to solve the issue.

More info and Conclusion

I want to thank the amazing people on the Expression Blend team for being very fast in guiding me in that matter and encouraging me to blog about it. More information about the XDesProc.exe process can be found here. I had to work on a Windows 8 app for a few days without design time data because of an exception thrown somewhere in the code, and it was really painful. With the debugger, finding the issue was a simple matter of stepping into the code until it threw the exception.

Laurent Bugnion (GalaSoft)
Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • rotating bitmaps. In code.

    - by Marco van de Voort
Is there a faster way to rotate a large bitmap by 90 or 270 degrees than simply doing a nested loop with inverted coordinates? The bitmaps are 8 bpp and typically 2048*2400*8bpp. Currently I do this by simply copying with argument inversion, roughly (pseudo code):

for x = 0 to 2048-1
  for y = 0 to 2048-1
    dest[x][y] = src[y][x];

(In reality I do it with pointers, for a bit more speed, but that is roughly the same magnitude.) GDI is quite slow with large images, and GPU load/store times for textures (GF7 cards) are in the same magnitude as the current CPU time. Any tips, pointers? An in-place algorithm would even be better, but speed is more important than being in-place. Target is Delphi, but it is more an algorithmic question. SSE(2) vectorization is no problem; it is a big enough problem for me to code it in assembler.

Duplicate: How do you rotate a two dimensional array?

Follow-up to Nils' answer:
- Image: 2048x2700 -> 2700x2048
- Compiler: Turbo Explorer 2006 with optimization on
- Windows: power scheme set to "Always on" (important!!!!)
- Machine: Core2 6600 (2.4 GHz)
- time with old routine: 32 ms (step 1)
- time with stepsize 8: 12 ms
- time with stepsize 16: 10 ms
- time with stepsize 32+: 9 ms

Meanwhile I also tested on an Athlon 64 X2 (5200+ iirc), and the speedup there was slightly more than a factor of four (80 to 19 ms). The speedup is well worth it, thanks. Maybe during the summer months I'll torture myself with a SSE(2) version. However I already thought about how to tackle that, and I think I'll run out of SSE2 registers for a straight implementation:

for n := 0 to 7 do
begin
  load r0, <source + n*rowsize>
  shift byte from r0 into r1
  shift byte from r0 into r2
  ..
  shift byte from r0 into r8
end;
store r1, <target>
store r2, <target + 1*rowsize>
..
store r8, <target + 7*rowsize>

So 8x8 needs 9 registers, but 32-bit SSE only has 8. Anyway, that is something for the summer months :-)

Note that the pointer thing is something that I do out of instinct, but there could actually be something to it: if your dimensions are not hardcoded, the compiler can't turn the mul into a shift. While muls as such are cheap nowadays, they also generate more register pressure afaik.

The code (validated by subtracting the result from the "naive" rotate1 implementation):

const
  stepsize = 32;

procedure rotatealign(Source: tbw8image; Target: tbw8image);
var
  stepsx, stepsy, restx, resty: Integer;
  RowPitchSource, RowPitchTarget: Integer;
  pSource, pTarget, ps1, ps2: pchar;
  x, y, i, j: integer;
  rpstep: integer;
begin
  RowPitchSource := source.RowPitch; // bytes to jump to next line. Can be negative (includes alignment)
  RowPitchTarget := target.RowPitch;
  rpstep := RowPitchTarget * stepsize;
  stepsx := source.ImageWidth div stepsize;
  stepsy := source.ImageHeight div stepsize;
  // check if mod 16=0 here for both dimensions, if so -> SSE2.
  for y := 0 to stepsy - 1 do
  begin
    psource := source.GetImagePointer(0, y * stepsize); // gets pointer to pixel x,y
    ptarget := Target.GetImagePointer(target.imagewidth - (y + 1) * stepsize, 0);
    for x := 0 to stepsx - 1 do
    begin
      for i := 0 to stepsize - 1 do
      begin
        ps1 := @psource[rowpitchsource * i]; // ( 0,i)
        ps2 := @ptarget[stepsize - 1 - i];   // (maxx-i,0);
        for j := 0 to stepsize - 1 do
        begin
          ps2[0] := ps1[j];
          inc(ps2, RowPitchTarget);
        end;
      end;
      inc(psource, stepsize);
      inc(ptarget, rpstep);
    end;
  end;
  // 3 more areas to do, with dimensions
  // - stepsy*stepsize * restx   // right-most column of restx width
  // - stepsx*stepsize * resty   // bottom row with resty height
  // - restx*resty               // bottom-right rectangle.
  restx := source.ImageWidth mod stepsize; // typically zero because width is
                                           // typically 1024 or 2048
  resty := source.Imageheight mod stepsize;
  if restx > 0 then
  begin
    // one loop less, since we know this fits in one line of "blocks"
    psource := source.GetImagePointer(source.ImageWidth - restx, 0); // gets pointer to pixel x,y
    ptarget := Target.GetImagePointer(Target.imagewidth - stepsize, Target.imageheight - restx);
    for y := 0 to stepsy - 1 do
    begin
      for i := 0 to stepsize - 1 do
      begin
        ps1 := @psource[rowpitchsource * i]; // ( 0,i)
        ps2 := @ptarget[stepsize - 1 - i];   // (maxx-i,0);
        for j := 0 to restx - 1 do
        begin
          ps2[0] := ps1[j];
          inc(ps2, RowPitchTarget);
        end;
      end;
      inc(psource, stepsize * RowPitchSource);
      dec(ptarget, stepsize);
    end;
  end;
  if resty > 0 then
  begin
    // one loop less, since we know this fits in one line of "blocks"
    psource := source.GetImagePointer(0, source.ImageHeight - resty); // gets pointer to pixel x,y
    ptarget := Target.GetImagePointer(0, 0);
    for x := 0 to stepsx - 1 do
    begin
      for i := 0 to resty - 1 do
      begin
        ps1 := @psource[rowpitchsource * i]; // ( 0,i)
        ps2 := @ptarget[resty - 1 - i];      // (maxx-i,0);
        for j := 0 to stepsize - 1 do
        begin
          ps2[0] := ps1[j];
          inc(ps2, RowPitchTarget);
        end;
      end;
      inc(psource, stepsize);
      inc(ptarget, rpstep);
    end;
  end;
  if (resty > 0) and (restx > 0) then
  begin
    // another loop less, since only one block
    psource := source.GetImagePointer(source.ImageWidth - restx, source.ImageHeight - resty); // gets pointer to pixel x,y
    ptarget := Target.GetImagePointer(0, target.ImageHeight - restx);
    for i := 0 to resty - 1 do
    begin
      ps1 := @psource[rowpitchsource * i]; // ( 0,i)
      ps2 := @ptarget[resty - 1 - i];      // (maxx-i,0);
      for j := 0 to restx - 1 do
      begin
        ps2[0] := ps1[j];
        inc(ps2, RowPitchTarget);
      end;
    end;
  end;
end;

    Read the article

  • Where Facebook Stands Heading Into 2013

    - by Mike Stiles
In our last blog, we looked at how Twitter is positioned heading into 2013. Now it's time to take a similar look at Facebook. 2012, for a time at least, seemed to be the era of Facebook-bashing. Between a far-from-smooth IPO, subsequent stock price declines, and anxiety over privacy, the top social network became a target for comedians, politicians, business journalists, and of course those who were prone to Facebook-bash even in the best of times. But amidst the "this is the end of Facebook" headlines, the company kept experimenting, kept testing, kept innovating, and pressing forward, committed as always to the user experience, while concurrently addressing monetization with greater urgency.

Facebook enters 2013 with over 1 billion users around the world. Usage grew 41% in Brazil, Russia, Japan, South Korea and India in 2012. In the Middle East and North Africa, an average 21 new signups happen per minute. Engagement and time spent on the site would impress the harshest of critics. Facebook, while not bulletproof, has become such an integrated daily force in users' lives, it's getting hard to imagine any future mass rejection.

You want to see a company recognizing weaknesses and shoring them up. Mobile was a weakness in 2012, as Facebook was one of many caught by surprise at the speed of user migration to mobile. But new mobile interfaces, better mobile ads, speed upgrades, standalone Messenger and Pages mobile apps, and the big dollar acquisition of Instagram were a few indicators Facebook won't play catch-up any more than it has to.

As a user, the cool thing about Facebook is, it knows you. The uncool thing about Facebook is, it knows you. The company's walking a delicate line between the public's competing desires for customized experiences and privacy. While the company's working to make privacy options clearer and easier, Facebook's Paul Adams says data aggregation can move from acting on what a user is engaging with at the moment to a more holistic view of what they're likely to want at any given time.

To help learn about you, there's Open Graph. Embedded through diverse partnerships, the idea is to surface what you're doing and what you care about, and help you discover things via your friends' activities. Facebook's Director of Engineering, Mike Vernal, says building mobile social apps connected to Facebook in such ways is the next wave of big innovation. Expect to see that fostered in 2013.

The Facebook site experience is always evolving. Some users like that about Facebook; others can't wait to complain about it... on Facebook. The Facebook focal point, the News Feed, is not sacred and is seeing plenty of experimentation with the insertion of modules. From upcoming concerts, events, suggested Pages you might like, to aggregated "most shared" content from social reader apps, plenty could start popping up between those pictures of what your friends had for lunch.

As for which friends' lunches you see, that's a function of the mythic EdgeRank... which is also tinkered with. When Facebook changed it in September, Page admins saw reach go down and the high anxiety set in quickly. Engagement, however, held steady. The adjustment was about relevancy over reach. (And oh yeah, reach was something that could be charged for.) Facebook wants users to see what they're most likely to like, based on past usage and interactions. Adding to the "cream must rise to the top" philosophy, they're now even trying out ordering post comments based on the engagement the comments get.

Boy, it's getting competitive out there for a social engager. Facebook has to make $$$. To do that, they must offer attractive vehicles to marketers. There are a myriad of ad units. But a key Facebook marketing concept is the Sponsored Story. It's key because it encourages content that's good, relevant, and performs well organically. If it is, marketing dollars can amplify it and extend its reach.

Brands can expect the rollout of a search product and an ad network. That's a big deal. It takes, as Open Graph does, the power of Facebook's user data and carries it beyond the Facebook environment into the digital world at large. No one could target like Facebook can, and some analysts think it could double their roughly $5 billion revenue stream.

As every potential revenue nook and cranny is explored, there are the users themselves. In addition to Gifts, Facebook thinks users might pay a few bucks to promote their own posts so more of their friends will see them. There's also word classifieds could be purchased in News Feeds, though they won't be called classifieds.

And that's where Facebook stands: a wildly popular destination, a part of our culture, with ever increasing functionalities, the biggest of big data, revenue strategies that appeal to marketers without souring the user experience, new challenges as a now public company, ongoing privacy concerns, and innovations that carry Facebook far beyond its own borders. Anyone care to write a "this is the end of Facebook" headline?

@mikestiles
Photo via stock.xchng

    Read the article
