Search Results

Search found 1599 results on 64 pages for 'carlos fernandez san millan'.

Page 15/64 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • It Ain't Over 'Til It's Over

    - by Oracle OpenWorld Blog Team
    Oracle OpenWorld 2012 is behind us. Well, for San Francisco, anyhow. The team is already working on the Latin America event which takes place in December in Sao Paulo, and an OpenWorld in Asia for 2013 as well. And of course they're already working on the next San Francisco OpenWorld for 2013. So what happens after the conference is over? People pack up demo and network gear and ship it out to wherever it's going next; take down and recycle signage; strike the keynote set, the exhibition and demo halls, the street tents, and anything else that was constructed just for the conference. There's a lot of post-conference analysis going on too. Oracle and partner marketing teams are looking at and following up on the leads they got from booth, demo, and lounge traffic. The events team is evaluating the session and conference surveys you filled out if you attended -- looking to identify the best speakers, what worked and didn't work, how you liked the venues, the food, the entertainment, the presentations. From all of that information will come recommendations for next year on what to keep doing, what to do better, and what not to do at all. The goal for each year's conference is to be better than last year's. If you attended and haven't filled out the surveys yet, you have until October 19 for them to be counted, and for you to be entered into a daily sweepstakes. Click here for more information. Posts to this blog will slow down for a while, but we'll post news about Oracle OpenWorld in San Francisco and around the world when we have it. Any suggestions about future blog topics are welcome. Oh - I forgot to mention that you can sign up to be notified when registration for Oracle OpenWorld 2013 goes live. If you register at that time you'll get the best discount available on attending next year. So sign up, and stay tuned.

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After having an excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not? – I have decided to ask another puzzling question to all of you. I am running this test on SQL Server 2008 R2. Here is the quick scenario about my setup. Create a table. Insert 1,000 records. Check the statistics. Now insert 10 times more records – 10,000 rows. Check the statistics – they will NOT be updated. Note: Auto Update Statistics and Auto Create Statistics for the database are TRUE. Expected result – statistics should be updated – SQL SERVER – When are Statistics Updated – What triggers Statistics to Update. Now the question is: why are the statistics not updated? The common answer is – we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL. However, the solution I am looking for is one where statistics are updated automatically based on the algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to participate here as he is the one who helped me build this puzzle. I will publish the solution next week. Please leave a comment, and if your comment contains a valid answer, I will publish it with due credit. Here is the script to reproduce the scenario which I mentioned.

      -- Execution Plans Difference
      -- Create Sample Database
      CREATE DATABASE SampleDB
      GO
      USE SampleDB
      GO
      -- Create Table
      CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
      GO
      -- Insert One Thousand Records
      -- INSERT 1
      INSERT INTO ExecTable (ID,FirstName,LastName,City)
      SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
      'Bob',
      CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
      CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
      ELSE 'Houston' END
      FROM sys.all_objects a
      CROSS JOIN sys.all_objects b
      GO
      -- Display statistics of the table - none listed
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Select Statement
      SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
      GO
      -- Display statistics of the table
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Replace your Statistics over here
      -- NOTE: Replace your _WA_Sys with stats from above query
      DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
      GO
      --------------------------------------------------------------
      -- Round 2
      -- Insert Ten Thousand Records
      -- INSERT 2
      INSERT INTO ExecTable (ID,FirstName,LastName,City)
      SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
      'Bob',
      CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
      CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
      ELSE 'Houston' END
      FROM sys.all_objects a
      CROSS JOIN sys.all_objects b
      GO
      -- Select Statement
      SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
      GO
      -- Display statistics of the table
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Replace your Statistics over here
      -- NOTE: Replace your _WA_Sys with stats from above query
      DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
      GO
      -- You will notice that Statistics are still updated with 1000 rows
      -- Clean up Database
      DROP TABLE ExecTable
      GO
      USE MASTER
      GO
      ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
      GO
      DROP DATABASE SampleDB
      GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics
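
    For readers who want to poke at the statistics themselves, here is a minimal T-SQL sketch of the kind of check described above (it assumes the ExecTable from the script; the UPDATE STATISTICS call is the manual workaround the post quotes, not the automatic solution being asked for):

        -- When was each statistic on the table last updated?
        SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('ExecTable');
        GO

        -- Manual workaround quoted in the post: force a full-scan update of all statistics
        UPDATE STATISTICS ExecTable WITH FULLSCAN, ALL;
        GO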

    Read the article

  • Easy and Rapid Deployment of Application Workloads with Oracle VM

    - by Antoinette O'Sullivan
    Oracle VM is designed for easy and rapid deployment of application workloads. In addition to allowing for rapid deployment of an entire application stack, Oracle VM now gives administrators more fine-grained control of the application payloads inside the virtual machine. To get started on Oracle VM Server for x86 or Oracle VM Server for SPARC, what better solution than to take the corresponding training course. You can take this training from your own desk, by choosing from a selection of live-virtual events already on the schedule on the Oracle University Portal. Alternatively, you can travel to an education center to take these courses. Below is a selection of in-class events already on the schedule for each course:

    Oracle VM Administration: Oracle VM Server for x86 (Location | Date | Delivery Language)
      Paris, France | 11 December 2013 | French
      Rome, Italy | 22 April 2014 | Italian
      Budapest, Hungary | 4 November 2013 | Hungarian
      Riga, Latvia | 3 February 2014 | Latvian
      Oslo, Norway | 9 December 2013 | English
      Warsaw, Poland | 12 February 2014 | Polish
      Ljubljana, Slovenia | 25 November 2013 | Slovenian
      Barcelona, Spain | 29 October 2013 | Spanish
      Istanbul, Turkey | 23 December 2013 | Turkish
      Cairo, Egypt | 1 December 2013 | Arabic
      Johannesburg, South Africa | 9 December 2013 | English
      Melbourne, Australia | 12 February 2014 | English
      Sydney, Australia | 25 November 2013 | English
      Singapore | 27 November 2013 | English
      Montreal, Canada | 18 February 2014 | English
      Ottawa, Canada | 18 February 2014 | English
      Toronto, Canada | 18 February 2014 | English
      Phoenix, AZ, United States | 18 February 2014 | English
      Sacramento, CA, United States | 18 February 2014 | English
      San Francisco, CA, United States | 18 February 2014 | English
      San Jose, CA, United States | 18 February 2014 | English
      Denver, CO, United States | 22 January 2014 | English
      Roseville, MN, United States | 10 February 2014 | English
      Edison, NJ, United States | 18 February 2014 | English
      King of Prussia, PA, United States | 18 February 2014 | English
      Reston, VA, United States | 26 March 2014 | English

    Oracle VM Server for SPARC: Installation and Configuration (Location | Date | Delivery Language)
      Prague, Czech Republic | 2 December 2013 | Czech
      Paris, France | 9 December 2013 | French
      Utrecht, Netherlands | 9 December 2013 | Dutch
      Madrid, Spain | 28 November 2013 | Spanish
      Dubai, United Arab Emirates | 5 February 2014 | English
      Melbourne, Australia | 31 October 2013 | English
      Sydney, Australia | 10 February 2014 | English
      Tokyo, Japan | 6 February 2014 | Japanese
      Petaling Jaya, Malaysia | 23 December 2013 | English
      Auckland, New Zealand | 21 November 2013 | English
      Singapore | 7 November 2013 | English
      Toronto, Canada | 25 November 2013 | English
      Sacramento, CA, United States | 2 December 2013 | English
      San Francisco, CA, United States | 2 December 2013 | English
      San Jose, CA, United States | 2 December 2013 | English
      Caracas, Venezuela | 5 November 2013 | Spanish

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type. From Book On-Line: Occurs when a task is waiting for I/Os to finish.

    ASYNC_IO_COMPLETION Explanation: A task is waiting for I/O to finish. If the application that is connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE or other operations can also create this wait type.

    Reducing ASYNC_IO_COMPLETION wait: When it is an issue related to IO, one should check the following things associated with the IO subsystem:
      - Look at the programming and see if there is any application code which processes the data slowly (like an inefficient loop, etc.). It should be re-written to avoid this wait type.
      - Proper placement of the files is very important. We should check the file system for proper placement of the files – LDF and MDF on separate drives, TempDB on another separate drive, hot spot tables on a separate filegroup (and on a separate disk), etc.
      - Check the File Statistics and see if there is a higher IO Read and IO Write Stall: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
      - Check the event log and error log for any errors or warnings related to IO.
      - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects the SAN was performing really badly, but the SAN administrator did not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development setup (test environment). As soon as we changed the HBA Queue Depth to quite a higher value, there was a sudden big improvement in the performance.
      - It is very likely that there are no proper indexes on the system and yet there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (considering the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.

    Checking Memory Related Perfmon Counters:
      - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
      - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value, benchmark)
      - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better, greater than 90% for a usually smooth-running system)
      - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
      - Memory: Available Mbytes (information only)
      - Memory: Page Faults/sec (benchmark only)
      - Memory: Pages/sec (benchmark only)

    Checking Disk Related Perfmon Counters:
      - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
      - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)

    Read all the posts in the Wait Types and Queue series.
Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
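
    As a starting point for the checks listed above, a small hedged sketch that reads the accumulated wait statistics for this wait type (sys.dm_os_wait_stats has been available since SQL Server 2005; clearing the counters is optional and affects the whole instance):

        -- How much time has the instance spent waiting on ASYNC_IO_COMPLETION?
        SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'ASYNC_IO_COMPLETION';

        -- Optionally reset the counters before re-measuring a workload
        -- DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);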

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type. From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits.

    IO_COMPLETION Explanation: A task is waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at.

    Reducing IO_COMPLETION wait: When it is an issue concerning IO, one should look at the following things related to the IO subsystem:
      - Proper placement of the files is very important. We should check the file system for proper placement of files – LDF and MDF on separate drives, TempDB on another separate drive, hot spot tables on a separate filegroup (and on a separate disk), etc.
      - Check the File Statistics and see if there is a higher IO Read and IO Write Stall: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
      - Check the event log and error log for any errors or warnings related to IO.
      - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects the SAN was performing really badly, but the SAN administrator did not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup, and as soon as we changed the HBA Queue Depth to quite a higher value, there was a sudden big improvement in the performance.
      - It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (considering the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the two articles that I wrote about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.

    Checking Memory Related Perfmon Counters:
      - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
      - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value, benchmark)
      - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better, greater than 90% for a usually smooth-running system)
      - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
      - Memory: Available Mbytes (information only)
      - Memory: Page Faults/sec (benchmark only)
      - Memory: Pages/sec (benchmark only)

    Checking Disk Related Perfmon Counters:
      - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
      - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology
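
    A hedged sketch of the file-statistics check mentioned above, using sys.dm_io_virtual_file_stats (the DMV equivalent of fn_virtualfilestats, available since SQL Server 2005) to spot the files with the highest read/write stalls:

        -- Per-file IO stalls: large io_stall_* values point at a slow file or volume
        SELECT DB_NAME(vfs.database_id) AS database_name,
               vfs.file_id,
               vfs.num_of_reads,
               vfs.io_stall_read_ms,
               vfs.num_of_writes,
               vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;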

    Read the article

  • Rosewill RSV-S5 and its transfer speeds

    - by DoomStone
    I have just bought a Rosewill RSV-S5 and installed 5 x 1.5 TB Western Digital Green disks in it. After that I created a RAID 5 across them all with the software that came with the hardware. Now, the RAID itself works fine, but it is SLOW: I can only obtain a maximum of 25 MB/s, and if SABnzbd+ is downloading at 5 MB/s it has a hard time streaming a normal DivX (700 MB) movie. Is this normal or is there something wrong? Edit: it should be able to handle 3 Gbps = 384 megabytes/second. Edit 2: As you can see, I am only downloading at 3.76 MB/s and I'm trying to watch V s02e08 (720p), but it is completely unwatchable: I can watch 30 seconds, and then it buffers for 20 seconds. Edit: Other information that might be required: I'm running Windows Server 2008 R2, optimized for program performance. Windows is installed on a 60 GB SSD. I have a 50 Mb/s internet connection and a 1 Gb/s LAN, all connected with Cat6 Ethernet cables. The MCE is using a Gigabyte EP35C-DS3R motherboard with 2 GB of DDR2 RAM. Edit 3: I have used a chunk size of 128 KB. Edit 4: I found this on Newegg. Pros: Enclosure for 5x2TB hard drives is fine. This is basically a rebranded San Digital TR5M-B product. For support, Rosewill tells you to contact San Digital. No direct support from Silicon Image for the computer RAID card. Cons: Includes a Silicon Image 3132 RAID card for the computer; extremely slow RAID 5 writes (our tests: ~10 MB/s). Compare to a regular internal local drive write of 30-60 MB/s. We basically dumped the Sil3132 card and replaced it with a HighPoint RocketRAID 622 card for an extra $69.99. Note for the RR622: turn off ECRC (end-to-end CRC check) for the card to work on an IBM xserver. What took 12 hrs to copy now took 2-3 hrs. San Digital realized the problem and has a newer model, the TR5M-BP TowerRaid Plus, that comes with the HighPoint RocketRAID 622 card. Rosewill should discontinue this product and go with the TR5M-BP. Could not get the Silicon Image RAID management software to work with a complicated 2008 R2 server with 10 NICs; the application doesn't know how to talk to the localhost port with all those NICs. No updates from Silicon Image, and support requests to San Digital were ignored. Gave up on the Sil3132 card. Save yourself a lot of headaches: get the RR622 card too if you are going to buy this product. Other Thoughts: The newer model is the TR5M-BP TowerRaid Plus, which comes with the HighPoint RocketRAID 622 RAID card for the PC instead of the Silicon Image Sil3132. According to San Digital, RAID 5 performance for the Sil3132 is 80 MB/s read and 19 MB/s write, and for the RR622 it is 154 MB/s read and 149 MB/s write. Our RR622 tests gave (8 TB RAID 5) writes of ~80-110 MB/s; copying a 40 GB file took 8 minutes. So I have now ordered a HighPoint RocketRAID 622 2P ext SATA III and hope that it will solve my problems.

    Read the article

  • How to deserialize JSON with GSON

    - by loko
    I have a result from the Yahoo PlaceFinder API http://developer.yahoo.com/geo/placefinder/guide/examples.html, and I need to deserialize the JSON result of the example using only GSON: http://where.yahooapis.com/geocode?location=San+Francisco,+CA&flags=J&appid=yourappid But I don't know how to write the class for deserializing JSON that contains an array. This is the response: {"ResultSet": {"version":"1.0", "Error":0, "ErrorMessage":"No error", "Locale":"en_US", "Quality":40, "Found":1, "Results":[ {"quality":40, "latitude":"37.779160", "longitude":"-122.420049", "offsetlat":"37.779160", "offsetlon":"-122.420049", "radius":5000, "name":"", "line1":"", "line2":"San Francisco, CA", "line3":"", "line4":"United States", "house":"", "street":"", "xstreet":"", "unittype":"", "unit":"", "postal":"", "neighborhood":"", "city":"San Francisco", "county":"San Francisco County", "state":"California", "country":"United States", "countrycode":"US", "statecode":"CA", "countycode":"", "uzip":"94102", "hash":"C1D313AD706E3B3C", "woeid":12587707, "woetype":9}] } } I'm trying to deserialize it this way but I couldn't get it to work; please help me write the correct class to read this JSON with GSON. public class LocationAddress { private ResultSet resultset; public static class ResultSet{ private String version; private String Error; private String ErrorMessage; private List<Results> results; } public static class Results{ private String quality; private String latitude; private String longitude; public String getQuality() { return quality; } public void setQuality(String quality) { this.quality = quality; } public String getLatitude() { return latitude; } public void setLatitude(String latitude) { this.latitude = latitude; } public String getLongitude() { return longitude; } public void setLongitude(String longitude) { this.longitude = longitude; } } }
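
    A minimal sketch of classes whose fields line up with the JSON shown above; the capitalized keys (ResultSet, Error, Results, ...) need @SerializedName because Gson matches field names case-sensitively, and only the fields you actually need have to be declared (class and field names here are illustrative):

        import java.util.List;
        import com.google.gson.Gson;
        import com.google.gson.annotations.SerializedName;

        public class PlaceFinderResponse {
            @SerializedName("ResultSet")
            ResultSet resultSet;

            static class ResultSet {
                String version;
                @SerializedName("Error")
                int error;
                @SerializedName("ErrorMessage")
                String errorMessage;
                @SerializedName("Found")
                int found;
                @SerializedName("Results")
                List<Result> results;          // the JSON array
            }

            static class Result {
                int quality;                   // lower-case keys map directly
                String latitude;
                String longitude;
                String city;
                String state;
                String country;
                int woeid;
            }

            // Usage: parse(jsonString).resultSet.results.get(0).latitude
            public static PlaceFinderResponse parse(String json) {
                return new Gson().fromJson(json, PlaceFinderResponse.class);
            }
        }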

    Read the article

  • The People Who Support Linux

    <b>Linux.com: </b>"The Linux Foundation's individual members help to support the work of Linux creator Linus Torvalds and other important activities that advance Linux, while getting a variety of other fun and valuable benefits. The series begins with Matthew Fernandez, a senior application developer based in Sydney, Australia. Matthew has been using Linux since 2001 and just recently became a Linux Foundation member."

    Read the article

  • GDD-BR 2010 [2H] Earn Money from your Mobile App with AdMob

    GDD-BR 2010 [2H] Earn Money from your Mobile App with AdMob Speakers: Peter Fernandez Track: Google APIs Time slot: H [17:20 - 18:05] Room: 2 Level: 101 We'll show you different strategies for monetizing your app with AdMob ads and help you figure out how much you can earn. We'll also share enlightening data on the growth of the Android, iPhone and iPad platforms. From: GoogleDevelopers Views: 0 0 ratings Time: 20:43 More in Science & Technology

    Read the article

  • Doctrine 2 cannot find entities

    - by Flyn San
    I'm using Kohana 3 and have a /doctrine/Entites folder with my entities inside. When executing the code $product = Doctrine::em()->find('Entities\Product', 1); in my controller, I get the error class_parents(): Class Entities\Product does not exist and could not be loaded Below is the Controller (classes/controller/welcome.php): <?php class Controller_Welcome extends Controller { public function action_index() { $prod = Doctrine::em()->find('Entities\Product', 1); } } Below is the Entity (/doctrine/Entities/Product.php): <?php /** * @Entity * @Table{name="products"} */ class Product { /** @Id @Column{type="integer"} */ private $id; /** @Column(type="string", length="255") */ private $name; public function getId() { return $this->id; } public function setId($id) { $this->id = intval($id); } public function getName() { return $this->name; } public function setName($name) { $this->name = $name; } } Below is the Doctrine module bootstrap file (/modules/doctrine/init.php): class Doctrine { private static $_instance = null; private $_application_mode = 'development'; private $_em = null; public static function em() { if ( self::$_instance === null ) self::$_instance = new Doctrine(); return self::$_instance->_em; } public function __construct() { require __DIR__.'/classes/doctrine/Doctrine/Common/ClassLoader.php'; $classLoader = new \Doctrine\Common\ClassLoader('Doctrine', __DIR__.'/classes/doctrine'); $classLoader->register(); $classLoader = new \Doctrine\Common\ClassLoader('Symfony', __DIR__.'/classes/doctrine/Doctrine'); $classLoader->register(); $classLoader = new \Doctrine\Common\ClassLoader('Entities', APPPATH.'doctrine'); $classLoader->register(); //Set up caching method $cache = $this->_application_mode == 'development' ? new \Doctrine\Common\Cache\ArrayCache : new \Doctrine\Common\Cache\ApcCache; $config = new Configuration; $config->setMetadataCacheImpl( $cache ); $driver = $config->newDefaultAnnotationDriver( APPPATH.'doctrine/Entities' ); $config->setMetadataDriverImpl( $driver ); $config->setQueryCacheImpl( $cache ); $config->setProxyDir( APPPATH.'doctrine/Proxies' ); $config->setProxyNamespace('Proxies'); $config->setAutoGenerateProxyClasses( $this->_application_mode == 'development' ); $dbconf = Kohana::config('database'); $dbconf = reset($dbconf); //Use the first database specified in the config $this->_em = EntityManager::create(array( 'dbname' => $dbconf['connection']['database'], 'user' => $dbconf['connection']['username'], 'password' => $dbconf['connection']['password'], 'host' => $dbconf['connection']['hostname'], 'driver' => 'pdo_mysql', ), $config); } } Any ideas what I've done wrong?

    Read the article

  • Error while configuring SQL Server 2005 for full text search

    - by San
    I am trying to index a table for full text search on a SQL server 2005. When I select the change tracking as Automatic and click on the next button, I get the following error TITLE: Microsoft SQL Server This wizard will close because it encountered the following error: For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server+Management+Studio&ProdVer=9.00.4035.00&EvtSrc=Microsoft.SqlServer.Management.UI.WizardFrameworkErrorSR&EvtID=UncaughtException&LinkId=20476 ------------------------------ ADDITIONAL INFORMATION: Failed to retrieve data for this request. (Microsoft.SqlServer.SmoEnum) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476 An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) The EXECUTE permission was denied on the object 'sp_help_category', database 'msdb', schema 'dbo'. The SELECT permission was denied on the object 'sysjobs_view', database 'msdb', schema 'dbo'. (Microsoft SQL Server, Error: 229) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4035&EvtSrc=MSSQLServer&EvtID=229&LinkId=20476 ------------------------------ BUTTONS: OK
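
    The two permission errors in the message point at missing rights in msdb for the account running the wizard. A hedged sketch of the kind of grants a DBA might apply (the login name is a placeholder, and whether granting these directly fits your security policy is an assumption to verify):

        USE msdb;
        GO
        -- Grant the permissions named in the error message to the affected user
        GRANT EXECUTE ON dbo.sp_help_category TO [YourDomain\YourUser];
        GRANT SELECT ON dbo.sysjobs_view TO [YourDomain\YourUser];
        GO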

    Read the article

  • Configuring SQL Server 2005 for full text search

    - by San
    I am trying to index a table for full text search on a SQL server 2005. When I select the change tracking as Automatic and click on the next button, I get the following error TITLE: Microsoft SQL Server This wizard will close because it encountered the following error: For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server+Management+Studio&ProdVer=9.00.4035.00&EvtSrc=Microsoft.SqlServer.Management.UI.WizardFrameworkErrorSR&EvtID=UncaughtException&LinkId=20476 ------------------------------ ADDITIONAL INFORMATION: Failed to retrieve data for this request. (Microsoft.SqlServer.SmoEnum) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476 An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) The EXECUTE permission was denied on the object 'sp_help_category', database 'msdb', schema 'dbo'. The SELECT permission was denied on the object 'sysjobs_view', database 'msdb', schema 'dbo'. (Microsoft SQL Server, Error: 229) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4035&EvtSrc=MSSQLServer&EvtID=229&LinkId=20476 ------------------------------ BUTTONS: OK

    Read the article

  • How to find first non-repetitive character from a string?

    - by masato-san
    I've spent half a day trying to figure this out and finally got a working solution. However, I feel like this can be done in a simpler way; I think this code is not really readable. Problem: find the first non-repetitive character in a string. $string = "abbcabz" In this case, the function should output "c". The reason I use concatenation instead of $input[index_to_remove] = '' to remove a character from the given string is that the latter just leaves an empty cell, so my return value $input[0] does not return the character I want. For instance, $str = "abc"; $str[0] = ''; echo $str; This will output "bc". But actually if I test var_dump($str); it will give me: string(3) "bc" Here is my intention: Given: input while first char exists in substring of input { get index_to_remove input = chars left of index_to_remove . chars right of index_to_remove if dupe of first char is not found from substring remove first char from input } return first char of input Code: function find_first_non_repetitive2($input) { while(strpos(substr($input, 1), $input[0]) !== false) { $index_to_remove = strpos(substr($input,1), $input[0]) + 1; $input = substr($input, 0, $index_to_remove) . substr($input, $index_to_remove + 1); if(strpos(substr($input, 1), $input[0]) == false) { $input = substr($input, 1); } } return $input[0]; }
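
    For comparison, a shorter sketch of the same idea: count every character once, then return the first one whose count is 1 (assumes a single-byte-per-character string like the example):

        <?php
        function find_first_non_repetitive($input) {
            // Count occurrences of each character
            $counts = array_count_values(str_split($input));
            // Walk the string in order and return the first unique character
            foreach (str_split($input) as $char) {
                if ($counts[$char] === 1) {
                    return $char;
                }
            }
            return null; // every character repeats
        }

        echo find_first_non_repetitive("abbcabz"); // prints "c"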

    Read the article

  • Updating CoreData xcdatamodel file troubles - attribute type change

    - by San
    I noticed several questions related to this topic go unanswered. Is this such a gray area that nobody really understands it? Here is my problem: I am midway through the development of my app, and the app has never been used outside of my iPhone Simulator. One of the attributes in my Core Data structure requires a type change. Since my app has never been used outside of the iPhone Simulator, I first deleted the sqlite file. Doubling the effort of this step, I also went into the iPhone Simulator menu and selected "Reset Content and Settings...". Then, I edited the xcdatamodel file and changed the type of my attribute. I saved the file and exited. Without any other changes, I compiled. I expected it to fail because of my type change. It did not! After this, I assigned a value of the new type to my attribute and it fails to compile?! Is there something else that I need to do for the change to take effect? I would really, really appreciate an answer to my question. Thank you!

    Read the article

  • How to use mock and verify methods of OCMock in objective-C ? Is there any good tutorial on OCMock i

    - by san
    My problem is that I am getting an error: OCMockObject[NSNumberFormatter]: expected method was not invoked: setAllowsFloats:YES. I have written the following code: - (void) testReturnStringFromNumber { id mockFormatter = [OCMockObject mockForClass:[NSNumberFormatter class]]; StringNumber *testObject = [[StringNumber alloc] init]; [[mockFormatter expect] setAllowsFloats:YES]; [testObject returnStringFromNumber:80.23456]; [mockFormatter verify]; } @implementation StringNumber - (NSString *) returnStringFromNumber:(float)num { NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init]; [formatter setAllowsFloats:YES]; NSString *str= [formatter stringFromNumber:[NSNumber numberWithFloat:num]]; [formatter release]; return str; } @end

    Read the article

  • Many-to-Many Relationship mapping does not trigger the EventListener OnPostInsert or OnPostDelete Ev

    - by san
    I'm doing my auditing using the event listeners that NHibernate provides. It works fine for all mappings apart from the HasManyToMany mapping. My mappings are as follows: Table("Order"); Id(x => x.Id, "OrderId"); Map(x => x.Name, "OrderName").Length(150).Not.Nullable(); Map(x => x.Description, "OrderDescription").Length(800).Not.Nullable(); Map(x => x.CreatedOn).Not.Nullable(); Map(x => x.CreatedBy).Length(70).Not.Nullable(); Map(x => x.UpdatedOn).Not.Nullable(); Map(x => x.UpdatedBy).Length(70).Not.Nullable(); HasManyToMany(x => x.Products) .Table("OrderProduct") .ParentKeyColumn("OrderId") .ChildKeyColumn("ProductId") .Cascade.None() .Inverse() .AsSet(); Table("Product"); Id(x => x.Id, "ProductId"); Map(x => x.ProductName).Length(150).Not.Nullable(); Map(x => x.ProductnDescription).Length(800).Not.Nullable(); Map(x => x.Amount).Not.Nullable(); Map(x => x.CreatedOn).Not.Nullable(); Map(x => x.CreatedBy).Length(70).Not.Nullable(); Map(x => x.UpdatedOn).Not.Nullable(); Map(x => x.UpdatedBy).Length(70).Not.Nullable(); HasManyToMany(x => x.Orders) .Table("OrderProduct") .ParentKeyColumn("ProductId") .ChildKeyColumn("OrderId") .Cascade.None() .AsSet(); Whenever I do an update of an order (e.g. change the order description and delete one of the products associated with it), it works fine in that it updates the Order table and deletes the row in the OrderProduct table. The event listener that I have associated with it captures the update of the Order table but does NOT capture the associated delete event when the OrderProduct row is deleted. This behaviour is observed only in the case of many-to-many mapped relationships. Since I would also like to audit the OrderProduct deletion, it's kind of an annoyance that the event listeners aren't able to capture the delete event. Any information about it would be greatly appreciated.

    Read the article

  • Is it possible for a parent window to notice if a child window has been closed?

    - by masato-san
    I have a parent window (opener) that opens a child window (popup). Let's say the parent page has a JS function hello(). In order for the child to call the parent's hello() when the child window is closed, and also pass an argument, I can do: window.close(); window.opener.hello(someArgument); This will close the window and also call the parent's hello(). But what if I don't want to have the code window.opener.hello() in the child page? I mean, I want the code to be in the parent page only. One thing I can think of is: somehow the parent knows when the child is closed (an event listener? not sure how in JS). But in that case, how do I receive the argument (i.e. some data back from the child)?
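
    One hedged sketch of the parent-only approach: the parent polls the handle returned by window.open() and reacts once closed becomes true. The argument passed to hello() here is something the parent already has; getting data that only the child produced still needs some cooperation from the child (names below are illustrative):

        var child = window.open('child.html', 'childWindow');

        // Poll until the popup is gone, then run the parent-side callback
        var watcher = setInterval(function () {
            if (child.closed) {
                clearInterval(watcher);
                hello('child window was closed'); // argument chosen by the parent
            }
        }, 500);

        function hello(someArgument) {
            console.log(someArgument);
        }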

    Read the article

  • Qt: I've inherited from QTreeView and from QStandardItem. How do I style the items?

    - by San Jacinto
    My Google skills must be failing me today. I've inherited from QTreeView to create a TreeView that stores a QStandardItemModel instead of a QAbstractItemModel. I have also inherited from QStandardItem to create a class to store my data in an item as is necessary. I've successfully inserted my derived QStandardItem into my derived QTreeView's QStandardItemModel. Now the trouble is, I can't figure out how to style it. I know that QTreeView has a setStyleSheet(QString) member, but I can't seem to get it working. It may be as simple as I'm not styling the correct attribute. Any pointers would be appreciated. Thanks. For clarity, here are my class defs. class SurveyTreeItem : public QStandardItem { public: SurveyTreeItem(); SurveyTreeItem( const QString & text ); ~SurveyTreeItem(); }; class StandardItemModelTreeView : public QTreeView { public: StandardItemModelTreeView(QWidget* parent = 0); ~StandardItemModelTreeView(); QStandardItemModel* getStandardItemModel(); }; I've tried the following StyleSheets: StandardTreeView::Item { font: 87 12pt 'Arial Black'; } StandardTreeView::QStandardItem { font: 87 12pt 'Arial Black'; } QTreeView::QStandardItem { font: 87 12pt 'Arial Black'; } QTreeView::Item { font: 87 12pt 'Arial Black'; } QTreeView::SurveyTreeItem { font: 87 12pt 'Arial Black'; } StandardTreeView::SurveyTreeItem { font: 87 12pt 'Arial Black'; }
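
    A hedged sketch of the per-item route: QStandardItem carries its own font and brush roles, so for item text you can skip the style sheet entirely (this assumes getStandardItemModel() returns the view's QStandardItemModel, as the class definitions above suggest):

        #include <QFont>
        #include <QBrush>

        // somewhere in the widget's setup code:
        void setupSurveyTree(StandardItemModelTreeView* view) {
            SurveyTreeItem* item = new SurveyTreeItem("Survey A");

            QFont font("Arial Black", 12);
            font.setWeight(QFont::Black);        // roughly the "87" weight from the style sheet
            item->setFont(font);                 // sets Qt::FontRole for this item
            item->setForeground(QBrush(Qt::darkBlue));

            view->getStandardItemModel()->appendRow(item);
        }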

    Read the article

  • How do I add a header with data to a QTableWidget in Qt?

    - by San Jacinto
    Hi, I'm still learning Qt and I am indebted to the SO community for providing me with great, very timely answers to my Qt questions. Thank you. I'm quite confused about the idea of adding a header to a QTableWidget. What I'd like to do is have a table that contains information about team members. Each row for a member should contain his first and last name, each in its own cell, an email address in one cell, and his office in another cell. I'd like to have a header above these columns to name them appropriately. I'm trying to start off easy and get just the header to display "Last" (as in last name). Here is my code. int column = m_ui->teamTableWidget->columnCount(); m_ui->teamTableWidget->setColumnCount(column+1); QString* qq = new QString("Last"); m_ui->teamTableWidget->horizontalHeader()->model()->setHeaderData(0, Qt::Horizontal, QVariant(QVariant::String, &qq)); My table gets rendered correctly, but the header doesn't contain what I would expect. It contains 1 cell that contains the text "1". I am obviously doing something very silly here that is wrong, but I am lost. I keep poring over the documentation, finding nothing. Here are the documentation links to the function calls I am making for the very last line. http://doc.trolltech.com/4.5/qtableview.html#horizontalHeader http://doc.trolltech.com/4.5/qabstractitemview.html#model http://doc.trolltech.com/4.5/qabstractitemmodel.html#setHeaderData Thanks for any and all help. Edit: HOW I SOLVED THE PROBLEM Using some help from the accepted answer, I came up with the following code: m_ui->teamTableWidget->setColumnCount(m_ui->teamTableWidget->columnCount()+1); QTableWidgetItem* qtwi = new QTableWidgetItem(QString("Last"), QTableWidgetItem::Type); m_ui->teamTableWidget->setHorizontalHeaderItem(0, qtwi);
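
    For the record, QTableWidget can also label all columns in one call; a short sketch assuming the four columns described above:

        QStringList headers;
        headers << "First" << "Last" << "Email" << "Office";
        m_ui->teamTableWidget->setColumnCount(headers.size());
        m_ui->teamTableWidget->setHorizontalHeaderLabels(headers);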

    Read the article

  • JPA optimistic lock - setting @Version on the entity class causes the query to include VERSION as a column

    - by masato-san
    I'm using JPA (TopLink Essentials), NetBeans 6.8 and GlassFish v3. In my entity class I added the @Version annotation to enable optimistic locking at transaction commit. However, after I added the annotation my query started including VERSION as a column, thus throwing a SQL exception. None of this is mentioned in any tutorial I've seen so far. What could be wrong? Snippet: public class MasatosanTest2 implements Serializable { private static final long serialVersionUID = 1L; @Id @Basic(optional = false) @Column(name = "id") private Integer id; @Column(name = "username") private String username; @Column(name = "note") private String note; //here adding Version @Version int version; Query used: SELECT m FROM MasatosanTest2 m Internal Exception: com.microsoft.sqlserver.jdbc.SQLServerException Call: SELECT id, username, note, VERSION FROM MasatosanTest2
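
    A hedged sketch of what usually clears this up: the @Version field must map to a real column in the table, so either add the column or name an existing one explicitly (the column name below is an assumption based on the snippet, not confirmed by the post):

        // In the entity: map the version field to a concrete column
        @Version
        @Column(name = "version")   // this column must exist in MasatosanTest2
        private int version;

    On the database side the table then needs that column as well, e.g. ALTER TABLE MasatosanTest2 ADD version INT NOT NULL DEFAULT 0 (assuming SQL Server, as the stack trace suggests).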

    Read the article

  • How to canonicalize a StAX XML object

    - by Enrique San Martín
    Hello, I want to canonicalize a StAX object. The program is currently doing it with DOM, but DOM can't manage big XML documents (like 1 GB), so StAX is the solution. The code that I have is: File file=new File("big-1gb.xml"); org.apache.xml.security.Init.init(); DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance(); DocumentBuilder documentBuilder = dfactory.newDocumentBuilder(); Document doc = documentBuilder.parse(file); Canonicalizer c14n = Canonicalizer.getInstance("http://www.w3.org/TR/2001/REC-xml-c14n-20010315"); outputBytes = c14n.canonicalizeSubtree(doc.getElementsByTagName("SomeTag").item(0)); The idea is to do the same with StAX... Thx :)

    Read the article

  • JPA native query join returns object but dereference throws class cast exception

    - by masato-san
    I'm using a JPQL native query to join tables, and the query result is stored in a List<Object[]>. public String getJoinJpqlNativeQuery() { final String SQL_JOIN = "SELECT v1.bitbit, v1.numnum, v1.someTime, t1.username, t1.anotherNum FROM MasatosanTest t1 JOIN MasatoView v1 ON v1.username = t1.username;"; System.out.println("get join jpql native query is being called ============================"); EntityManager em = null; List<Object[]> out = null; try { em = EmProvider.getDefaultManager(); Query query = em.createNativeQuery(SQL_JOIN); out = query.getResultList(); System.out.println("return object ==========>" + out); System.out.println(out.get(0)); String one = out.get(0).toString(); //LINE 77 where ClassCastException System.out.println(one); } catch(Exception e) { } finally { if(em != null) { em.close(); } } } The problem is that System.out.println("return object ==========>" + out); outputs: return object ==========> [[true, 0, 2010-12-21 15:32:53.0, masatosan, 0.020], [false, 0, 2010-12-21 15:32:53.0, koga, 0.213]] and System.out.println(out.get(0)) outputs: [true, 0, 2010-12-21 15:32:53.0, masatosan, 0.020] So I assumed that I could assign the return value of out.get(0), which should be a String: String one = out.get(0).toString(); But I get a weird ClassCastException: java.lang.ClassCastException: java.util.Vector cannot be cast to [Ljava.lang.Object; at local.test.jaxrs.MasatosanTestResource.getJoinJpqlNativeQuery (MasatosanTestResource.java:77) So what's really going on? Even Object[] foo = out.get(0); would throw a ClassCastException :(
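
    A hedged sketch of a more defensive way to read the rows: some TopLink Essentials builds hand back each row as a java.util.Vector rather than an Object[], which is consistent with the exception above, so treating a row as a List avoids the hard-coded array cast (column positions follow the SELECT list in the query):

        List<?> rows = query.getResultList();
        for (Object row : rows) {
            List<?> columns = (List<?>) row;   // java.util.Vector implements List
            Object bitbit   = columns.get(0);  // v1.bitbit
            Object username = columns.get(3);  // t1.username
            System.out.println(username + " -> " + bitbit);
        }

    If a different provider hands back Object[] rows instead, the same loop can branch on (row instanceof Object[]).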

    Read the article

  • ORA-00939 error in Reporting Services (SSRS)

    - by san
    Hi, I have an SSRS report, Oracle is my backend, and I am using the following query for the dataset of my second parameter: select distinct X from v_stf_sec_user_staffing_center usc where usc.center_group_id in ( select distinct center_group_id from V_T_STAFFING_CENTER_GROUP scg where INSTR(','||REPLACE(:PI_REGION_LIST,' ')||',', ','||scg.group_abbreviation||',') > 0) and usc.nt_user_name=:PI_NT_USER_NAME Here PI_REGION_LIST is a multi-valued parameter of string type, and PI_NT_USER_NAME is a default string-valued parameter. This query works fine when I execute it manually in the Data tab, and also in an Oracle tool. But when I run the report in SSRS and select more than 3 values for the parameter PI_REGION_LIST, the report throws an error on this dataset: an ORA-00939 error, too many arguments for function. I am not able to figure out the error here. Please help me with an idea. Thanks in advance, Suni.

    Read the article

  • Java: how to convert a file to UTF-8

    - by Enrique San Martín
    Hi, I have a file that contains some non-UTF-8 characters (it is encoded in something like ISO-8859-1), and I want to convert that file (or read it) using UTF-8 encoding. How can I do it? The code is like this: File file = new File("some_file_with_non_utf8_characters.txt"); /* some code to convert the file to an utf8 file */ ... Edit: put an encoding example
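
    A minimal sketch of the conversion, assuming the source file really is ISO-8859-1 (Java cannot reliably detect the source charset, so that part is a guess you have to supply):

        import java.io.*;

        public class Latin1ToUtf8 {
            public static void convert(File source, File target) throws IOException {
                // Decode with the source charset, re-encode as UTF-8
                Reader in = new BufferedReader(
                        new InputStreamReader(new FileInputStream(source), "ISO-8859-1"));
                Writer out = new BufferedWriter(
                        new OutputStreamWriter(new FileOutputStream(target), "UTF-8"));
                try {
                    char[] buffer = new char[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                } finally {
                    in.close();
                    out.close();
                }
            }
        }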

    Read the article

  • How to set response header in JAX-RS so that user sees download popup for Excel?

    - by masato-san
    I wrote code that generates an Excel file using REST (JAX-RS), and I confirmed that the generated Excel file is in the GlassFish server directory. But my goal is that when the user clicks the button (which generates the Excel .xls), a download popup shows up asking the user whether to save or open the .xls file, just like any other web service does when downloading any type of file. According to my search, the steps are: generate the Excel .xls (DONE); write the Excel file to the stream in the JAX-RS resource; set the response header to something like: String fileName = "Blah_Report.xls"; response.setHeader("Content-Disposition", "attachment; filename=" + fileName); My question is that I'm doing all of this in a JAX-RS resource and I don't have an HttpServletResponse object available. According to the answer from "Add Response Header to JAX-RS Webservice", he says: You can inject a reference to the actual HttpServletResponse via the @Context annotation in your webservice and use addHeader() etc. to add your header. I can't really figure out what exactly that means without sample code.
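
    A hedged sketch of the pure JAX-RS route, which avoids HttpServletResponse entirely by returning a Response with the header set on it (the resource path and file location are illustrative, not taken from the post):

        import java.io.File;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.Response;

        @Path("report")
        public class ReportResource {

            @GET
            @Path("excel")
            @Produces("application/vnd.ms-excel")
            public Response getExcelReport() {
                File file = new File("/path/to/Blah_Report.xls");  // the file generated earlier
                return Response.ok(file)
                        .header("Content-Disposition", "attachment; filename=Blah_Report.xls")
                        .build();
            }
        }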

    Read the article
