Search Results

Search found 7473 results on 299 pages for 'usage statistics'.

Page 104/299

  • Canonicalization (On Page Optimization of Duplicate Contents)

    In this article we discuss the process of canonicalization, officially announced by the major search engines last year, which helps optimize a website by preventing duplicate content from being indexed. We then walk through a real-life scenario in which canonicalization can be helpful, followed by its syntax and usage.

    Read the article

  • Trash Destination Adapter

    The Trash Destination and this article came from early experiences of using SSIS and community feedback at the time. When developing a package, it is very useful to have a destination adapter that does nothing but consume rows, with no setup required. You often want to run a package part way through development, or just add a path so you can attach a Data Viewer. There are stock tasks that can be used, but with the Trash Destination all columns are treated as selected automatically (usage type of read-only), so the pipeline knows they are required.

    Read the article

  • Facebook use may have a negative impact on our mood, according to studies

    Facebook use may have a negative impact on our mood, according to studies. Several studies conducted within the Facebook universe show that the number one social network, with more than a billion connected users, does not necessarily make them happier. The scientific journal PLOS ONE published the results of a survey conducted by a team of researchers from the University of Michigan (USA) in collaboration with the University of Leuven (Belgium), which demonstrates that the use...

    Read the article

  • SQL Server 2008 System Functions to Monitor the Instance, Database, Files, etc.

    SQL Server provides several system metadata functions which allow users to obtain property values of different SQL Server objects and securables. Although you can also use the SQL Server catalog views or Dynamic Management Views to obtain much of this information, in some circumstances the system metadata functions simplify the process. In this tip I am going to demonstrate some of the available system metadata functions and their usage in different scenarios.
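
    As a minimal sketch of the kind of calls the tip covers (SERVERPROPERTY, DATABASEPROPERTYEX and OBJECTPROPERTY are standard SQL Server metadata functions; the table name dbo.MyTable is a hypothetical example, not from the tip itself):

    -- Instance-level properties
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('Edition') AS Edition;
    -- Database-level property: current recovery model
    SELECT DATABASEPROPERTYEX('AdventureWorks', 'Recovery') AS RecoveryModel;
    -- Object-level property; dbo.MyTable is a hypothetical table name
    SELECT OBJECTPROPERTY(OBJECT_ID('dbo.MyTable'), 'TableHasPrimaryKey') AS HasPrimaryKey;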

    Read the article

  • Website Development at a glance!

    Technology has made website development an easy process. The web is meant for both developers and users; the improving usability of the web and the rising number of websites are signs of this.

    Read the article

  • Handling SQL Server Errors

    This article covers the basics of TRY...CATCH error handling in T-SQL, introduced in SQL Server 2005. It includes the usage of the common functions that return information about the error, and the use of the TRY...CATCH block in stored procedures and transactions.
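
    A minimal sketch of the pattern (the failing statement is just an illustrative divide-by-zero; the ERROR_* functions are the standard ones the article refers to):

    BEGIN TRY
        SELECT 1/0;  -- deliberately fails, transferring control to the CATCH block
    END TRY
    BEGIN CATCH
        -- Standard functions returning details about the error that occurred
        SELECT ERROR_NUMBER() AS ErrorNumber,
               ERROR_SEVERITY() AS ErrorSeverity,
               ERROR_LINE() AS ErrorLine,
               ERROR_MESSAGE() AS ErrorMessage;
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- undo any open transaction work
    END CATCH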

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak's new version (2.1) revealed a few additional interesting features. For those of you who haven't read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance and scalability of SQL Server applications without requiring code changes to the applications or to the databases. SafePeak performs dynamic database caching, holding in memory the result sets of queries and stored procedures while keeping the cache correct and up to date. Cached queries are retrieved from SafePeak's RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO), and the application or the infrastructure gets better scalability. The SafePeak solution is hosted within your cloud servers, hosted servers or enterprise servers, as part of the application architecture. Applications are connected either by changing their connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more about SafePeak's architecture and how it works, I suggest reading the vendor's webpage: SafePeak Architecture.

    More interesting new features in SafePeak 2.1

    In my previous review I covered the first four things I noticed in the new SafePeak (check out my article "SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration"):

    1. Cache setup and fine-tuning – a critical part of getting good caching results
    2. Database templates
    3. Choosing which database to cache
    4. Monitoring and analysis options by SafePeak

    Since then I have had a chance to play with SafePeak some more, and here is what I found.

    5. Analysis of SQL performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak is assigned a SQL pattern, and once the pattern recurs, statistics are kept for it. An important part of the product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions and procedures). From this understanding SafePeak produces important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percentage of traffic, and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides this information at the level of queries and procedures, tables, views, databases and even instances. The data is now shown for all arriving queries: read queries (which can be cached), but also all types of updates such as DML, DDL, DCL, and even session-settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.

    6. Logon trigger, for making sure nothing corrupts SafePeak's cached data: If you have an application with many parts, many servers, and many locations that can update the database, or if the SQL Server is accessible to many DBAs or software engineers, any of them can access the database directly and make changes without going through SafePeak – this can potentially corrupt the data stored in the SafePeak cache. To keep its cache correct, SafePeak needs all updates to arrive through it; if a DBA accesses the database directly and makes changes, SafePeak simply will not know about them and will not clean its cache. In the new version, SafePeak introduces a feature called "Logon Trigger" to solve this challenge. With the click of a button, SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection not coming from SafePeak. The SafePeak dashboard provides an interface to control which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and lock the SafePeak cache until the connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. This feature can be configured in the SafePeak dashboard under: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

    Other features:

    7. User management: the ability to grant someone permissions without letting them change the configuration, so they can use SafePeak purely as a performance analysis tool.
    8. Better reports for performance analysis, using 15-minute-resolution charts.
    9. Caching of client cursors.
    10. Support for IPv6.

    Summary

    SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges, especially if your apps are packaged or 3rd-party, since no code changes are made. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free, fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • ODI SDK: Retrieving Information From the Logs

    - by Christophe Dupupet
    It is fairly common to want to retrieve data from the ODI logs: statistics, execution status, even the generated code can be retrieved from the logs. The ODI SDK provides a robust set of APIs to parse the repository and retrieve such information. To locate the information you are looking for, you have to keep in mind the structure of the logs: sessions contain steps; steps contain tasks. The session is the execution unit: basically, each time you execute something (interface, package, procedure, scenario) you create a new session. The steps are the individual entries found in a session: these will be the icons in your package, for instance; or if you are running an interface, you will have one single step, the interface itself. The tasks represent the more atomic elements of the steps: the individual DDL, DML, scripts and so forth that are generated by ODI, along with all the detailed statistics for each task. All these details can be retrieved with the SDK. Because I had a question recently on the API ODIStepReport, I focus explicitly in this code on scenario logs, but a lot more can be done with these APIs. Here is the code sample (you can just cut and paste this code into your ODI 11.1.1.6 Groovy console). Just save, adapt the code to your environment (in particular to connect to your repository) and hit "run":

    //Created by ODI Studio
    import oracle.odi.core.OdiInstance
    import oracle.odi.core.config.OdiInstanceConfig
    import oracle.odi.core.config.MasterRepositoryDbInfo
    import oracle.odi.core.config.WorkRepositoryDbInfo
    import oracle.odi.core.security.Authentication
    import oracle.odi.core.config.PoolingAttributes
    import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder
    import oracle.odi.domain.runtime.scenario.OdiScenario
    import java.util.Collection
    import java.io.*

    /* -----------------------------------------------------------------------------------------
       Simple sample code to list all executions of the last version of a scenario,
       along with detailed steps information
       ----------------------------------------------------------------------------------------- */

    /* update the following parameters to match your environment => */
    def url = "jdbc:oracle:thin:@myserver:1521:orcl"
    def driver = "oracle.jdbc.OracleDriver"
    def schema = "ODIM1116"
    def schemapwd = "ODIM1116PWD"
    def workrep = "WORKREP1116"
    def odiuser = "SUPERVISOR"
    def odiuserpwd = "SUNOPSIS"

    // Rather than hardcoding the project code and folder name,
    // a great improvement here would be to parse the entire repository
    def scenario_name = "LOAD_DWH" /* Scenario Name */
    /* <= End of the update section */

    //--------------------------------------
    // Connection to the repository
    // Note for ODI 11.1.1.6: you could use the predefined odiInstance variable if you are
    // running the script from a Studio that is already connected to the repository
    def masterInfo = new MasterRepositoryDbInfo(url, driver, schema, schemapwd.toCharArray(), new PoolingAttributes())
    def workInfo = new WorkRepositoryDbInfo(workrep, new PoolingAttributes())
    def odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo))

    //--------------------------------------
    // In all cases, we need to make sure we have authorized access to the repository
    def auth = odiInstance.getSecurityManager().createAuthentication(odiuser, odiuserpwd.toCharArray())
    odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth)

    //--------------------------------------
    // Retrieve the scenario we are looking for
    def odiScenario = ((IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class)).findLatestByName(scenario_name)

    if (odiScenario == null) {
        println("Error: cannot find scenario " + scenario_name);
        return
    }

    //--------------------------------------
    // Retrieve all reports for the scenario
    def OdiScenarioReportsList = odiScenario.getScenarioReports()

    println("*** Listing all reports for Scenario \"" + scenario_name + "\" ")

    //--------------------------------------
    // For each report, print the following:
    // - start time
    // - duration
    // - status
    // - step reports: selection of details
    for (s in OdiScenarioReportsList) {
        println("\tStart time: " + s.getSessionStartTime())
        println("\tDuration: " + s.getSessionDuration())
        println("\tStatus: " + s.getSessionStatus())

        def OdiScenarioStepReportsList = s.getStepReports()
        for (st in OdiScenarioStepReportsList) {
            println("\t\tStep Name: " + st.getStepName())
            println("\t\tStep Resource Name: " + st.getStepResourceName())
            println("\t\tStep Start time: " + st.getStepStartTime())
            println("\t\tStep Duration: " + st.getStepDuration())
            println("\t\tStep Status: " + st.getStepStatus())
            println("\t\tStep # of inserts: " + st.getStepInsertCount())
            println("\t\tStep # of updates: " + st.getStepUpdateCount() + '\n')
        }
        println("\t")
    }

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. A row store does exactly what the name suggests – it stores rows of data on a page – and a column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row, whether the data is relevant or not, column store queries need only search a much smaller number of columns. This means major increases in search speed and savings in hard drive use. Additionally, columnstore indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store; one has to understand the proper places to use row store or column store indexes. Let us understand in this article what is different about the columnstore type of index.

    Column store indexes are powered by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a columnstore index is very easy, and you don't have to learn new syntax to create one. You just need to specify the keyword "COLUMNSTORE" and enter the data as you normally would. Keep in mind that once you add a columnstore index to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since columnstore will mainly be used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

    A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data has traditionally been stored. The difference between the column store and row store approaches is illustrated below: in the case of row store indexes, multiple pages contain multiple rows, with the columns spanning multiple pages; in the case of column store indexes, multiple pages contain multiple single columns. As a result, only the columns needed to solve a query will be fetched from disk. Additionally, there is a good chance that there will be redundant data within a single column, which further helps compress the data; this has a positive effect on the buffer hit rate, as most of the data will already be in memory and will not need to be retrieved.

    Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a sample table which is large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on their resources.

    USE AdventureWorks
    GO
    -- Create New Table
    CREATE TABLE [dbo].[MySalesOrderDetail](
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [CarrierTrackingNumber] [nvarchar](25) NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal] [numeric](38, 6) NOT NULL,
    [rowguid] [uniqueidentifier] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL
    ) ON [PRIMARY]
    GO
    -- Create clustered index
    CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
    ([SalesOrderDetailID])
    GO
    -- Create Sample Data
    -- WARNING: This query may run up to 2-10 minutes based on your system's resources
    INSERT INTO [dbo].[MySalesOrderDetail]
    SELECT S1.*
    FROM Sales.SalesOrderDetail S1
    GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test, first I will run a query which uses the regular index, and we will note its IO usage. After that we will create the columnstore index and measure the IO of the same query.

    -- Performance Test
    -- Comparing Regular Index with ColumnStore Index
    USE AdventureWorks
    GO
    SET STATISTICS IO ON
    GO
    -- Select Table with regular Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
    SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO
    -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
    -- Create ColumnStore Index
    CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
    ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
    GO
    -- Select Table with Columnstore Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
    SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO

    It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored on the same pages, and the query does not have to go through every single page to read them. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database.

    -- Cleanup
    DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
    GO
    TRUNCATE TABLE dbo.MySalesOrderDetail
    GO
    DROP TABLE dbo.MySalesOrderDetail
    GO

    In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for the columnstore index.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Mount - Unable to find suitable address

    - by Benny
    I am trying to mount my Windows share on my Ubuntu box (no X window system), but I keep getting "Unable to find suitable address". I have tried using the raw IP address, I have checked the credentials, and I have disabled the Windows firewall, but I cannot find anything wrong.

    benny@backup:~$ sudo mount -t cifs //my-desk/j -o username=me,password=s)mePasss /mnt/sync
    Unable to find suitable address.

    benny@backup:~$ ping my-desk
    PING my-desk (10.10.10.43) 56(84) bytes of data.
    --- my-desk ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 1008ms

    benny@backup:~$ sudo mount -t cifs //10.10.10.43/j -o username=me,password=s)mePasss /mnt/sync
    Unable to find suitable address.

    Any help would be greatly appreciated!

    Read the article

  • Computer Science or Computer Engineering for Data Science and Machine Learning

    - by ATMathew
    I'm a 25-year-old data consultant who is considering returning to school to get a second bachelor's degree in computer science or engineering. My interest is data science and machine learning. I use programming as a means to an end, and use languages like Python, R, C, Java, and Hadoop to find meaning in large data sets. Would a computer science or computer engineering degree be better for this? I realize that a statistics degree may be even more beneficial, but I'll be at a school which doesn't have a stats department or a computational math department.

    Read the article

  • Registrar with good security, DNS hosting, and DNSSEC and IPv6 resolvers?

    - by semenko
    I'm looking to move my domains away from GoDaddy, but I'm having a tough time finding anyone with comparable features at an (even remotely) similar price. I've looked at the usual suggestions (NameCheap, Gandi.net, etc.), but they all seem to lack much of the GoDaddy feature base. I'm looking for:

    DNSSEC
    IPv6 resolvers (dig pdns01.domaincontrol.com AAAA; etc.)
    SSL logins by default
    HTTP-only login cookies
    No stupid password restrictions
    Two-factor authentication
    No DNS record limits
    Rough DNS statistics (queries/day, etc.)
    Audit trails

    GoDaddy has all of these, except two-factor, for $3/month. See http://www.godaddy.com/domains/dns-hosting.aspx I can't seem to find any other registrar that supports even a few of these. Is there a registrar that offers comparable features? Or, barring that, a DNS hosting service that offers similar features? (AWS Route53 doesn't offer DNSSEC or IPv6)

    Read the article

  • SQL SERVER – Pending IO request in SQL Server – DMV

    - by pinaldave
    I received the following question: "How do we know how many pending IO requests there are for each database file (.mdf, .ldf) individually?" A very interesting question, and indeed the answer is very interesting as well. Here is the quick script which I use to find this out. It has to be run in the context of the database for which you want to know the pending IO statistics.

    USE DATABASE
    GO
    SELECT vfs.database_id, df.name, df.physical_name,
    vfs.FILE_ID, ior.io_pending
    FROM sys.dm_io_pending_io_requests ior
    INNER JOIN sys.dm_io_virtual_file_stats(DB_ID(), NULL) vfs
    ON (vfs.file_handle = ior.io_handle)
    INNER JOIN sys.database_files df
    ON (df.FILE_ID = vfs.FILE_ID)

    I keep this script handy as it works like magic every time. If you use any other script, please post it here and I will publish it with due credit.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Learning MVC for a JSP Resource and ASP.Net WebForms Resource

    - by Lijo
    A statement from a colleague: "People with ASP.Net WebForms skills should be able to learn it easily, as the fundamental concepts are the same." Consider two people – one from a JSP background and the other from an ASP.Net WebForms background. Now both need to learn ASP.Net MVC with Razor. Do you think the person from the ASP.Net WebForms background has a significant advantage over the person from the JSP background? My feeling is that it is equally difficult for the JSP person and the ASP.Net WebForms person to learn MVC with Razor. What is your take on it? Any statistics that you can provide for this?

    Read the article

  • Ubuntu 12.10 is reading mouse battery as a laptop battery

    - by Ross Fleming
    I have a Bluetooth mouse connected to my laptop. Much to my delight (I posted the suggestion!) the mouse's battery is now displayed alongside the laptop's battery. Unfortunately it is named simply "battery" in the drop-down, and "Laptop battery" once the power statistics window is opened. The details of this battery correctly say that the model is a Lenovo Bluetooth Mouse, but it is still displayed as a "battery" and "laptop battery". Is there any way to change this? Edit: Plus, "Show time in menu bar" shows the mouse's remaining battery percentage as opposed to the laptop battery's remaining time.

    Read the article

  • Over a million COBOL programmers in the world?

    - by Lucas McCoy
    I think I heard on a previous StackOverflow podcast that COBOL was used as the programming language for traffic lights (or something like that), so this got me interested. I did a quick Google search and found this little article: "Today, Cobol is everywhere, yet largely unheard of by millions of people who interact with it daily when using the ATM, stopping at traffic lights or buying a product online. The statistics on Cobol attest to its huge influence on the business world: There are over 220 billion lines of Cobol in existence, a figure which equates to about 80 per cent of the world's actively used code. There are over a million Cobol programmers in the world. There are 200 times as many Cobol transactions each day as Google searches." I didn't really trust the source, seeing as it's on some random phpBB forum. So how accurate are these figures? Are there really 220 billion lines of COBOL? I assume a few people/companies still use COBOL, but how many?

    Read the article

  • Scheduling Jobs in SQL Server Express

    As we all know, SQL Server 2005 Express is a very powerful free edition of SQL Server 2005. However, it does not include the SQL Server Agent service, so scheduling jobs is not possible. To do it anyway we would have to install a free or commercial 3rd party product, which usually isn't allowed under the security policies of many hosting companies and thus presents a problem. Maybe we want to schedule daily backups, database reindexing, statistics updating, etc. This is why I wanted to have a solution based only on SQL Server 2005 Express and not dependent on the hosting company. And of course there is one, based on our old friend the Service Broker; a sketch of the idea follows below.
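
    A minimal sketch of the idea, not the article's exact solution: all object and procedure names here are hypothetical, dbo.DailyMaintenance is a placeholder for the real job body, and the database is assumed to have Service Broker enabled. The mechanism is Service Broker's conversation timer, which posts a DialogTimer message into an activated queue:

    -- One-time setup: a queue, a service, and an activated procedure
    CREATE QUEUE SchedulerQueue
    GO
    CREATE SERVICE SchedulerService ON QUEUE SchedulerQueue ([DEFAULT])
    GO
    CREATE PROCEDURE dbo.RunScheduledJob
    AS
    BEGIN
        DECLARE @handle UNIQUEIDENTIFIER, @msgtype SYSNAME;
        -- Pick up the timer message that activated us
        RECEIVE TOP (1) @handle = conversation_handle,
                        @msgtype = message_type_name
        FROM SchedulerQueue;
        IF @msgtype = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
        BEGIN
            EXEC dbo.DailyMaintenance;  -- hypothetical job body (backup, reindex, ...)
            -- Re-arm the timer so the job fires again in 24 hours
            BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 86400;
        END
    END
    GO
    ALTER QUEUE SchedulerQueue WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.RunScheduledJob,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER)
    GO
    -- Kick off the first timer
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE SchedulerService
        TO SERVICE 'SchedulerService'
        WITH ENCRYPTION = OFF;
    BEGIN CONVERSATION TIMER (@h) TIMEOUT = 86400;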

    Read the article

  • What good Social Networking Site solutions are there?

    - by ZetsubouWebmaster
    What good free social networking site solutions are there? I have tried many options, but most of them are either too complicated, too simple, or just do not work... I tried Dolphin, DZOIC-Handshakes, elgg, Oxwall, SocialEngine, and some plugins for WordPress and other CMSes. I don't need much, just: groups, chats, forums, profiles, PM, photos, pages, comments, search, and statistics – most of which are included in pretty much every CMS out there... but not all... So, what good solutions are there? Also, I don't mind paying some money (I guess no more than $200), but I'd prefer a free open source engine. Of course it should be PHP+MySQL based.

    Read the article

  • Battery management doesn't recognize removal of power supply

    - by Jason
    I have a Lenovo Y460p running Ubuntu 12.04 (64-bit). The battery does charge normally, but unplugging the power supply shows the correct battery indicator only very briefly. After about 1 second, it reverts to the charging indicator. If the power supply is connected, the power statistics show: "Supply Yes" "Online Yes". If it is not connected, they show: "Supply Yes" "Online No". My problem is almost exactly like the one in this post: Ubuntu 11.10 power management does not recognize removal of power supply. The only exception is that my system does not dual-boot with Windows; this is Ubuntu only. The computer in the other post is a Lenovo as well; not sure if that has anything to do with it. Any help would be greatly appreciated. Thanks.

    Read the article

  • Where can I get a cheap database, no web hosting needed

    - by PhilipK
    I'm building an application which requires a fairly small online MySQL database. I don't need any web hosting. What are some cheap options for an online database?

    Edit (a bit more about what I'll be using it for): The database itself is very small; it contains market statistics for five weeks of time. Once a week the data will be updated, so that it always contains the most recent five weeks. Then I will use the data to create an XML file which is generated with PHP. The XML file will need to be accessed hundreds to thousands of times per month.

    Read the article

  • Java 7 Adoption at 79%

    - by Henrik Stahl
    According to a recent blog post from the cloud hosting company Jelastic, Java 7 adoption on their platform is now at 79%. While this is a single data point and should not be read too broadly, it does match other indicators we have that Java 7 is picking up, such as uptake among Oracle middleware customers, download statistics and online activity. The spike in adoption in April coincided with the release of JDK 7 Update 4. This is in line with our expectations since that release added Mac OS X support as well as java.com moving to Java 7 as the default download for end-users; two events that marked the maturity of Java 7 to the community. Since the original release of Java 7, Oracle has shipped 7 update releases, added ports to Mac OSX and Linux/ARM and expanded JavaFX to all common desktop platforms.

    Read the article

  • Can't load some domains

    - by Grzegorz
    I can't load some domains on my Ubuntu 13.04 machine:

    youtube.com
    translate.google.com
    drive.google.com
    vk.com

    When I ping them, they respond:

    PING youtube.com (217.119.79.59) 56(84) bytes of data.
    64 bytes from non-registered.plix.pl (217.119.79.59): icmp_req=1 ttl=59 time=21.9 ms
    64 bytes from non-registered.plix.pl (217.119.79.59): icmp_req=2 ttl=59 time=20.5 ms
    64 bytes from non-registered.plix.pl (217.119.79.59): icmp_req=3 ttl=59 time=21.3 ms
    --- youtube.com ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2002ms
    rtt min/avg/max/mdev = 20.544/21.255/21.914/0.560 ms

    But they don't load in any browser (Firefox, Chrome, Lynx). They do load in my media centre on XBMCbuntu.

    Read the article

  • Save match details to SQLite or XML?

    - by trizz
    I'm making a (conceptual) system to simulate any kind of sports match (soccer, basketball, etc.) with actions (for example pass, pass, out, pass, score), so it will read like a real report. The main statistics (play time, number of actions, etc.) I'm saving to a MySQL database, but the report itself can contain more than 1000 actions per match. To avoid millions of records in my database, I'm thinking of saving the detailed report in a SQLite database or an XML file. For every match played, a file will be created. When a user requests the match details, I read the file for the details. What is the best choice for this purpose: SQLite or XML?
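
    For the SQLite option, a minimal per-match schema might look like the sketch below (the table and column names are illustrative assumptions, not part of the question):

    -- One file per match; each row is one action in the detailed report
    CREATE TABLE match_action (
        action_id   INTEGER PRIMARY KEY,  -- SQLite rowid alias: cheap auto-increment
        seq         INTEGER NOT NULL,     -- order of the action within the match
        clock_sec   INTEGER NOT NULL,     -- match clock when the action happened
        player_id   INTEGER,              -- NULL for actions with no player (e.g. 'out')
        action_type TEXT    NOT NULL      -- 'pass', 'out', 'score', ...
    );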

    Read the article
