Search Results

Search found 7116 results on 285 pages for 'nested queries'.


  • MySQL not starting - InnoDB not found

    - by Rob Guderian
    I have a fresh install of Ubuntu 12.04 server edition, and the MySQL server is not starting properly. I did a simple apt-get install:

        apt-get install mysql-server

    But it fails with this error message:

        root@test:~# mysqld
        120618 20:57:32 [Warning] The syntax '--log-slow-queries' is deprecated and will be removed in a future release. Please use '--slow-query-log'/'--slow-query-log-file' instead.
        120618 20:57:32 [Note] Plugin 'FEDERATED' is disabled.
        120618 20:57:32 InnoDB: The InnoDB memory heap is disabled
        120618 20:57:32 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        120618 20:57:32 InnoDB: Compressed tables use zlib 1.2.3.4
        120618 20:57:32 InnoDB: Unrecognized value fdatasync for innodb_flush_method
        120618 20:57:32 [ERROR] Plugin 'InnoDB' init function returned error.
        120618 20:57:32 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120618 20:57:32 [ERROR] Unknown/unsupported storage engine: InnoDB
        120618 20:57:32 [ERROR] Aborting

    I can start the server with the "--skip-innodb --default-storage-engine=myisam" flags, but I would like to use InnoDB. Does anyone know what the issue is?
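
    The telling line is "InnoDB: Unrecognized value fdatasync for innodb_flush_method": on the MySQL 5.5 packages that Ubuntu 12.04 ships, fdatasync is the default flush behaviour but is generally not accepted as an explicit setting, so a leftover line in the server configuration is the likely cause. A hedged sketch of the fix, assuming the setting lives in /etc/mysql/my.cnf or a file under /etc/mysql/conf.d/:

        [mysqld]
        # Remove or comment out the explicit value; fdatasync is already the default behaviour:
        # innodb_flush_method = fdatasync
        # If an explicit value is wanted, O_DIRECT is the commonly used alternative:
        # innodb_flush_method = O_DIRECT

    After editing the configuration, restarting the service (service mysql restart) should let the InnoDB plugin initialize again.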

    Read the article

  • How to programmatically construct a textual query

    - by stibi
    There is a query language, JQL, that you can use in Jira to search for issues; it is something like SQL, but much simpler. I need to construct such queries programmatically in my application. Something like:

        JQLMachine jqlMachine = new JQLMachine();
        jqlMachine.setStatuses("Open", "In Progress");
        jqlMachine.setReporter("foouser", "baruser");
        jqlMachine.setDateRange(...);
        jqlMachine.getQuery(); // --> String with the corresponding JQL query is returned

    I hope you get my point. I can imagine the code for this, but with my current knowledge it would not turn out nice. That's why I'm asking: what would you advise using to build such a thing? I believe patterns and best practices for creating something like this already exist.
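
    A common answer to this kind of question is the builder pattern: each method records a clause and a final build() assembles the JQL string. Below is a minimal sketch; the JqlBuilder class and its clause formats are illustrative assumptions, not part of any real Jira API.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.StringJoiner;

        // Hypothetical fluent builder: collects clauses and joins them into a JQL string.
        public class JqlBuilder {
            private final List<String> clauses = new ArrayList<>();

            public JqlBuilder statusIn(String... statuses) {
                clauses.add("status IN (" + quoteList(statuses) + ")");
                return this;
            }

            public JqlBuilder reporterIn(String... reporters) {
                clauses.add("reporter IN (" + quoteList(reporters) + ")");
                return this;
            }

            public JqlBuilder createdBetween(String from, String to) {
                clauses.add("created >= \"" + from + "\" AND created <= \"" + to + "\"");
                return this;
            }

            public String build() {
                return String.join(" AND ", clauses);
            }

            private static String quoteList(String... values) {
                StringJoiner joiner = new StringJoiner(", ");
                for (String v : values) {
                    joiner.add("\"" + v + "\"");
                }
                return joiner.toString();
            }

            public static void main(String[] args) {
                String jql = new JqlBuilder()
                        .statusIn("Open", "In Progress")
                        .reporterIn("foouser", "baruser")
                        .createdBetween("2012-01-01", "2012-12-31")
                        .build();
                System.out.println(jql);
            }
        }

    The same idea scales up: keep each clause type in its own method, validate arguments there, and only serialize to text in build(). Libraries that generate queries (criteria APIs, SQL DSLs) follow the same structure.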

    Read the article

  • Behavior tree implementation details

    - by angryInsomniac
    I have been looking around for implementation details of behavior trees. The best descriptions I found were by Alex Champandard and in Damian Isla's talk about the AI in Halo 2 (the video of which is sadly locked up in the GDC Vault). However, both descriptions fall short of helping one actually create a BT, and one particular question has been bugging me for a while: when is the tree in a behavior tree evaluated? Furthermore: if the tree is in the middle of executing a sequence of actions (patrolling waypoints) and a higher-priority impulse comes in (a distraction sound), how do you switch to that side of the tree seamlessly, without resorting to a state-machine-like system? And if the impulse is judged irrelevant (the distraction is too far away to affect this guard), how do you go back to the last thing the guard was doing? I have quite a few questions like this and I don't wish to flood the board with separate queries, so if you know of any resource where questions like these are answered I would be very grateful.
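
    One common pattern, sketched below without reference to any particular engine: the tree is "evaluated" by ticking its root once per AI update. On each tick a priority selector re-checks its children from highest to lowest priority, so a new impulse (the distraction branch) is seen immediately and can preempt the running patrol branch; if that impulse declines to run (returns FAILURE because the sound is too far away), the selector falls through and ticks the branch that was already running, which resumes where it left off. All class names are hypothetical.

        import java.util.List;

        // Minimal tick-based behavior tree sketch.
        enum Status { SUCCESS, FAILURE, RUNNING }

        interface Node {
            Status tick();          // the tree is evaluated by ticking it once per AI update
            default void abort() {} // called when a higher-priority branch preempts this one
        }

        // Re-checks children highest-priority-first on every tick.
        class PrioritySelector implements Node {
            private final List<Node> children;  // index 0 = highest priority (e.g. react to sound)
            private int runningIndex = -1;      // child that returned RUNNING last tick (e.g. patrol)

            PrioritySelector(List<Node> children) {
                this.children = children;
            }

            @Override
            public Status tick() {
                for (int i = 0; i < children.size(); i++) {
                    Status status = children.get(i).tick();
                    if (status == Status.FAILURE) {
                        continue; // impulse judged irrelevant: fall through to lower priority
                    }
                    if (runningIndex > i) {
                        children.get(runningIndex).abort(); // cleanly preempt the running branch
                    }
                    runningIndex = (status == Status.RUNNING) ? i : -1;
                    return status;
                }
                runningIndex = -1;
                return Status.FAILURE;
            }
        }

    Because the patrol branch itself returns RUNNING while it walks waypoints and keeps its own cursor, "going back to what the guard was doing" costs nothing extra: when the higher-priority branch fails, the selector simply ticks the patrol node again and it continues from its current waypoint.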

    Read the article

  • Efficient way to check for changes to the contents of folders

    - by MrVimes
    I am creating an application that maintains a database of files of a certain type in a given folder (and all subfolders). Initially the program will recurse the folders and add any file it finds of that type to the database. I want the application to be able to re-scan the folder and add any files that were not there the last time the folders were scanned. It can't use the date-created property of the file, because there is a high chance of a file being added to the folders that isn't a new file. I am wondering what the most efficient way of doing this is, and whether there is a way that doesn't involve checking whether each file is already in the database (which, if there are 5,000 files, would mean 5,000 queries against a list 5,000 items in size, or 25 million 'checks' for the SQL engine to perform). A more specific question to achieve the same goal would be: is there a property of a file (in Microsoft Windows) that will reliably tell you when that file arrived in that folder?
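
    One way to sidestep the per-file lookups is to pull the set of already-known paths from the database in a single query, walk the folder tree once, and insert only the paths missing from that set; 5,000 membership checks against an in-memory HashSet are trivial. A minimal sketch, assuming a hypothetical files table with a path column:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import java.util.HashSet;
        import java.util.Set;
        import java.util.stream.Stream;

        public class FolderSync {
            public static void syncNewFiles(Connection conn, Path root, String extension)
                    throws SQLException, IOException {
                // One query: load every known path into an in-memory set for O(1) membership checks.
                Set<String> known = new HashSet<>();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT path FROM files")) {
                    while (rs.next()) {
                        known.add(rs.getString("path"));
                    }
                }

                // One walk: insert only the files the database has never seen.
                try (PreparedStatement insert = conn.prepareStatement("INSERT INTO files (path) VALUES (?)");
                     Stream<Path> walk = Files.walk(root)) {
                    walk.filter(Files::isRegularFile)
                        .map(Path::toString)
                        .filter(p -> p.endsWith(extension))
                        .filter(p -> !known.contains(p))
                        .forEach(p -> {
                            try {
                                insert.setString(1, p);
                                insert.executeUpdate(); // batching would be better for large runs
                            } catch (SQLException e) {
                                throw new RuntimeException(e);
                            }
                        });
                }
            }
        }

    As for a Windows file property: the creation time is reset by copies but preserved by moves, so it does not reliably say "when this file arrived in this folder", which is why a diff against your own database (or watching the folder for changes while the app is running) is the usual approach.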

    Read the article

  • Troubleshooting Blocked Transaction in SQL Server

    - by ChrisD
    While troubleshooting a blocked transaction issue recently, I found this code online. My apologies for not citing its source, but it's lost in my browser history somewhere. While the transaction is executing and blocked, open a connection to the database containing the transaction and run the following to return both the SQL statement that is blocked (the victim) and the statement that is causing the block (the culprit):

        -- prepare a table so that we can filter out sp_who2 results
        DECLARE @who TABLE(
            BlockedId INT,
            Status VARCHAR(MAX),
            LOGIN VARCHAR(MAX),
            HostName VARCHAR(MAX),
            BlockedById VARCHAR(MAX),
            DBName VARCHAR(MAX),
            Command VARCHAR(MAX),
            CPUTime INT,
            DiskIO INT,
            LastBatch VARCHAR(MAX),
            ProgramName VARCHAR(MAX),
            SPID_1 INT,
            REQUESTID INT)
        INSERT INTO @who EXEC sp_who2

        -- select the blocked and blocking queries (if any) as SQL text
        SELECT
            (SELECT TEXT FROM sys.dm_exec_sql_text(
                (SELECT handle FROM (
                    SELECT CAST(sql_handle AS VARBINARY(128)) AS handle
                    FROM sys.sysprocesses WHERE spid = BlockedId) query))
            ) AS 'Blocked Query (Victim)',
            (SELECT TEXT FROM sys.dm_exec_sql_text(
                (SELECT handle FROM (
                    SELECT CAST(sql_handle AS VARBINARY(128)) AS handle
                    FROM sys.sysprocesses WHERE spid = BlockedById) query))
            ) AS 'Blocking Query (Culprit)'
        FROM @who
        WHERE BlockedById != ' .'

    Read the article

  • Map, Set use cases in a general web app

    - by user2541902
    I am currently working on my own Java web app (to be shown in interviews to help get a Java job), so I haven't worked with Java in a professional environment and have had no guidance. I have a database, entity classes, and JPA relationships. The use cases are things like: a user has albums, an album has pics, a user has locations, a location has co-ordinates, etc. I used List (ArrayList) everywhere. I can do anything with a List and the DB: get an entry, find things, and so on. For example, I will keep the list of users in a List, then use queries to get a particular entry (why would I keep them in a Map with id/email as the key?). I know the workings, features, and implementing classes of Map and Set very well, and I can use them for solving an algorithm, processing some data, etc. In interviews I get asked whether I have worked with these, where I have used them, and so on. So please tell me cases where they should be used (in the DB layer or any popular real use case).
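
    A couple of typical web-app situations where Map and Set are the natural fit (a sketch with made-up User data, not tied to the question's codebase): a Map gives constant-time lookups by a natural key once rows are in memory, and a Set expresses "no duplicates" directly.

        import java.util.HashSet;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;
        import java.util.function.Function;
        import java.util.stream.Collectors;

        public class CollectionUseCases {
            record User(long id, String email) {}

            public static void main(String[] args) {
                List<User> users = List.of(
                        new User(1, "a@example.com"),
                        new User(2, "b@example.com"));

                // Map: load users once, then resolve many id -> user lookups in memory
                // (e.g. while rendering a page of albums) instead of one query per album owner.
                Map<Long, User> usersById = users.stream()
                        .collect(Collectors.toMap(User::id, Function.identity()));
                User owner = usersById.get(2L);

                // Set: fast membership tests with no duplicates, e.g. tags on a pic or the
                // ids a user has already "liked", without scanning a List every time.
                Set<String> tags = new HashSet<>(List.of("beach", "sunset", "beach"));
                boolean tagged = tags.contains("beach"); // true; "beach" is stored only once

                // LinkedHashMap in access order as a tiny LRU cache for expensive lookups.
                Map<Long, String> recentProfiles = new LinkedHashMap<Long, String>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<Long, String> eldest) {
                        return size() > 100; // keep only the 100 most recently accessed entries
                    }
                };
                recentProfiles.put(owner.id(), owner.email());

                System.out.println(usersById.size() + " users, tagged=" + tagged
                        + ", cached=" + recentProfiles.size());
            }
        }

    Inside JPA itself the same trade-off shows up: a @OneToMany can be mapped as a Set to enforce uniqueness, or as a Map keyed by a column via @MapKey, while List is the right choice when order matters.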

    Read the article

  • PHP OOP: Am I following the right way?

    - by sineverba
    I'm learning OOP (PHP). I've written my own CRUD class that performs various kinds of SQL queries. A gas station owner asked us to build a smart, simple web app where he can update the prices of his fuels (gasoline, diesel, LPG), and via an API I can fetch them and display them on his site. So I created a new class, Gasoline, but it only performs some methods of the CRUD class:

        public function getPrezzoBenzina($id) {
            $prezzo_benzina = $this->distributore->sql('SELECT prezzo_benzina FROM prezzi WHERE id = ' . $id);
            return $prezzo_benzina;
        }

    And so on (the code is pseudocode, just to explain). I could do everything in my code with only the help of the CRUD class, without needing the Gasoline class at all. So, what am I missing about OOP? Where am I going wrong?

    Read the article

  • Question about server usage, big community platform

    - by Json
    I'm working on a community platform written in PHP and MySQL, and I have some questions about server usage; maybe someone can help me out. The community is based on jQuery with many AJAX requests to update content. It makes 5-10 AJAX (JSON, GET, POST) requests every 5 seconds; the requests fetch user data like notifications and messages by running MySQL queries. I wonder how a server will handle this when there are more than 5,000 users online. Then it will be 50,000 requests every 5 seconds; what kind of server do you need to handle this? Or maybe even more: with 15,000 users online, 150,000 requests every 5 seconds. My web server has the following specs: Xeon Quad, 2048 MB RAM, 5000 GB traffic. Will it be good enough, and for how many users? Can anyone help me out, or point me to where I can find such information or how to make such a calculation?
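
    A back-of-the-envelope check, using only the numbers from the question (the per-request cost is a rough assumption):

        5,000 users  x 10 requests / 5 s = 10,000 requests per second
        15,000 users x 10 requests / 5 s = 30,000 requests per second

    Even at a few milliseconds of CPU and a couple of MySQL queries per request, that is far more than a single quad-core box with 2 GB of RAM can be expected to absorb, which is why this polling pattern is usually attacked on the application side first: poll less often, batch the notification and message checks into one endpoint, cache the hot lookups (e.g. in memcached), or move to long polling or server push so idle clients stop hitting MySQL at all.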

    Read the article

  • How to Create Reports in Microsoft Access 2010

    Reports are great ways to present information to parties who want to see relevant data in an organized format that can be easily analyzed. Microsoft Access 2010 allows you to create reports that not only make data more digestible, but also more presentable thanks to their professional look. A report's function comes from its ability to pull in or extract information from single or multiple tables or queries. It could be considered similar to a query in this sense, but what sets it apart is the way in which it presents the information in an easy to use format that you can define to fit your n...

    Read the article

  • Bug fixing approach

    - by Shirish11
    I have been working on a project involving databases. I recently received a bug report about the remote execution of some queries. Usually you try to find the actual cause of the bug and then fix it. But sometimes, when I'm fed up with doing research (and can't find suitable information on the internet), I just change the logic instead, which takes me much less time than the other option. Is this approach of mine correct, or should I try to fix the original bug, even though it involves more R&D?

    Read the article

  • Type Conversion in JPA 2.1

    - by delabassee
    The Java Persistence 2.1 specification (JSR 338) adds support for various new features such as schema generation, stored procedure invocation, use of entity graphs in queries and find operations, unsynchronized persistence contexts, injection into entity listener classes, etc. JPA 2.1 also adds support for Type Conversion methods, sometimes called Type Converters. This new facility lets developers specify methods to convert between the entity attribute representation and the database representation for attributes of basic types. For additional details on Type Conversion, you can check the JSR 338 specification and its corresponding JPA 2.1 Javadocs. In addition, you can also check these two articles. The first article ('How to implement a Type Converter') gives a short overview of Type Conversion, while the second article ('How to use a JPA Type Converter to encrypt your data') implements a simple use case (encrypting data) to illustrate Type Conversion. Mission-critical applications would probably rely on the transparent database encryption facilities provided by the database, but that's not the point here; this use case is easy enough to illustrate JPA 2.1 Type Conversion.
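
    For reference, the JPA 2.1 conversion contract is the javax.persistence.AttributeConverter interface: implement two methods and annotate the class with @Converter. A small sketch converting a Boolean attribute to the 'Y'/'N' column format many legacy schemas use (the converter name and column format are just an example):

        import javax.persistence.AttributeConverter;
        import javax.persistence.Converter;

        // Converts between a Boolean entity attribute and a CHAR(1) 'Y'/'N' database column.
        @Converter(autoApply = true) // applied to every Boolean attribute unless overridden
        public class BooleanToYNConverter implements AttributeConverter<Boolean, String> {

            @Override
            public String convertToDatabaseColumn(Boolean attribute) {
                return Boolean.TRUE.equals(attribute) ? "Y" : "N";
            }

            @Override
            public Boolean convertToEntityAttribute(String dbData) {
                return "Y".equals(dbData);
            }
        }

    On an individual attribute the converter can also be selected explicitly, e.g. @Convert(converter = BooleanToYNConverter.class) private Boolean active;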

    Read the article

  • Point me to info about constructing filters (of lists)

    - by jah
    I would like some pointers to information that would help me understand how to go about letting users filter a list of entities by their attributes, as well as by attributes of related entities. As an example, imagine a web app that provides order management of some kind. Orders and related entities are stored in a relational database, and the app has an interface that lists the orders. The problem is: how does one allow the list to be filtered by, for example:

        - order number (an attribute)
        - line item name (an attribute of an n-n related entity)
        - some text in an administrative note related to the order (text found in an attribute of a 1-1 related entity)

    I'm trying to discover whether there is something like a standard, efficient way to construct the queries and the filtering form; or some possible strategies; or any theory on the topic; or some example code. My google-fu fails me.
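
    One common strategy is to build the WHERE clause dynamically from whichever filters were actually submitted, always through bind parameters, and to join related tables only because a filter needs them. A hedged JDBC sketch with a made-up schema (orders, order_lines, admin_notes):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        public class OrderFilter {
            // Any parameter may be null, meaning "no filter on this field".
            public static PreparedStatement buildQuery(Connection conn, String orderNumber,
                                                       String lineItemName, String noteText)
                    throws SQLException {
                StringBuilder sql = new StringBuilder(
                        "SELECT DISTINCT o.* FROM orders o " +
                        "LEFT JOIN order_lines l ON l.order_id = o.id " +
                        "LEFT JOIN admin_notes n ON n.order_id = o.id WHERE 1=1");
                List<Object> params = new ArrayList<>();

                if (orderNumber != null) {
                    sql.append(" AND o.order_number = ?");
                    params.add(orderNumber);
                }
                if (lineItemName != null) {
                    sql.append(" AND l.name LIKE ?");
                    params.add("%" + lineItemName + "%");
                }
                if (noteText != null) {
                    sql.append(" AND n.body LIKE ?");
                    params.add("%" + noteText + "%");
                }

                PreparedStatement ps = conn.prepareStatement(sql.toString());
                for (int i = 0; i < params.size(); i++) {
                    ps.setObject(i + 1, params.get(i)); // bind values; never concatenate user input
                }
                return ps;
            }
        }

    The filtering form mirrors the same idea: one optional input per filterable attribute, and only non-empty inputs are translated into predicates. ORMs package this pattern under names like criteria queries or query builders, and full-text search engines take over when the "text in a note" filters get heavy.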

    Read the article

  • What are the technical and programming requirements for writing a stealth keylogger?

    - by user970533
    I'm planning to write such a stealth keylogger, one that would bypass detection by a certain antivirus (I don't want to name the vendor, as I know how well Google queries index StackExchange websites). I don't want to just download a keylogger from the internet and try to encode it to evade detection. Writing the code myself, I would have the ability to make changes as I go and to apply obfuscation in both the high-level and the low-level code. I like control, too. It seems naive, but is it true that keyloggers are a thing of the past, probably because of how effective AVs have become at detecting such programs? I would like some pointers on how one can write a robust, effective keylogger, preferably for a Windows environment.

    Read the article

  • NDepend v4 has just been released!

    - by Vincent Maverick Durano
    A few months ago I blogged about the release of NDepend v3 Continuous Integration and Reporting capabilities here. Recently, the NDepend team released v4, which comes with code rules based on C# LINQ queries (CQLinq); this makes code rules much more powerful and flexible. There are a couple of new rules available, like:

        http://www.ndepend.com/DefaultRules/webframe?Q_UI_layer_shouldn't_use_directly_DB_types.html
        http://www.ndepend.com/DefaultRules/webframe?Q_Types_with_disposable_instance_fields_must_be_disposable.html
        http://www.ndepend.com/DefaultRules/webframe?Q_Avoid_the_Singleton_pattern.html
        http://www.ndepend.com/DefaultRules/webframe?Q_Avoid_making_complex_methods_even_more_complex_(Source_CC).html

    v4 also provides NDepend.API and a dozen open-source tools developed with NDepend.API (the Power Tools): http://www.ndepend.com/API/webframe.html

    Read the article

  • Is it normal for a software developer to have lots of issues after the product went live?

    - by juniordeveloper87
    In recent months, our product (which went live probably 9 months ago) has experienced an increase in the number of users using it. We have faced lots of queries, problems, and complaints from users. Sadly, a lot of the issues seem to be coming from a module that I have been working on. At times I wonder if I am incompetent, or whether this is pretty normal in software development: that bugs are found especially during the initial stages after a product goes live. I also wonder why some of the issues we face now were not foreseen by me or the team during the development phase. I have been working as a software developer for close to 2 years now. I hope to get your opinions, feedback, and advice. Thanks!

    Read the article

  • Google60 Emulates Search Engine Querying with 1960s Technology

    - by Jason Fitzpatrick
    Google60 is a novel little project that mimics the interface of a 1960s-era computer and mashes it up with modern Google search queries. Take it for a spin; you’ll never appreciate the speed of even the slowest modern browser more. While playing with the actual project is enjoyable, make sure to check out the project notes below the interface for an interesting look at design choices and emulating an old machine. Google60 [via Unpluggd]

    Read the article

  • How to avoid being only an API programmer?

    - by anything
    I have almost six years of experience in Java. I have developed many projects using frameworks like Struts, Spring, Hibernate, jQuery, DWR, Ajax, etc. I have used these technologies in almost all the projects I have worked on. The projects were very simple, mostly CRUD-based apps. My everyday tasks involve creating a few screens, writing queries, testing, etc. After all these years I feel like I have turned into an API programmer who just uses the above-mentioned frameworks, which does not give me any satisfaction as a programmer. Is this normal, or is it just me who feels like this?

    Read the article

  • Database Insider - June 2014 issue now available

    - by Javier Puerta
    The June issue of the Database Insider newsletter is now available. (Full newsletter here)

    NEWS

    June 10: Oracle CEO Larry Ellison Live on the Future of Database Performance - At a live webcast on June 10 at Oracle’s headquarters, Oracle CEO Larry Ellison is expected to announce the upcoming availability of Oracle Database In-Memory, which dramatically accelerates business decision-making by processing analytical queries in memory without requiring any changes to existing applications. Read More

    New Study Confirms Capital Expenditure Savings with Oracle Multitenant - A new study finds that Oracle Multitenant, an option of Oracle Database 12c, drives significant savings in capital expenditures by enabling the consolidation of a large number of databases on the same number or fewer hardware resources. Read More

    Read full newsletter here

    Read the article

  • How can I get Google AdSense to approve my web app? It keeps denying it

    - by Javierfdr
    Google AdSense keeps denying my app ads because of an "insufficient content" issue. I manage a web application that allows users to set YouTube videos as alarm clocks. It includes an in-site YouTube search to retrieve videos from user queries and lists the user's alarms. The site has good traffic (500 users per day), is currently promoted by Google in the Chrome Web Store, and the AJAX requests are crawlable, following Google's guidelines (https://developers.google.com/webmasters/ajax-crawling/). Although I understand there is not much content beyond what is user-generated, I really don't know what else I should include on the site. Perhaps adding contact and about pages, and maybe another section, would improve navigation. Google argues I need a "fully launched and functioning site, allowing users to navigate throughout your site with a menu, sitemap, or appropriate links". They also ask for "full sentences or paragraphs". Isn't there a Google AdSense solution for web applications? Do all web apps have to include useless navigable subpages?

    Read the article

  • How to get experience with large-scale databases?

    - by Justin
    I have written applications that are very small scale, and the code I write works fine for them. But I have often wondered how the server-side code I write would scale up from hundreds of queries per day to millions. Also, when looking at possible jobs/projects, people are often looking for developers with experience in this sort of high-traffic database design, so I would at least like to be able to say: I haven't gotten to work on a project that was this popular, but I have at least tried to simulate it. Are there tools or frameworks that can generate a lot of traffic, or at least simulate what would happen with traffic of different orders of magnitude, so I could get some practice writing optimized code for higher-traffic applications?
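
    Dedicated tools such as Apache JMeter, ab (ApacheBench), or Gatling are the usual way to generate this kind of traffic, and mysqlslap or sysbench can hammer the database layer directly. A minimal concurrent load generator is also easy to sketch; the URL, thread count, and request total below are placeholders:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        public class MiniLoadGenerator {
            public static void main(String[] args) throws InterruptedException {
                String target = "http://localhost:8080/api/endpoint"; // placeholder URL
                int threads = 50;             // simulated concurrent users
                int requestsPerThread = 200;

                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder(URI.create(target)).GET().build();
                AtomicLong totalMillis = new AtomicLong();
                ExecutorService pool = Executors.newFixedThreadPool(threads);

                long start = System.currentTimeMillis();
                for (int t = 0; t < threads; t++) {
                    pool.submit(() -> {
                        for (int i = 0; i < requestsPerThread; i++) {
                            long begin = System.currentTimeMillis();
                            try {
                                client.send(request, HttpResponse.BodyHandlers.discarding());
                            } catch (Exception e) {
                                // a real tool would count failures; ignored in this sketch
                            }
                            totalMillis.addAndGet(System.currentTimeMillis() - begin);
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(10, TimeUnit.MINUTES);

                long requests = (long) threads * requestsPerThread;
                long elapsed = System.currentTimeMillis() - start;
                System.out.printf("%d requests in %d ms (%.1f req/s), avg latency %.1f ms%n",
                        requests, elapsed, requests * 1000.0 / elapsed,
                        (double) totalMillis.get() / requests);
            }
        }

    Pointing something like this (or JMeter with realistic think times) at a staging copy of the application, while watching the slow query log and EXPLAIN output, is a reasonable way to practise optimizing for traffic you have never actually had.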

    Read the article

  • JavaOne 2012: JDBC Community Discussion

    - by sowmya
    At JavaOne 2012, Mark Biamonte of DataDirect Technologies and Lance Andersen of Oracle organized a discussion about JDBC. To learn more about using JDBC to develop database applications, see the JDBC trail in the Java Tutorials. You will learn how to use the basic JDBC API to:

        * create tables
        * insert values into them
        * query the tables
        * retrieve the results of the queries
        * update the tables

    In the process, you will learn how to use simple statements and prepared statements, and see an example of a stored procedure. You will also learn how to perform transactions and how to catch exceptions and warnings. - Sowmya
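
    As a small taste of what the trail covers, here is a compact JDBC sketch; the JDBC URL, credentials, and coffees table are placeholders you would replace with your own:

        import java.math.BigDecimal;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class JdbcBasics {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:yourdb://localhost/demo"; // placeholder JDBC URL
                try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                    // Simple statement: create a table.
                    try (Statement st = conn.createStatement()) {
                        st.executeUpdate("CREATE TABLE coffees (name VARCHAR(50), price DECIMAL(8,2))");
                    }

                    // Prepared statement: insert values with bind parameters.
                    try (PreparedStatement insert =
                                 conn.prepareStatement("INSERT INTO coffees (name, price) VALUES (?, ?)")) {
                        insert.setString(1, "Colombian");
                        insert.setBigDecimal(2, new BigDecimal("7.99"));
                        insert.executeUpdate();
                    }

                    // Query the table and walk the ResultSet.
                    try (PreparedStatement query =
                                 conn.prepareStatement("SELECT name, price FROM coffees WHERE price < ?")) {
                        query.setBigDecimal(1, new BigDecimal("10.00"));
                        try (ResultSet rs = query.executeQuery()) {
                            while (rs.next()) {
                                System.out.println(rs.getString("name") + " costs " + rs.getBigDecimal("price"));
                            }
                        }
                    }
                }
            }
        }

    Transactions follow the same API: call conn.setAutoCommit(false), run the statements, then conn.commit() or conn.rollback().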

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via an application server's transaction manager:

        - Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache.
        - Use of the TransactionMap interface via the included resource adapter.
        - Use of the new ACID transaction framework, introduced in Coherence 3.6.

    Each of these may have significant drawbacks for certain workloads.

    Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency and a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions), since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries), since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit is that it avoids the limitations of Coherence's write-through caching implementation.

    TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be used either to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:

        - The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing, which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).
        - If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound, since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).
        - The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().

    The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect is that this feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (a greater chance of bugs and a greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.

    In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
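
    To make the first option concrete, here is a minimal sketch of the read-only cache with optimistic concurrency control. The cache is deliberately a plain ConcurrentMap rather than any Coherence API, and the products table with its version column is an invented example: the database stays the transactional resource, and the version check ensures a stale cached read cannot silently overwrite newer data.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class ReadOnlyCacheExample {
            record Product(long id, String name, long version) {}

            // Stand-in for the cache layer; reads may be slightly stale, which is acceptable here.
            private final ConcurrentMap<Long, Product> cache = new ConcurrentHashMap<>();

            // The update goes to the database only, guarded by an optimistic version check.
            public void rename(Connection conn, Product cached, String newName) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE products SET name = ?, version = version + 1 WHERE id = ? AND version = ?")) {
                    ps.setString(1, newName);
                    ps.setLong(2, cached.id());
                    ps.setLong(3, cached.version()); // fails if someone else committed a newer version
                    if (ps.executeUpdate() == 0) {
                        cache.remove(cached.id());   // our copy was stale: invalidate and let the caller retry
                        throw new SQLException("Optimistic lock failure for product " + cached.id());
                    }
                }
                // Invalidate rather than update the cache; the next read reloads the committed row.
                cache.remove(cached.id());
            }
        }

    This is the shape of the recommendation in the closing paragraph: the cache accelerates reads, while correctness is enforced where the transaction actually commits.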

    Read the article

  • Data Conversion in SQL Server

    Most of the time, you do not have to worry about implicit conversion in SQL expressions, or when assigning a value to a column. Just occasionally, though, you'll find that data gets truncated, queries run slowly, or comparisons just seem plain wrong. Robert Sheldon explains why you sometimes need to be very careful if you mix data types when manipulating values.

    Read the article

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over: we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2, we have heard your feedback and suggestions, and based on that we have come up with a whole new set of features. Here are some of the highlights:

        - A New Programming Model – a clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server.
        - Tight integration with the Reactive Framework (Rx) – you can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects.
        - High Availability – check-pointing over temporal streams and multiple processes with shared computation.

    Here is how simple coding can be with the 2.1 programming model:

        class Program
        {
            static void Main(string[] args)
            {
                using (Server server = Server.Create("Default"))
                {
                    // Create an app
                    Application app = server.CreateApplication("app");

                    // Define a simple observable which generates an integer every second
                    var source = app.DefineObservable(() =>
                        Observable.Interval(TimeSpan.FromSeconds(1)));

                    // Define a sink.
                    var sink = app.DefineObserver(() =>
                        Observer.Create<long>(x => Console.WriteLine(x)));

                    // Define a query to filter the events
                    var query = from e in source
                                where e % 2 == 0
                                select e;

                    // Bind the query to the sink and create a runnable process
                    using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                    {
                        Console.WriteLine("Press a key to dispose the process...");
                        Console.ReadKey();
                    }
                }
            }
        }

    That's how easily you can define a source, a sink, compose a query, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned: you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate it if you could provide feedback on the docs as well, best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface. Regards, The StreamInsight Team

    Read the article

  • JavaScript malware analysis

    - by begueradj
    I want to test websites for the presence of JavaScript malware. I plan to develop a Python program that sends the URL of a given website to a virtual machine, where the dynamic execution of any malicious JavaScript embedded in the website's page is monitored. My questions: Should my VM be Windows or Linux? What if the malware damages my VM: is there a hint on how to avoid that, or should a new VM be launched automatically instead? If I use a telnet client library to communicate with the VM, must I implement a server within the VM to deal with my queries, or can I avoid this? I am just looking for hints and general ideas. Thank you for any help.

    Read the article
