Search Results

Search found 22756 results on 911 pages for 'power query'.

Page 420/911 | < Previous Page | 416 417 418 419 420 421 422 423 424 425 426 427  | Next Page >

  • Bind9 forwarding zone not working

    - by JMW
    I've set up a forwarding zone on a RHEL 6 BIND server like this:

        zone "office.local" IN {
            type forward;
            forward only;
            forwarders { 192.168.0.2; 192.168.0.3; };
        };

    When I try to query it with dig @127.0.0.1 monitorsms.office.local, I see the following messages in the syslog:

        client 127.0.0.1#39376: query: monitorsms.office.local IN A + (127.0.0.1)
        validating @0x7ff7640357d0: monitorsms.office.local A: bad cache hit (monitorsms.office.local/DS)

    Google tells me that there is an issue with DNSSEC, but neither server has DNSSEC configured, and thus neither sends any DNSSEC records. What's wrong with my configuration?
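
    A hedged note on the error, not a confirmed fix: the "bad cache hit (.../DS)" line is typically produced by the resolver trying to DNSSEC-validate the forwarded answer, even when the forwarders send no DNSSEC records. If validation is not needed on this internal resolver, it can be disabled in the options block of named.conf and the cache flushed:

        options {
            ...
            // Assumption: this resolver serves only internal clients,
            // so DNSSEC validation can safely be turned off here.
            dnssec-validation no;
        };

        # then clear the cached validation failure:
        rndc flush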

    Read the article

  • Vaadin Calendar events not shown if overnight [migrated]

    - by B_B
    In my Vaadin project it is possible to create events that are shown in a calendar. It works, except when an event spans midnight (say, the night from the 23rd to the 24th) and the calendar shows only the 24th as its single day. In that case the part of the event that falls on the 24th is supposed to be shown, but it is not. When I switch to the weekly view, the event is shown properly. Here is the function where I fetch the data and fill a container for the calendar:

        /* Fill calendar from database */
        void updateData() {
            final BeanItemContainer<TypeReservationEvent> container =
                    new BeanItemContainer<TypeReservationEvent>(TypeReservationEvent.class);
            Map<String, Object> parameters = new HashMap<String, Object>();
            parameters.put("roomParent", chosenRoom);
            String query = "SELECT DISTINCT res FROM EntityReservation res, EntityRoom r, EntityTable rt "
                    + "WHERE res.tableId = rt.id AND rt.roomParent = :roomParent";
            reservationList = facade.list(query, parameters);
            for (EntityReservation rt : reservationList) {
                container.addBean(new TypeReservationEvent(rt));
            }
            container.sort(new Object[]{"start"}, new boolean[]{true});
            cal.setContainerDataSource(container, "caption", "description", "start", "end", "styleName");
            // Force calendar to refresh
            if (selectCalViewType.getValue() == chooseWeeklyView) {
                setViewType(calViewType.DAILY);
                setViewType(calViewType.WEEKLY);
            } else if (selectCalViewType.getValue() == chooseDailyView) {
                setViewType(calViewType.WEEKLY);
                setViewType(calViewType.DAILY);
            }
        }

    TIA
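
    One workaround to consider, as a sketch only (the splitting helper below is hypothetical and uses only java.util.Calendar and java.util.Date): split any event that crosses midnight into per-day segments before adding it to the container, so the daily view always receives an event that starts and ends on the displayed day.

        // Hypothetical helper: splits a start/end range at midnight so each
        // segment falls entirely within one calendar day.
        private List<Date[]> splitAtMidnight(Date start, Date end) {
            List<Date[]> segments = new ArrayList<Date[]>();
            Calendar cursor = Calendar.getInstance();
            cursor.setTime(start);
            while (true) {
                // Midnight at the end of the day the cursor is in.
                Calendar midnight = (Calendar) cursor.clone();
                midnight.set(Calendar.HOUR_OF_DAY, 0);
                midnight.set(Calendar.MINUTE, 0);
                midnight.set(Calendar.SECOND, 0);
                midnight.set(Calendar.MILLISECOND, 0);
                midnight.add(Calendar.DAY_OF_MONTH, 1);
                if (midnight.getTime().before(end)) {
                    segments.add(new Date[]{cursor.getTime(), midnight.getTime()});
                    cursor = midnight; // continue with the next day
                } else {
                    segments.add(new Date[]{cursor.getTime(), end});
                    break;
                }
            }
            return segments;
        }

    Each segment could then be wrapped in its own TypeReservationEvent before container.addBean(...), so the daily view sees an event bounded by the displayed day.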

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day one, and major improvements may arrive any day. Maybe you find that

        SELECT id, name, address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    would be better written as

        SELECT persons.id, persons.name, addresses.address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git, you can revert to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of both results: the same style in all commits, and minimal merge work? To clarify, this is not about best practices when starting the project, but rather about what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history. Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions, or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
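
    For reference, the "reformat the entire history" approach is mechanically simple in Git, though it rewrites every commit ID (a sketch only: format-sql stands in for whatever reformatting tool applies, and the whole team must re-clone or rebase onto the rewritten history afterwards):

        # Rewrites every commit on every branch; coordinate with the team first.
        git filter-branch --tree-filter \
            'find . -name "*.sql" -exec format-sql --in-place {} +' -- --all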

    Read the article

  • Retrieving database column using JSON [migrated]

    - by arokia
    I have a database table consisting of four columns (id, symbol, name, contractnumber). All four columns and their data are displayed in the user interface using JSON. There is a function which is responsible for adding a new column to the database, e.g. countrycode. The column is added successfully to the database, BUT the newly added column is not shown in the user interface. Below is my code that displays the columns. Can you help me?

    table.php:

        $(document).ready(function () {
            // prepare the data
            var theme = getDemoTheme();
            var source = {
                datatype: "json",
                datafields: [
                    { name: 'id' },
                    { name: 'symbol' },
                    { name: 'name' },
                    { name: 'contractnumber' }
                ],
                url: 'data.php',
                filter: function() {
                    // update the grid and send a request to the server.
                    $("#jqxgrid").jqxGrid('updatebounddata', 'filter');
                },
                cache: false
            };
            var dataAdapter = new $.jqx.dataAdapter(source);
            // initialize jqxGrid
            $("#jqxgrid").jqxGrid({
                source: dataAdapter,
                width: 670,
                theme: theme,
                showfilterrow: true,
                filterable: true,
                columns: [
                    { text: 'id', datafield: 'id', width: 200 },
                    { text: 'symbol', datafield: 'symbol', width: 200 },
                    { text: 'name', datafield: 'name', width: 100 },
                    { text: 'contractnumber', filtertype: 'list', datafield: 'contractnumber' }
                ]
            });
        });

    data.php:

        <?php
        // Include the db.php file
        include('db.php');
        $query = "SELECT * FROM pricelist";
        $result = mysql_query($query) or die("SQL Error 1: " . mysql_error());
        $pricelist = array();
        // get data and store in a json array
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $pricelist[] = array(
                'id' => $row['id'],
                'symbol' => $row['symbol'],
                'name' => $row['name'],
                'contractnumber' => $row['contractnumber']
            );
        }
        echo json_encode($pricelist);
        ?>
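
    The likely cause, judging only from the code above: the column list is hard-coded in three places, so a column added to the database table never reaches the grid. A sketch of the additions for the countrycode example:

        // table.php: add the new field to both lists.
        // In source.datafields:
        { name: 'countrycode' }
        // In the columns array passed to jqxGrid:
        { text: 'countrycode', datafield: 'countrycode', width: 100 }

        // data.php: include the new column in each row:
        // 'countrycode' => $row['countrycode'],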

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace, using the SLOW SQL (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer.

    First session:

        -- Open a new session in SQL*Plus.
        -- Make sure you are using an updated PLAN_TABLE. This can be done by
        -- dropping it and recreating it by running:
        -- SQL> @?/rdbms/admin/utlxplan.sql
        set lines 1000
        set pages 1000
        spool xplan_1.txt
        EXPLAIN PLAN FOR
        <<<< Replace this line with exactly the same query you used above.
             Force a hard parse by modifying the case of a character. >>>>
        @?/rdbms/admin/utlxplp
        spool off
        EXIT

    Second session:

        -- Open a second session in SQL*Plus.
        ALTER SESSION SET max_dump_file_size = unlimited;
        ALTER SESSION SET tracefile_identifier = '10046';
        ALTER SESSION SET statistics_level = ALL;
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        <<<< Replace this line with exactly the same query you used above.
             Force a hard parse by modifying the case of a character. >>>>
        select 'verify cursor closed' from dual;
        ALTER SYSTEM SET EVENTS '10046 trace name context off';
        EXIT

    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it. Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?

        sqlplus "/ as sysdba"
        show parameters dump_dest

    This will show you the bdump, cdump and udump locations.

    Read the article

  • Bind dns server in Solaris 10 and win xp clients

    - by stevecomptech
    Hi, I added this to the zone db file (I am running Solaris 10):

        _ldap._tcp.mydomain.com.                SRV 0 0 389 dc.mydomain.com.
        _kerberos._tcp.mydomain.com.            SRV 0 0 88  dc.mydomain.com.
        _ldap._tcp.dc._msdcs.mydomain.com.      SRV 0 0 389 dc.mydomain.com.
        _kerberos._tcp.dc._msdcs.mydomain.com.  SRV 0 0 88  host.mydomain.com.

    Now I get this error when I try to join Windows XP to the domain:

        The query was for the SRV record for _ldap._tcp.dc._msdcs.mydomain.com
        The following domain controllers were identified by the query:
        host.mydomain.com
        Common causes of this error include:
        - Host (A) records that map the name of the domain controller to its
          IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the
          network or are not running.

    What do I need to change so that my Windows XP machine can join the domain?
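
    A hedged observation based on the error text: the SRV records point at dc.mydomain.com and host.mydomain.com, but the join will still fail if those names have no address (A) records in the zone. A minimal sketch (192.168.0.10 is a placeholder for the domain controller's real address):

        dc.mydomain.com.      IN A  192.168.0.10
        host.mydomain.com.    IN A  192.168.0.10

    It may also be worth checking whether the last SRV record was meant to point at dc.mydomain.com. like the other three.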

    Read the article

  • How to combine RewriteRule of index.php and queries rewrite and avoid Server Error 404?

    - by Binyamin
    Both RewriteRules work fine, except when used together.

    1. Remove all query strings except ?callback=.*:

        # /api?callback=foo has no rewrite
        # /whatever?whatever=foo gets a 301 redirect to /whatever
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

    2. Rewrite index.php queries api and url=$1:

        # /api returns data index.php?api&url=
        # /api/whatever returns data index.php?api&url=whatever
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]

    Is there any valid combination of these RewriteRules that keeps their functionality? The following combination returns Server Error 404 for /api/?callback=foo:

        # Remove all query strings except query "callback"
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

        # Rewrite index.php queries
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        # Server Error 404 on /api/?callback=foo and /api/whatever?callback=foo
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]
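
    A hedged reading of the failing case: the RewriteCond inserted in front of the ^api rule excludes exactly the /api/...?callback= requests from that rule, so nothing maps them to index.php and they fall through to a 404. One sketch that keeps the intent is to guard only the redirect (and also skip it for requests already rewritten to index.php, since THE_REQUEST still carries the original query string on mod_rewrite's second pass) and let the /api rule run unconditionally:

        # Strip query strings via 301, except /api?callback=... requests,
        # and never redirect once a request has been rewritten to index.php.
        RewriteCond %{REQUEST_URI} !^/index\.php
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

        # /api requests always map to index.php; no extra condition needed.
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]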

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of full text. Our problem domain is pretty specific: users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things), and we are using Python NLTK to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza". There's some machine learning in there too, to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness.

    My problem is that I am at a loss for how to score these results. The text being searched is split up into a list of tokens, so my initial approach would be to normalize a float score between 0.0 and 1.0, generated from how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more; greater frequency, greater score; and so on). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. (A sketch of this kind of scoring is below.)

    I am curious whether anyone has had to solve a similar problem of grading search relevance across measurable metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
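
    A sketch of the combination being described; the weights and the exponential recency decay are assumptions, not recommendations:

        import math, time

        def score(tokens, term, post_timestamp,
                  w_pos=0.5, w_freq=0.3, w_recency=0.2, half_life_days=7.0):
            """Combine term position, frequency, and post recency into a
            0.0-1.0 score (all weights here are assumptions)."""
            positions = [i for i, t in enumerate(tokens) if t == term]
            if not positions:
                return 0.0
            # Earlier mentions are worth more: ~1.0 at the start of the list.
            pos_score = 1.0 - positions[0] / float(len(tokens))
            # Diminishing returns on repeated mentions.
            freq_score = 1.0 - 1.0 / (1.0 + len(positions))
            # Exponential decay of relevance with an assumed 7-day half-life.
            age_days = (time.time() - post_timestamp) / 86400.0
            recency_score = math.exp(-math.log(2.0) * age_days / half_life_days)
            return w_pos * pos_score + w_freq * freq_score + w_recency * recency_score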

    Read the article

  • Gave a talk at SoCal Code Camp at USC today titled “Linq to Objects A-Z”

    - by dotneteer
    I gave a talk at SoCal Code Camp on LINQ to Objects. With careful categorization of the LINQ functions, I was able to cover the entire set of LINQ functions in only 35 minutes and spend the rest of the time on demos. In my first demo, I showed how I was able to write a top-20-URLs type of query using 4 lines of library code and 9 lines of LINQ code, without tools like Log Parser. I also demonstrated that I only needed to change 2 lines of code to go from querying a single log file to a whole directory of log files. It would be just as simple to run the query against multiple servers in parallel. In my second demo, I discussed how to turn graph depth-first search (DFS) and breadth-first search (BFS) into a LINQ-queryable problem. The class LinqToGraph contains the only DFS and BFS code I ever have to write; the rest can be done in the lambdas passed to the DFS or BFS calls. In future blogs, I will provide a more detailed explanation of the code. Links: Link to PowerPoint slides. Link to demos.
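
    For a flavor of the first demo, here is a reconstruction under assumptions (the real demo code is in the linked slides; the space-separated log format with the URL at index 6 is hypothetical):

        // using System.IO; using System.Linq;
        // Hypothetical log format: space-separated fields, URL at index 6.
        var top20 = Directory.EnumerateFiles(logDir, "*.log")   // whole directory of logs
            .SelectMany(file => File.ReadLines(file))
            .Select(line => line.Split(' ')[6])
            .GroupBy(url => url)
            .OrderByDescending(g => g.Count())
            .Take(20)
            .Select(g => new { Url = g.Key, Hits = g.Count() });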

    Read the article

  • Testing a codebase with sequential cohesion

    - by iveqy
    I have this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to use TDD to continue the development, and I have found a nice C unit-testing framework for this. However, I'm totally stuck on how to apply it. Take this case, for example: a user types the letter 'l', which is captured by ncurses getch(), and then an sqlite3 query is run that calls a callback function for every row. This callback function prints stuff to the screen via ncurses.

    So the obvious way to fully test this is to simulate a keyboard and a terminal and make sure that the output is what's expected. However, this sounds too complicated. I was thinking about adding an abstraction layer between the database and the UI, so that the callback function populates a list of entries and that list is printed later. In that case I would be able to check whether the list contains the expected values. (A sketch of this idea is below.) However, why should I struggle with a data structure and lists in my program when sqlite3 already does this? For example, if the user wants to see the list sorted some other way, it would be expensive to throw away the list and repopulate it. I would need to sort the list, but why should I implement sorting when sqlite3 already has it? With my original design I could just do another query, sorted differently.

    Previously I've only done TDD with command-line applications, where it's really easy to just compare the output with what I expect. Another way would be to add a CLI interface to the program and wrap a test program around the CLI to test everything (the way git.git does with its test framework). So the question is: how do I add testing to a tightly integrated database/UI?
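
    One way to frame that abstraction layer, as a sketch under assumptions (the struct and function names are hypothetical): keep the sqlite3 callback free of ncurses calls and hand it a row-sink function pointer, so tests can supply a sink that records rows instead of printing them.

        /* Hypothetical abstraction: the sqlite3 callback writes rows to a
         * sink instead of calling ncurses directly. */
        typedef void (*row_sink)(void *ctx, int ncols, char **values);

        struct query_ctx {
            row_sink sink;   /* curses_print_row in the app, a recorder in tests */
            void *sink_ctx;
        };

        static int on_row(void *arg, int ncols, char **values, char **names)
        {
            struct query_ctx *q = arg;
            q->sink(q->sink_ctx, ncols, values);
            return 0;  /* keep iterating */
        }

        /* In production: sqlite3_exec(db, sql, on_row, &ctx, &errmsg);
         * In tests: ctx.sink appends rows to an array that is asserted on. */

    Sorting and re-querying still stay in sqlite3 with this design; the sink only decouples row delivery from rendering.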

    Read the article

  • Structuring cascading properties - parent only or parent + entire child graph?

    - by SB2055
    I have a Folder entity that can be moderated by users. Folders can contain other folders, so I may have a structure like this:

        Folder 1
          Folder 2
            Folder 3
          Folder 4

    I have to decide how to implement moderation for this entity. I've come up with two options:

    Option 1: When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine if the user can moderate Folder 3, I check whether User 1 is the moderator of any parent folder. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity.

    Option 2: When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1, and all child entities down to the grandest of grandchildren when the relationship is created; if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all moderators into the new entity. But when I need to show only the top-level folders that a user is moderating, I need to query all folders that have a parent folder that the user does not moderate, as opposed to option 1, where I just query any items that the user is moderating.

    I think it comes down to determining whether users will be querying for all parent items more than they'll be querying child items... if so, then option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework, in case it matters.
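
    A sketch of the option 1 check (assumptions: the Folder entity has a Parent navigation property and a Moderators collection; the names are hypothetical):

        // Walk up the parent chain until a folder moderated by the user is found.
        bool CanModerate(User user, Folder folder)
        {
            for (var current = folder; current != null; current = current.Parent)
            {
                if (current.Moderators.Contains(user))
                    return true;
            }
            return false;
        }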

    Read the article

  • Should I expose IObservable<T> on my interfaces?

    - by Alex
    My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion, and processes those blocks. Let's say we have data items of type Foo arriving from some source (from the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset, and process objects of type Bar. One of us suggested the following design. Its main theme is exposing IObservable objects directly from the interfaces of our components.

        // ********* Interfaces *********
        interface IFooSource
        {
            // the event stream of objects of type Foo
            IObservable<Foo> FooArrivals { get; }
        }

        interface IBarSource
        {
            // the event stream of objects of type Bar
            IObservable<Bar> BarArrivals { get; }
        }

        // ********* Implementations *********
        class FooSource : IFooSource
        {
            // Here we put logic that receives Foo objects from the network
            // and publishes them to the FooArrivals event stream.
        }

        class FooSubsetsToBarConverter : IBarSource
        {
            IFooSource fooSource;

            public IObservable<Bar> BarArrivals
            {
                get
                {
                    // Apply Rx operators to fooSource.FooArrivals, like Buffer,
                    // Window, Join and others, and return IObservable<Bar>.
                }
            }
        }

        // This class will subscribe to the bar source and do the processing.
        class BarsProcessor
        {
            BarsProcessor(IBarSource barSource);
            void Subscribe();
        }

        // ********* Main *********
        class Program
        {
            public static void Main(string[] args)
            {
                var fooSource = FooSourceFactory.Create();
                // This will create FooSubsetToBarConverter and BarsProcessor:
                var barsProcessor = BarsProcessorFactory.Create(fooSource);
                barsProcessor.Subscribe();
                // This enters a loop of listening for Foo objects from the
                // network and notifying about their arrival:
                fooSource.Run();
            }
        }

    The other suggested a design whose main theme is using our own publisher/subscriber interfaces, and using Rx inside the implementations only when needed.

        // ********* Interfaces *********
        interface IPublisher<T>
        {
            void Subscribe(ISubscriber<T> subscriber);
        }

        interface ISubscriber<T>
        {
            void Callback(T item);
        }

        // ********* Implementations *********
        class FooSource : IPublisher<Foo>
        {
            public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ }
            // Here we put logic that receives Foo objects from some source
            // (the network?) and publishes them to the registered subscribers.
        }

        class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar>
        {
            public void Callback(Foo foo)
            {
                // Here we put logic that aggregates Foo objects and publishes
                // Bars when we have received a subset of Foos that match our
                // criteria. Maybe we use Rx here internally.
            }

            public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ }
        }

        class BarsProcessor : ISubscriber<Bar>
        {
            public void Callback(Bar bar)
            {
                // Here we put code that processes Bar objects.
            }
        }

        // ********* Main *********
        class Program
        {
            public static void Main(string[] args)
            {
                var fooSource = fooSourceFactory.Create();
                // This will create BarsProcessor and perform all the
                // necessary subscriptions:
                var barsProcessor = barsProcessorFactory.Create(fooSource);
                fooSource.Run();
            }
        }

    Which one do you think is better: exposing IObservable and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?

    Here are some things to consider about the designs:

    - In the first design the consumer of our interfaces has the whole power of Rx at his/her fingertips and can apply any Rx operators. One of us claims this is an advantage; the other claims it is a drawback.
    - The second design allows us to use any publisher/subscriber architecture under the hood, while the first design ties us to Rx.
    - If we wish to use the power of Rx, the second design requires more work, because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.

    Read the article

  • Why is BIND giving me a SERVFAIL in this case? (Notes inside)

    - by imaginative
    Woke up this morning to a bunch of the following:

        root@foo:/etc/bind# dig @1.2.3.4 foo.example.com

        ; <<>> DiG 9.6.1-P2 <<>> @1.2.3.4 foo.example.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36121
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;; foo.example.com. IN A

        ;; Query time: 0 msec
        ;; SERVER: 1.2.3.4#53(1.2.3.4)
        ;; WHEN: Thu Apr 1 09:57:59 2010
        ;; MSG SIZE rcvd: 31

    Some background on the fictitious "1.2.3.4": it's a slave name server in my nameserver "farm". Technically I have ns1 (the master) and ns2/ns3. Currently ns1/ns2 are down for maintenance, so I left ns3 serving live traffic. That's the point: DNS is supposed to be resilient. Now the odd part is, "1.2.3.4" was serving requests for example.com just fine for the last 4-5 days. This morning I got a phone call that it's non-responsive. After investigation I saw the message you see above: SERVFAIL. I looked into the zone file and saw the following:

        example.com IN SOA ns1.example.com. hostmaster.mail.example.com. (

    I wondered whether at this point the nameserver thought it was not authoritative over example.com, and adjusted it to the following:

        example.com IN SOA ns3.example.com. hostmaster.mail.example.com. (

    After that, it started responding again for all authoritative queries for example.com. I have no idea why. I thought these things were supposed to be normalized upon zone transfer from ns1 to ns3? Can someone please explain why this happened and how to prevent it from happening in the future? I've never had a similar problem, and because I don't understand it well, I might be missing some critical information in this question, so please let me know if I can add any detail to make things clearer. One more thing to note: I have other domains that I'm authoritative for whose SOA still says ns1.example.com. and not ns3.example.com. Those domains are serving requests just fine! Is it a matter of time before they stop as well and I have to change their SOA to ns3.example.com? Is this also only required because ns1 and ns2 are currently offline?

    Read the article

  • Long 'Wait' Time for three php/CSS files. Is something blocking them?

    - by William Pitcher
    I have been speed-optimizing a WordPress site to little effect. There are three CSS-related PHP files from the WordPress theme that are delaying page loads on the site. One of the three files is basically one line of custom CSS from the custom-CSS feature in the theme. You can see what I am talking about in this Pingdom speed test: the yellow is 'Wait', and there are no slow items in the cut-off portion of the image. The full results are here: Pingdom Results Page.

    1. Any thoughts on what might be causing this? I understand that I have blocking CSS or JS files, but I don't see anything that would be causing that long of a wait. When I ran the P3 Plugin Profiler, WordPress and all plugins appeared fine: it is the theme that is taking all the time. GTmetrix recommends avoiding dynamic queries. I assume all the ver=3.61 references are to the version of WordPress (which I am using). I noticed that my WordPress sites using other themes don't make this query (at least not over and over).

    2. Is this typical coding practice?

    3. How much negative impact do these query strings have: a little or a lot?

    I tried searching for similar questions here; please excuse me if I missed something. Sometimes I know just enough to be dangerous.
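
    On question 3, one commonly suggested experiment (a sketch, assuming the theme loads its CSS through the standard WordPress enqueue path): strip the ver= query string so proxies and CDNs can cache the files, then re-test to measure the difference.

        // In the theme's functions.php: remove the ?ver=... query string
        // from enqueued styles and scripts so they become cacheable.
        function remove_asset_version( $src ) {
            return remove_query_arg( 'ver', $src );
        }
        add_filter( 'style_loader_src',  'remove_asset_version', 9999 );
        add_filter( 'script_loader_src', 'remove_asset_version', 9999 );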

    Read the article

  • Software Center seems to freeze system when installing, syslog has "blocked for more than 120 seconds" errors

    - by nbm
    12.04 (precise), 64-bit
    Kernel Linux 3.2.0-39
    3.6 GB memory
    Intel Core 2 Duo CPU @ 2.40GHz x 2
    WUBI-installed Ubuntu running on a MacBook Pro 7.1, with OS X running Vista via Boot Camp (hey, I like lots of OSes, m'kay?)

    When installing from Ubuntu Software Center my system very frequently freezes. This has happened on 4 of the last 5 installs. Most recently I was installing the Google Earth .deb from Google's website: clicking the .deb file automatically opens Software Center (otherwise I would have used Synaptic, as I've grown to expect Software Center to freeze my system and I'm rather tired of it). By "freeze" I mean nothing works: no dash, no launcher, no mouse movement, no alt-tab, no terminal (the keyboard does not work). Software Center does show the "installing" icon, but after that it greys out and I can't click anything. REISUB has no effect, but a cold power-down and restart is possible. Occasionally, after 5-10 minutes, I'll be able to move the mouse / use the keyboard and run a launcher command or two, although other open apps (Chrome and Software Center) will still be greyed out/frozen. (I've never waited longer than that; if it's still unresponsive after 15 minutes I just power down and restart.)

    Most recently, which is why I am finally posting a question, I waited about 15 minutes and was finally able to open System Monitor while this was going on. Processes told me that System Monitor was using about 20% of CPU, and nothing else was using much (zeros mostly). In fact I didn't even see Software Center listed. However, at this point the system finally partially unfroze, the installation completed, and while I wasn't able to close Software Center, I was able to do a system shutdown and fresh restart, and then I went and took a look at the syslog.

    In /var/log/syslog I see a lot of "blocked for more than 120 seconds" messages, similar to the unanswered question "ubuntu hang out with this message: blocked for more than 120 seconds" (and I'm not running a virtual machine). My full syslog, with stack traces, looks very, very similar to the one in "Why do tasks on Amazon Xen instance block for over 120 seconds causing server to hang?" Note that that question was solved because the problem was being caused by Amazon, and Amazon fixed the bug; I'm not running anything Amazon-related, but my syslog does look very similar.

    My question is also similar to "Troubleshooting server hang", but the referenced "duplicate" of that question is about how to kill processes and restart when the system freezes. I know how to kill processes and restart; I want to figure out what is causing the problem so I can try to fix it. I realize that I could just use Synaptic instead of Ubuntu Software Center, but I'd like to try to solve the problem if possible. I'm thinking I should perhaps submit a bug report, but I wanted to first see if anyone else is having similar problems, and if so, what you all did to fix them. I see a number of questions about Software Center freezing, and others, including those I linked, about the "blocked for more than 120 seconds" log error, but I didn't see any question that links the two. I did save a copy of the syslog report if anyone wants to see it, but as mentioned it's quite similar to the one posted in the Amazon-related question... and I didn't want to take up even more space unnecessarily as, my apologies, this question has already become extremely verbose!

    Read the article

  • Entity communication: Message queue vs Publish/Subscribe vs Signal/Slots

    - by deft_code
    How do game engine entities communicate? Two use cases:

    1. How would entity_A send a take-damage message to entity_B?
    2. How would entity_A query entity_B's HP?

    Here's what I've encountered so far:

    Message queue:
    - entity_A creates a take-damage message and posts it to entity_B's message queue.
    - entity_A creates a query-hp message and posts it to entity_B. entity_B in return creates a response-hp message and posts it to entity_A.

    Publish/subscribe:
    - entity_B subscribes to take-damage messages (possibly with some preemptive filtering so only relevant messages are delivered). entity_A produces a take-damage message that references entity_B.
    - entity_A subscribes to update-hp messages (possibly filtered). Every frame entity_B broadcasts update-hp messages.

    Signal/slots:
    - ???
    - entity_A connects an update-hp slot to entity_B's update-hp signal.

    Something better? Do I have a correct understanding of how these communication schemes would tie into a game engine's entity system? How do entities in commercial game engines communicate?
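
    For concreteness, a minimal signal/slot sketch in C++ (an illustration only; real engines, or libraries like Boost.Signals2, add connection management, thread safety, and much more):

        #include <functional>
        #include <utility>
        #include <vector>

        // Minimal signal: a list of callbacks invoked on emit.
        template <typename... Args>
        struct Signal {
            std::vector<std::function<void(Args...)>> slots;
            void connect(std::function<void(Args...)> slot) {
                slots.push_back(std::move(slot));
            }
            void emit(Args... args) {
                for (auto& s : slots) s(args...);
            }
        };

        struct Entity {
            int hp = 100;
            Signal<int> update_hp;  // emitted whenever hp changes
            void take_damage(int dmg) { hp -= dmg; update_hp.emit(hp); }
        };

        // entity_A connects a slot to entity_B's update-hp signal:
        //   b.update_hp.connect([](int hp) { /* react to B's new hp */ });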

    Read the article

  • How to tell if any MySQL connections has been dropped or timed out?

    - by Continuation
    A client is using PHP to connect to MySQL. The PHP scripts and the MySQL database are located on two different Linux servers. He complained that database connections were being dropped or timed out, and asked me to take a look. Is there any place in MySQL that can show me what and how many connections have been dropped or timed out? I looked into the slow query log and didn't see anything. Any suggestions on how to diagnose this dropped/timed-out database connection problem? Thanks

    EDIT: The slow query log is enabled in my.cnf:

        log-slow-queries=/var/log/mysql-slow-queries.log

    And when I do:

        mysql> show global status;

    I get:

        | Slow_queries | 11402347 |

    So there are a lot of slow queries. But the file /var/log/mysql-slow-queries.log doesn't exist. Why is that?
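
    A hedged starting point for the diagnosis: MySQL keeps dedicated counters for dropped and timed-out connections, and the usual culprits are the idle-connection timeouts, so these are worth checking before digging further:

        -- connections MySQL dropped (failed handshakes, clients that died
        -- or timed out mid-session)
        SHOW GLOBAL STATUS LIKE 'Aborted_connects';
        SHOW GLOBAL STATUS LIKE 'Aborted_clients';

        -- idle-connection timeouts behind many "server has gone away" errors
        SHOW VARIABLES LIKE 'wait_timeout';
        SHOW VARIABLES LIKE 'interactive_timeout';

    As for the missing log file: the Slow_queries counter only proves slow queries happened; the file is written only if the mysqld user can create it at that path and the option was active at startup, so checking directory permissions and restarting mysqld are the usual first steps.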

    Read the article

  • EXEC() syntax error using ODBC

    - by Mike Trader
    I have written a little ETL application from which I wish to run a few lines of T-SQL. If I enter a simple query like

        SELECT * FROM MyTable

    everything is fine. All single-line commands run as expected. A multi-line query like this is also fine:

        DECLARE @TableName NVARCHAR(MAX)
        SET @TableName = 'MyTable'
        EXECUTE ('DROP TABLE ' + @TableName)

    However, when I try to run:

        DECLARE @TableName NVARCHAR(MAX)
        OPEN Tables
        FETCH NEXT FROM Tables INTO @TableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC ('DROP TABLE ' + @TableName)
            FETCH NEXT FROM Tables INTO @TableName
        END

    I get a syntax error after TABLE in the EXEC() call. I have spent six hours trying to figure this out, thinking perhaps I need to escape the single quotes or something. I just cannot see the problem. A set of fresh eyes would be appreciated.
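
    One hedged observation: as posted, the batch OPENs a cursor named Tables that is never declared, so the driver may be reporting the error against the wrong statement. A complete sketch with the cursor declared and cleaned up (the SELECT and its filter are placeholders):

        DECLARE @TableName NVARCHAR(MAX);
        DECLARE Tables CURSOR FOR
            SELECT name FROM sys.tables WHERE name LIKE 'staging_%';  -- placeholder filter
        OPEN Tables;
        FETCH NEXT FROM Tables INTO @TableName;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC('DROP TABLE ' + @TableName);
            FETCH NEXT FROM Tables INTO @TableName;
        END
        CLOSE Tables;
        DEALLOCATE Tables;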

    Read the article

  • BIND unstable with DLZ+MySQL on Ubuntu 9.10, any ideas?

    - by Chris
    My BIND server keeps dropping out and I can't work out why. Here is some info from the syslog that I think pertains to the failure(s):

        Apr 22 21:12:17 dnsdebug named[6613]: mysql driver unable to return result set for lookup query
        Apr 22 21:12:17 dnsdebug kernel: [285552.573949] type=1503 audit(1271963537.759:53): operation="open" pid=6618 parent=1 profile="/usr/sbin/named" requested_mask="::rw" denied_mask="::rw" fsuid=107 ouid=0 name="/dev/tty"
        Apr 22 21:12:17 dnsdebug named[6613]: mysql driver unable to return result set for lookup query
        Apr 22 21:13:17 dnsdebug named[6613]: last message repeated 7 times

    Any ideas? MySQL had a segfault sometimes, but that seems to no longer be an issue. It's the 64-bit version of Ubuntu, too. Sometimes it will return records just fine; other times it appears to just randomly go down.
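
    A hedged observation on the kernel line: type=1503 with profile="/usr/sbin/named" and a denied_mask is an AppArmor denial, so AppArmor may be interfering with the DLZ driver. One way to rule that out while debugging (standard apparmor-utils commands; re-enable enforcement afterwards):

        # Put the BIND profile into complain (log-only) mode:
        sudo aa-complain /usr/sbin/named
        # ...and back into enforce mode when finished:
        sudo aa-enforce /usr/sbin/named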

    Read the article

  • Using Queries with Coherence Read-Through Caches

    - by jpurdy
    Applications that rely on partial caches of databases, and use read-through to maintain those caches, face some trade-offs if queries are required. Coherence does not support push-down queries, so queries will apply only to data that currently exists in the cache. This is technically consistent with "read committed" semantics, but the potential absence of data may make the results so unintuitive as to be useless for most use cases (depending on how much of the database is held in cache).

    Alternatively, the application itself may manually "push down" queries to the database, either retrieving results equivalent to querying the cache directly, or querying the database for a key set and reading the values from the cache (relying on read-through to handle any missing values). Obviously, if the result set is too large, reading through the cache may cause significant thrashing.

    It's also worth pointing out that if the cache is asynchronously synchronized with the database (perhaps via a database change listener), an application may commit a transaction to the database, then generate a key set from the database via a query, then read cache entries through the cache, possibly resulting in a race condition where the application sees older data than it had previously committed. In theory this is not problematic, but in practice it is very unintuitive. For this reason it often makes sense to invalidate the cache when updating the database, forcing the next read-through to update the cache.
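
    A sketch of the manual push-down pattern described above (assumptions: connection is an open JDBC connection, the table and column names are illustrative, and the NamedCache has a read-through CacheStore configured):

        // Query the database for the key set, then read the values through
        // the cache; read-through fills any entries not yet cached.
        List<Object> keys = new ArrayList<Object>();
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id FROM orders WHERE status = 'OPEN'")) {
            while (rs.next()) {
                keys.add(rs.getObject(1));
            }
        }
        NamedCache cache = CacheFactory.getCache("orders");
        Map<?, ?> results = cache.getAll(keys);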

    Read the article

  • help with yes/no radio button for multiple questions pull from database

    - by Darlene
    Hey guys, I need a little help with this questionnaire form. The tables currently in use are:

        user:     userid | username
        answers:  answerid | quesid | ans | userid | date
        ques:     quesid | ques

    The form below is what I'm using; however, I'm getting errors for the radio buttons. Could anyone offer advice?

        $query = mysql_query("SELECT * FROM ques", $con)
            or die("Cannot Access tblprequestions From Server");
        echo "<div id='quesform' class='quesform'>";
        echo "<form name='QForm' method='post' action='answers.php' onsubmit='return validateQForm(this);'>";
        while ($row = mysql_fetch_array($query)) {
            // The isset() checks must be concatenated into the output, not
            // written inside the string literal; a ternary keeps the checked
            // state readable. Each question also needs its own input name,
            // otherwise every row shares a single radio group.
            $name = "ans[" . $row['quesid'] . "]";
            $posted = isset($_POST['ans'][$row['quesid']]) ? $_POST['ans'][$row['quesid']] : null;
            $checkedYes = ($posted == 'yes') ? " checked" : "";
            $checkedNo  = ($posted == 'no')  ? " checked" : "";
            echo "<p>";
            echo "<label>" . $row['quesid'] . "</label>&nbsp;&nbsp;";
            echo "<label>" . $row['ques'] . "</label>&nbsp;&nbsp;";
            echo "<input type='radio' name='" . $name . "' value='yes'" . $checkedYes . "/>";
            echo "<input type='radio' name='" . $name . "' value='no'" . $checkedNo . "/>";
            echo "</p>";
        }

    Read the article

  • htacces rewrite condition old site to new site with querystring

    - by Brandon Braner
    I am not even going to pretend to fully understand how htaccess rewrite conditions work. I've been working on this for a while, searching and searching. I have an old WordPress site, www.old-site.com, and a new site, www.site.com. WordPress uses query strings (page_id=#) to route to pages. On the old site, page_id=2 went to a specific page, but on the new site it goes to the home page. I need old-site.com/?page_id=2 to go to site.com/our-company. Here is what I am trying:

        RewriteCond %{HTTP_HOST} ^(www.)?old-site.com$ [NC]
        RewriteCond %{QUERY_STRING} ^page_id=2$
        RewriteRule ^(.*)$ http://www.site.com/our-company/ [R=301,L]

    If I take out the rewrite condition for the query string, it redirects all traffic from old-site.com to the our-company page on the new site. Where am I going wrong? I have about 15 redirects I need to do this way. Thanks in advance.
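
    One hedged guess at what's happening: a 301 target keeps the original query string by default, so the browser lands on site.com/our-company/?page_id=2, and the new WordPress may route that query string to a different page. A sketch with the query string dropped via the trailing "?" on the target (the dots in the host pattern are also escaped, and the rule should sit above the standard WordPress rules):

        RewriteCond %{HTTP_HOST} ^(www\.)?old-site\.com$ [NC]
        RewriteCond %{QUERY_STRING} ^page_id=2$
        RewriteRule ^(.*)$ http://www.site.com/our-company/? [R=301,L]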

    Read the article

  • solr administration

    - by devrick0
    Does anyone have any notes for a sysadmin supporting Solr? I'm looking for anything that might be useful for monitoring and metrics, as well as troubleshooting. Some useful links I have found are:

        /solr/admin/stats.jsp
        /solr/admin/analysis.jsp

    In the logs I have noticed, besides the query itself, "hits", "status" and "QTime" values. The documentation on what these mean is sparse, at least based on the 100+ websites I have checked. QTime appears to be the query response time in milliseconds. Hits is some form of result count, but I'm not sure exactly what makes it up, and I'm not sure about status. Typically I see status come back as "0", but I have seen other numbers such as "5", so my theory that it is either an HTTP status code or a 0-or-1 (good or bad) flag isn't accurate. All of the documentation I have come across is intended for developers. Any sysadmin-centric documentation would be a big help.

    Read the article
