Search Results

Search found 7359 results on 295 pages for 'triple channel ram'.


  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies who make BIOSes either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: Why does my new computer run slower than my old computer? Old Computer: Intel Core 2 Duo CPU @ 3.0 GHz (stock) 4GB OCZ DDR2 800 RAM Wolfdale E8400 mb nVidia GeForce 8600 GT New Computer: Intel Core i7 920 @ ~3.2 GHz 6 GB OCZ DDR3 1066 RAM EVGA x58 SLI LE motherboard nVidia GeForce GTX 275 Vista x64 Home Premium on both. "Run slower" is defined as: - poorer FPS performance in the same games, applications - takes longer to start up - general desktop usage (checking email, opening up files, running exe's) is noticeably slower At first I thought I must've not set something up in the BIOS or something. But I have no idea how to set anything in the BIOS except for "Dummy O.C.", which brought me to ~3.2 GHz. But beyond that I have no idea. I've been reading stuff about "ram timing" and voltages and the like but I really have no idea about that stuff. I'm a psychologist who has a basic understanding of building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my older one? I'm sorry if this is a bad question, or not appropriate to SO. I'm just pretty frustrated now and you all have helped me in the past so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • Page pool memory

    - by legiwei
    I'm currently using Windows XP SP3 32-bit with a C2D E6320 and 2GB of RAM. When I am playing StarCraft 2, I get an error saying my system is running low on page pool memory, even though StarCraft's graphics settings suggested high settings for me. I do not think it has to do with my graphics card, but with my RAM. I then searched for a way to fix the problem. Apparently it's something to do with virtual memory, and the suggested solution is to edit the registry and set the page pool memory limit to 384MB. However, having done so, I still could not achieve that. I've seen screenshots of Windows XP machines with 2GB showing 384MB of page pool memory. My default setting puts it at 195MB, and when I try to increase the pool limit it will only go to a maximum of 229MB. I tried increasing my RAM to 3GB but the pool limit stays the same. I'd like to know how to increase my page pool memory. I've searched for a solution but found nothing beyond the registry change mentioned above (which didn't completely solve my problem).
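
    For reference, the registry value usually involved here is PagedPoolSize under the Memory Management key. A sketch of the change (the value is in bytes; 0xFFFFFFFF asks Windows to size the pool at its maximum). Note that 32-bit XP may still cap the pool below the requested size, so treat the exact numbers as assumptions to verify:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
        ; 0xFFFFFFFF = let Windows use the largest paged pool it can;
        ; a fixed size such as 0x18000000 (384 MB) can be set instead
        "PagedPoolSize"=dword:ffffffff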

    Read the article

  • Is it possible to force the WCF test client to accept a self-signed certificate?

    - by Lawrence Johnston
    I have a WCF web service running in IIS 7 using a self-signed certificate (it's a proof of concept to make sure this is the route I want to go). It's required to use SSL. Is it possible to use the WCF Test Client to debug this service without needing a non-self-signed certificate? When I try I get this error: Error: Cannot obtain Metadata from https:///Service1.svc If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.WS-Metadata Exchange Error URI: https:///Service1.svc Metadata contains a reference that cannot be resolved: 'https:///Service1.svc'. Could not establish trust relationship for the SSL/TLS secure channel with authority ''. The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure.HTTP GET Error URI: https:///Service1.svc There was an error downloading 'https:///Service1.svc'. The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure.
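
    As far as I know the WCF Test Client has no switch for skipping certificate validation, so the usual options are to add the self-signed certificate to the machine's Trusted Root Certification Authorities store, or to write a small throwaway client of your own and relax validation in code. A debug-only sketch of the latter, using standard System.Net behaviour rather than anything specific to the WCF Test Client:

        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        // DEBUG ONLY: accept any server certificate for HTTPS calls made by this process,
        // which lets a hand-written client reach a service behind a self-signed certificate.
        ServicePointManager.ServerCertificateValidationCallback =
            (object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors) => true;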

    Read the article

  • Mixing NIO with IO

    - by Steffen Heil
    Hi, usually you have a single bound TCP port and several connections on it; at least there are usually more connections than bound ports. My case is different: I want to bind a lot of ports and usually have no (or at least very few) connections, so I want to use NIO to accept the incoming connections. However, I need to pass the accepted connections to the existing jsch ssh library. That requires IO sockets instead of NIO sockets, and it spawns one (or two) thread(s) per connection, but that's fine for me. Now, I thought that the following lines would deliver the very same result: Socket a = serverSocketChannel.accept().socket(); Socket b = serverSocketChannel.socket().accept(); SocketChannel channel = serverSocketChannel.accept(); channel.configureBlocking( true ); Socket c = channel.socket(); Socket d = serverSocket.accept(); However, the getInputStream() and getOutputStream() methods of the returned sockets seem to work differently. Only if the socket was accepted using the last call can jsch work with it. In the first three cases it fails (and I am sorry: I don't know why). So is there a way to convert such a socket? Regards, Steffen
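
    One thing that may be worth trying (an assumption on my part, not verified against jsch): instead of asking the channel-backed Socket for its streams, wrap the channel itself with java.nio.channels.Channels, which yields plain blocking streams. A sketch:

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.nio.channels.Channels;
        import java.nio.channels.ServerSocketChannel;
        import java.nio.channels.SocketChannel;

        // Accept with NIO, then hand classic blocking streams to the IO-based library.
        static void handOff(ServerSocketChannel serverSocketChannel) throws Exception {
            SocketChannel channel = serverSocketChannel.accept();
            channel.configureBlocking(true);                        // blocking mode before any stream use
            InputStream in = Channels.newInputStream(channel);      // plain blocking InputStream
            OutputStream out = Channels.newOutputStream(channel);   // plain blocking OutputStream
            // pass in/out (or channel.socket()) on to jsch here
        }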

    Read the article

  • Update PEAR on MAMP MacOsX

    - by Jevgeni Smirnov
    Current I am trying to install phpunit on my mac os x and mamp server: pear config-set auto_discover 1 pear install pear.phpunit.de/PHPUnit Errors which I got during installation: Validation Error: This package.xml requires PEAR version 1.9.4 to parse properly, we are version 1.9.2 pear upgrade pear Nothing to upgrade UPDATE 1 This is my pear config. I assume that I messed up local and mamp installs(I didn't know that mamp also has pear, so I installed local one). I suppose something wrong with bin_dir, php_dir and other paths? Keefir-Samolet-iMac:MAMP jevgenismirnov$ pear config-show Configuration (channel pear.php.net): ===================================== Auto-discover new Channels auto_discover 1 Default Channel default_channel pear.php.net HTTP Proxy Server Address http_proxy PEAR server [DEPRECATED] master_server pear.php.net Default Channel Mirror preferred_mirror pear.php.net Remote Configuration File remote_config PEAR executables directory bin_dir /Users/jevgenismirnov/pear/bin PEAR documentation directory doc_dir /Users/jevgenismirnov/pear/docs PHP extension directory ext_dir /Applications/MAMP/bin/php/php5.3.6/lib/php/extensions/no-debug-non-zts-20090626/ PEAR directory php_dir /Users/jevgenismirnov/pear/share/pear PEAR Installer cache directory cache_dir /var/folders/k7/xpwbcbrs1xs8tlxjk5mvkwrr0000gp/T//pear/cache PEAR configuration file cfg_dir /Users/jevgenismirnov/pear/cfg directory PEAR data directory data_dir /Users/jevgenismirnov/pear/data PEAR Installer download download_dir /tmp/pear/install directory PHP CLI/CGI binary php_bin /Applications/MAMP/bin/php/php5.3.6/bin/php php.ini location php_ini --program-prefix passed to php_prefix PHP's ./configure --program-suffix passed to php_suffix PHP's ./configure PEAR Installer temp directory temp_dir /tmp/pear/install PEAR test directory test_dir /Users/jevgenismirnov/pear/tests PEAR www files directory www_dir /Users/jevgenismirnov/pear/www Cache TimeToLive cache_ttl 3600 Preferred Package State preferred_state stable Unix file mask umask 22 Debug Log Level verbose 1 PEAR password (for password maintainers) Signature Handling Program sig_bin /usr/local/bin/gpg Signature Key Directory sig_keydir /Applications/MAMP/bin/php/php5.3.6/conf/pearkeys Signature Key Id sig_keyid Package Signature Type sig_type gpg PEAR username (for username maintainers) User Configuration File Filename /Users/jevgenismirnov/.pearrc System Configuration File Filename /Applications/MAMP/bin/php/php5.3.6/conf/pear.conf
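
    One hypothesis worth checking, given the php_bin path above: MAMP ships its own PEAR next to its PHP binary, and the local ~/pear install may be the one stuck at 1.9.2. A sketch of upgrading the MAMP copy directly (the pear path is an assumption derived from the config shown, not something verified):

        # Assumed location, based on php_bin in the config above
        MAMP_PEAR=/Applications/MAMP/bin/php/php5.3.6/bin/pear

        $MAMP_PEAR config-show                        # confirm php_dir/bin_dir point inside MAMP
        $MAMP_PEAR channel-update pear.php.net
        $MAMP_PEAR upgrade pear/PEAR                  # PHPUnit needs PEAR >= 1.9.4
        $MAMP_PEAR install --alldeps pear.phpunit.de/PHPUnit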

    Read the article

  • How does GMail implement Comet?

    - by Morgan Cheng
    With the help of HttpWatch, I tried to figure out how GMail implements Comet. I logged in to GMail with two accounts, one in IE and the other in Firefox, and chatted in GTalk within GMail using some magic words like "WASSUP". Then I logged off both GMail accounts and filtered out any HTTP content without the "WASSUP" string. The result shows which HTTP request is the streaming channel. (Note: I have to log off; otherwise the never-ending HTTP request would not show content in HttpWatch.) The result is interesting. The URL for the stream channel looks like: https://mail/channel/bind?VER=8&at=xn3j33vcvk39lkfq..... It is no surprise that GMail does Comet in IE with an IFRAME. The HTTP content starts with " Originally, I guessed that GMail does Comet in Firefox with a multipart XmlHttpRequest. To my surprise, the response doesn't have a "multipart/x-mixed-replace" header. The response headers are as below: HTTP/1.1 200 OK Content-Type: text/plain; charset=utf-8 Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache Expires: Fri, 01 Jan 1990 00:00:00 GMT Date: Sat, 20 Mar 2010 01:52:39 GMT X-Frame-Options: ALLOWALL Transfer-Encoding: chunked X-Content-Type-Options: nosniff Server: GSE X-XSS-Protection: 0 Unfortunately, HttpWatch doesn't tell whether an HTTP request came from XmlHttpRequest or not. The content is not HTML but JSON. It looks like a response to an XHR, but that would not work for Comet without multipart/x-mixed-replace, right? Is there any other way to figure out how GMail implements Comet? Thanks.

    Read the article

  • How to implement a web app with blazeds+java+flex+tomcat?

    - by ARYAD
    Hi, I'm building a web app with Flex, BlazeDS and Java. I installed the Eclipse plugins for a WTP mixed project and I use Flex's built-in server, which runs an embedded Tomcat; when I ran my Flex service there, the web app got the data and everything was OK. The problem is that when I copy the project, with all the files generated by Flex, into my own Tomcat or into BlazeDS's Tomcat, it doesn't work, and I need this because I want to deploy my app on a real server. The error is: "(mx.messaging.messages::ErrorMessage)#0 body = (Object)#1 clientId = (null) correlationId = "B425A2A7-7D12-A982-7779-8CCBF669413C" destination = "" extendedData = (null) faultCode = "Client.Error.MessageSend" faultDetail = "Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Failed: url: 'http://172.16.8.245:8400/IEC-BLAZEDS/messagebroker/amf'" faultString = "Send failed" headers = (Object)#2 messageId = "1CBC6020-0ED8-C4CC-3B77-8CCBF6D6621D" rootCause = (mx.messaging.events::ChannelFaultEvent)#3 bubbles = false cancelable = false channel = (mx.messaging.channels::AMFChannel)#4 authenticated = false channelSets = (Array)#5 [0] (mx.messaging::ChannelSet)#6 authenticated = false channelIds = (Array)#7 [0] "my-amf" channels = (Array)#8 [0] (mx.messaging.channels::AMFChannel)#4 clustered = false connected = false currentChannel = (mx.messaging.channels::AMFChannel)#4 initialDestinationId = (null) messageAgents = (Array)#9 [0] (mx.rpc::AsyncRequest)#10 authenticated = false autoConnect = true channelSet = (mx.messaging::ChannelSet)#6 clientId = (null) connected = false defaultHeaders = (null) destination = "ADEscenario" id = "7D92EDF2-CF62-9545-BA11-8CCBF6691E6B" reconnectAttempts = 0 reconnectInterval = 0 requestTimeout = -1 subtopic = "" connected = false connectTimeout = -1 enableSmallMessages = true endpoint = "http://172.16.8.245:8400/IEC-BLAZEDS/messagebroker/amf" failoverURIs = (Array)#11 id = "my-amf" mpiEnabled = false netConnection = (flash.net::NetConnection)#12 client = (mx.messaging.channels::AMFChannel)#4 connected = false objectEncoding = 3 proxyType = "none" uri = "http://172.16.8.245:8400/IEC-BLAZEDS/messagebroker/amf" piggybackingEnabled = false polling = false pollingEnabled = true pollingInterval = 3000 protocol = "http" reconnecting = false recordMessageSizes = false recordMessageTimes = false requestTimeout = -1 uri = "http://{server.name}:{server.port}/IEC-BLAZEDS/messagebroker/amf" url = "http://{server.name}:{server.port}/IEC-BLAZEDS/messagebroker/amf" useSmallMessages = false channelId = "my-amf" connected = false currentTarget = (mx.messaging.channels::AMFChannel)#4 eventPhase = 2 faultCode = "Channel.Connect.Failed" faultDetail = "NetConnection.Call.Failed: HTTP: Failed: url: 'http://172.16.8.245:8400/IEC-BLAZEDS/messagebroker/amf'" faultString = "error" reconnecting = false rejected = false rootCause = (Object)#13 code = "NetConnection.Call.Failed" description = "HTTP: Failed" details = "http://172.16.8.245:8400/IEC-BLAZEDS/messagebroker/amf" level = "error" target = (mx.messaging.channels::AMFChannel)#4 type = "channelFault" timestamp = 0 timeToLive = 0" I don't know why Tomcat doesn't find the flex.messaging.endpoints.AMFEndpoint class that is used for my-amf 'http://172.16.8.245:8400/IEC-BLAZEDS/messagebroker/amf'. Everything works fine on the emulated server that Flex provides.

    Read the article

  • Requested Service not found - .NET Remoting

    - by bharat
    I am getting this exception: System.Runtime.Remoting.RemotingException occurred Message="Object '/55337266_9751_4f58_8446_c54ff254222e/rkutlpt5hvsxipmzhb+jkqyl_98.rem' has been disconnected or does not exist at the server." Source="mscorlib" StackTrace: Server stack trace: at System.Runtime.Remoting.Channels.ChannelServices.CheckDisconnectedOrCreateWellKnownObject(IMessage msg) at System.Runtime.Remoting.Channels.ChannelServices.DispatchMessage(IServerChannelSinkStack sinkStack, IMessage msg, IMessage& replyMsg) Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) at Common.Interface.Repository.NA.INRR.GetNRun(Int32 NetworkRunId) at Module.NA.ViewModel.NRDViewModel.get_AS() InnerException: I get this exception most of the time. This is my remoting setup method: string tcpURL; TcpChannel channel; channel = new TcpChannel(); ChannelServices.RegisterChannel(channel, false); //-- // Remote domain objects //-- tcpURL = string.Format("tcp://{0}:{1}/DomainComposition", ServerName, TcpPort); RemotingConfiguration.RegisterWellKnownClientType(typeof(DomainComposition), tcpURL); //-- // Remote repository objects //-- tcpURL = string.Format("tcp://{0}:{1}/RepositoryComposition", ServerName, TcpPort); RemotingConfiguration.RegisterWellKnownClientType(typeof(RepositoryComposition), tcpURL); //-- // Remote utility objects //-- tcpURL = string.Format("tcp://{0}:{1}/UtilityComposition", ServerName, TcpPort); RemotingConfiguration.RegisterWellKnownClientType(typeof(UtilityComposition), tcpURL); this.Domain = new DomainComposition(); this.Repository = new RepositoryComposition(); this.Utility = new UtilityComposition(); How can I check whether the object is disconnected and then re-initiate the service?
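
    The "has been disconnected or does not exist at the server" message is typically a lifetime-lease problem on the server-side object rather than anything in the client registration code above: the default lease expires after a few minutes of inactivity and the remoted object is collected. One common mitigation, assuming the remoted classes (DomainComposition, RepositoryComposition, UtilityComposition) derive from MarshalByRefObject and are under your control, is to give them an infinite lease on the server side; a sketch:

        using System;

        public class RepositoryComposition : MarshalByRefObject
        {
            // Returning null gives the object an infinite lease, so the runtime
            // never disconnects it between client calls.
            public override object InitializeLifetimeService()
            {
                return null;
            }
        }

    Alternatively, catch RemotingException on the client, recreate the proxy, and retry the call.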

    Read the article

  • RSS feed generated by SharePoint has a stylesheet tag and how to remove that

    - by iHeartDucks
    The feed which SharePoint Generates is here (I copied it to pastie because I thought it would be clear there) However, the xml file comes with a style sheet tag. How do I remove that? Does SharePoint always generate that? Due to the presence of that tag, I am unable to apply another style sheet of my own using the XML WebPart. EDIT: I don't think the issue is related to the style sheet. If I copy the xml and paste it in the "Xml Editor" of the Web Part everything works just fine. If I provide the URL, that is when I do not see any data. This is my XSL file <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet version="1.0" exclude-result-prefixes="x d ddwrt xsl msxsl" xmlns:x="http://www.w3.org/2001/XMLSchema" xmlns:d="http://schemas.microsoft.com/sharepoint/dsp" xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt"> <xsl:output method="html" version="1.0" encoding="iso-8859-1" indent="yes"/> <xsl:template match="/"> <xsl:value-of select="count(rss)" /> <xsl:value-of select="count(rss/channel)" /> <xsl:value-of select="count(rss/channel/item)" /> <xsl:for-each select="rss/channel/item"> <xsl:value-of select="title" /> </xsl:for-each> </xsl:template> </xsl:stylesheet> Pastie link

    Read the article

  • Unable to upgrade PEAR from 1.9.2 to 1.9.4

    - by user940768
    I am on Ubuntu 11.10 and trying to upgrade from 1.9.2 to 1.9.4, but it simply doesn't work. Here are the commands I am running in sequence: $ sudo apt-get install php-pear Reading package lists... Done Building dependency tree Reading state information... Done php-pear is already the newest version. The following packages were automatically installed and are no longer required: linux-headers-3.0.0-14-generic-pae libaccess-bridge-java-jni libaccess-bridge-java Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded. $ sudo pear channel-update pear.php.net Updating channel "pear.php.net" Channel "pear.php.net" is up to date $ sudo pear upgrade-all Nothing to upgrade-all $ sudo pear install –alldeps pear.phpunit.de/PHPUnit parsePackageName(): invalid package name "–alldeps" in "–alldeps" invalid package name/package file "–alldeps" Did not download optional dependencies: phpunit/PHP_Invoker, use --alldeps to download automatically phpunit/PHPUnit requires PEAR Installer (version >= 1.9.4), installed version is 1.9.2 phpunit/PHPUnit can optionally use package "phpunit/PHP_Invoker" (version >= 1.1.0) phpunit/Text_Template requires PEAR Installer (version >= 1.9.4), installed version is 1.9.2 phpunit/PHP_CodeCoverage requires PEAR Installer (version >= 1.9.4), installed version is 1.9.2 phpunit/PHP_CodeCoverage requires package "phpunit/Text_Template" (version >= 1.1.1) phpunit/PHP_CodeCoverage can optionally use PHP extension "xdebug" (version >= 2.0.5) phpunit/PHPUnit_MockObject requires PEAR Installer (version >= 1.9.4), installed version is 1.9.2 phpunit/PHPUnit_MockObject requires package "phpunit/Text_Template" (version >= 1.1.1) phpunit/PHP_TokenStream requires PEAR Installer (version >= 1.9.4), installed version is 1.9.2 No valid packages found install failed Any thoughts?

    Read the article

  • realtime visitors with nodejs & redis & socket.io & php

    - by orhan bengisu
    I am new to these technologies. I want to get a real-time visitor count for each product on my site, i.e. a notification like "X users seeing this product". Whenever a user connects to a product page the counter for that product should be increased, and when they disconnect the counter should be decreased, just for that product. I tried to search through a lot of documentation but I got confused. I am using the Predis library for PHP. What I have done may be totally wrong; I am not sure where to put createClient, when to subscribe and when to unsubscribe. What I have done so far: On product detail page: $key = "product_views_".$product_id; $counter = $redis->incr($key); $redis->publish("productCounter", json_encode(array("product_id"=> "1000", "counter"=> $counter ))); In app.js var app = require('express')() , server = require('http').createServer(app) , socket = require('socket.io').listen(server,{ log: false }) , url = require('url') , http= require('http') , qs = require('querystring') ,redis = require("redis"); var connected_socket = 0; server.listen(8080); var productCounter = redis.createClient(); productCounter.subscribe("productCounter"); productCounter.on("message", function(channel, message){ console.log("%s, the message : %s", channel, message); io.sockets.emit(channel,message); } productCounter.on("unsubscribe", function(){ //I think to decrease counter here, Should I? and How? } io.sockets.on('connection', function (socket) { connected_socket++; socket_id = socket.id; console.log("["+socket_id+"] connected"); socket.on('disconnect', function (socket) { connected_socket--; console.log("Client disconnected"); productCounter.unsubscribe("productCounter"); }); }) Thanks a lot for your answers!

    Read the article

  • parsing xml using dom4j

    - by D3GAN
    My XML structure is like this: <rss> <channel> <yweather:location city="Paris" region="" country="France"/> <yweather:units temperature="C" distance="km" pressure="mb" speed="km/h"/> <yweather:wind chill="-1" direction="40" speed="11.27"/> <yweather:atmosphere humidity="87" visibility="9.99" pressure="1015.92" rising="0"/> <yweather:astronomy sunrise="8:30 am" sunset="4:54 pm"/> </channel> </rss> when I tried to parse it using dom4j SAXReader xmlReader = createXmlReader(); Document doc = null; doc = xmlReader.read( inputStream );//inputStream is input of function log.info(doc.valueOf("/rss/channel/yweather:location/@city")); private SAXReader createXmlReader() { Map<String,String> uris = new HashMap<String,String>(); uris.put( "yweather", "http://xml.weather.yahoo.com/ns/rss/1.0" ); uris.put( "geo", "http://www.w3.org/2003/01/geo/wgs84_pos#" ); DocumentFactory factory = new DocumentFactory(); factory.setXPathNamespaceURIs( uris ); SAXReader xmlReader = new SAXReader(); xmlReader.setDocumentFactory( factory ); return xmlReader; } But I got nothing in cmd but when I print doc.asXML(), my XML structure print correctly!

    Read the article

  • How many WCF connections can a single host handle?

    - by mafutrct
    I'll try to explain this with an example. I'm writing a chat application. There are users that can join chat rooms. A user has to log in before he can join any room. Currently, there is a single service. A user logs in using this service. Then, the user sends and receives messages for all joined rooms via this single service. channel.Login("Hans Moleman", "password"); channel.JoinRoom("name of room"); channel.SendChat("name of room", "hello"); I'm thinking about changing the design so there is a new WCF connection for each joined room. In the actual app, the number of connections is likely going to be in the range of 10-100, possibly more. Is this a good idea? Or are ~100 connections per client too much? The server should be able to handle many clients (range 100-1000, later up to 10k). In case it matters, I'm using NetTcpBinding.

    Read the article

  • How to hold a queue of messages and have a group of working threads without polling?

    - by Mark
    I have a workflow that I want to looks something like this: / Worker 1 \ =Request Channel= - [Holding Queue|||] - Worker 2 - =Response Channel= \ Worker 3 / That is: Requests come in and they enter a FIFO queue Identical workers then pick up tasks from the queue At any given time any worker may work only one task When a worker is free and the holding queue is non-empty the worker should immediately pick up another task When tasks are complete, a worker places the result on the Response Channel I know there are QueueChannels in Spring Integration, but these channels require polling (which seems suboptimal). In particular, if a worker can be busy, I'd like the worker to be busy. Also, I've considered avoiding the queue altogether and simply letting tasks round-robin to all workers, but it's preferable to have a single waiting line as some tasks may be accomplished faster than others. Furthermore, I'd like insight into how many jobs are remaining (which I can get from the queue) and the ability to cancel all or particular jobs. How can I implement this message queuing/work distribution pattern while avoiding a polling? Edit: It appears I'm looking for the Message Dispatcher pattern -- how can I implement this using Spring/Spring Integration?
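
    Outside of Spring Integration, the shape described above (one FIFO holding queue, N identical workers, no polling) is what a fixed-size ExecutorService provides out of the box: workers block on the internal queue and wake up the moment a task arrives. A plain-Java sketch for comparison (class and channel names are made up for illustration); ThreadPoolExecutor also exposes getQueue() for "jobs remaining", and the Future returned by submit can cancel individual tasks:

        import java.util.concurrent.*;

        public class WorkerPoolSketch {
            public static void main(String[] args) {
                // Three identical workers; submitted tasks wait in a FIFO queue and a free
                // worker picks the next one up immediately, with no polling loop.
                ExecutorService workers = Executors.newFixedThreadPool(3);
                BlockingQueue<String> responseChannel = new LinkedBlockingQueue<>();

                for (int i = 0; i < 10; i++) {
                    final int task = i;
                    workers.submit(() -> {
                        String result = "result-" + task;   // do the real work here
                        responseChannel.offer(result);      // publish to the response side
                    });
                }
                workers.shutdown();
            }
        }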

    Read the article

  • How to automatically show Title of the Entries/Articles in the Browser Title Bar in ExpressionEngine 2?

    - by Ibn Saeed
    How would I output the title of an entry in ExpressionEngine and display it in the browser's title bar? Here is the content of my page's header: <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Test Site</title> <link rel="stylesheet" href="{stylesheet=site/site_css}" type="text/css" media="screen" /> </head> What I need is for each page to display the title of the entry in my browser's title bar — how can I achieve that? Part of UPDATED Code: Here is how i have done it : {exp:channel:entries channel="news_articles" status="open|Featured Top Story|Top Story" limit="1" disable="member_data|trackbacks|pagination"} {embed="includes/document_header" page_title=" | {title}"} <body class="home"> <div id="layoutWrapper"> {embed="includes/masthead_navigation"} <div id="content"> <div id="article"> <img src="{article_image}" alt="News Article Image" /> <h4>{title}</h4> <h5><span class="by">By</span> {article_author}</h5> <p>{entry_date format="%M %d, %Y"} -- Updated {gmt_edit_date format="%M %d, %Y"}</p> {article_body} {/exp:channel:entries} </div> What do you think?

    Read the article

  • simplexml object node iteration

    - by MorbidWrath
    I have an xml file that I'm parsing with php's simplexml, but I'm having an issue with an iteration through nodes. the xml: <channel> <item> <title>Title1</title> <category>Cat1</category> </item> <item> <title>Title2</title> <category>Cat1</category> </item> <item> <title>Title3</title> <category>Cat2</category> </item> </channel> my counting function: public function cat_count($cat){ $count = 0; $items = $this->xml->channel->item; $size = count($size); for($i=0;$i<$size;$i++){ if($items[$i]->category == $cat){ $count++; } } return $count; } Am I overlooking an error in my code, or is there another preferred method for iterating through the nodes? I've also used a foreach and while statement with no luck, so I'm at a loss. Any suggestions?
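
    One detail that stands out in the function above: $size = count($size) counts the wrong variable (presumably count($items) was intended), so $size is 0 and the loop body never runs. A sketch of the same counter using foreach, which avoids the indexing entirely and casts to string so the comparison is on text content rather than SimpleXMLElement objects:

        public function cat_count($cat) {
            $count = 0;
            foreach ($this->xml->channel->item as $item) {
                if ((string) $item->category === $cat) {
                    $count++;
                }
            }
            return $count;
        }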

    Read the article

  • Trouble with go tour crawler exercise

    - by David Mason
    I'm going through the go tour and I feel like I have a pretty good understanding of the language except for concurrency. On slide 71 there is an exercise that asks the reader to parallelize a web crawler (and to make it not cover repeats but I haven't gotten there yet.) Here is what I have so far: func Crawl(url string, depth int, fetcher Fetcher, ch chan string) { if depth <= 0 { return } body, urls, err := fetcher.Fetch(url) if err != nil { ch <- fmt.Sprintln(err) return } ch <- fmt.Sprintf("found: %s %q\n", url, body) for _, u := range urls { go Crawl(u, depth-1, fetcher, ch) } } func main() { ch := make(chan string, 100) go Crawl("http://golang.org/", 4, fetcher, ch) for i := range ch { fmt.Println(i) } } The issue I have is where to put the close(ch) call. If I put a defer close(ch) somewhere in the Crawl method, then I end up writing to a closed channel in one of the spawned goroutines, since the method will finish execution before the spawned goroutines do. If I omit the call to close(ch), as is shown in my example code, the program deadlocks after all the goroutines finish executing but the main thread is still waiting on the channel in the for loop since the channel was never closed.
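
    One common way out of the close(ch) dilemma is to track outstanding goroutines with a sync.WaitGroup and close the channel from a separate goroutine once they have all finished. A sketch along those lines (still ignoring the duplicate-URL part of the exercise, and assuming the tour's Fetcher/fetcher plus the fmt and sync imports):

        func Crawl(url string, depth int, fetcher Fetcher, ch chan string, wg *sync.WaitGroup) {
            defer wg.Done()
            if depth <= 0 {
                return
            }
            body, urls, err := fetcher.Fetch(url)
            if err != nil {
                ch <- fmt.Sprintln(err)
                return
            }
            ch <- fmt.Sprintf("found: %s %q", url, body)
            for _, u := range urls {
                wg.Add(1) // register each child before it starts
                go Crawl(u, depth-1, fetcher, ch, wg)
            }
        }

        func main() {
            ch := make(chan string, 100)
            var wg sync.WaitGroup
            wg.Add(1)
            go Crawl("http://golang.org/", 4, fetcher, ch, &wg)
            go func() {
                wg.Wait() // every Crawl call has returned...
                close(ch) // ...so nothing can write to ch any more
            }()
            for line := range ch {
                fmt.Println(line)
            }
        }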

    Read the article

  • OpenCV compare two images and get different pixels

    - by Richard Knop
    For some reason the code bellow is not working. I have two 640*480 images which are very similar but not the same (at least few hundred/thousand pixels should be different). This is how I am comparing them and counting different pixels: unsigned char* row; unsigned char* row2; int count = 0; // this happens in a loop // fIplImageHeader is current image // lastFIplImageHeader is image from previous iteration if ( NULL != lastFIplImageHeader->imageData ) { for( int y = 0; y < fIplImageHeader->height; y++ ) { row = &CV_IMAGE_ELEM( fIplImageHeader, unsigned char, y, 0 ); row2 = &CV_IMAGE_ELEM( lastFIplImageHeader, unsigned char, y, 0 ); for( int x = 0; x < fIplImageHeader->width*fIplImageHeader->nChannels; x += fIplImageHeader->nChannels ) { if (row[x] == row2[x]) // the pixel in the first channel (usually G) { count++; } if (row[x+1] == row2[x+1]) // ... second channel (usually B) { count++; } if (row[x+2] == row2[x+2]) // ... third channel (usually R) { count++; } } } } Now at the end I get number 3626 which would seem alright. But, I tried opening one of the images in MS Paint and drawing thick red lines all over it which should increase the number of different pixels substantially. I got the same number again: 3626. Obviously I am doing something wrong here. I am comparing these images in a loop. This line is before the loop: IplImage* lastFIplImageHeader = cvCreateImageHeader(cvSize(640, 480), 8, 3); Then inside the loop I load images like this: IplImage* fIplImageHeader = cvLoadImage( filePath.c_str() ); // here I compare the pixels (the first code snippet) lastFIplImageHeader->imageData = fIplImageHeader->imageData; So lastFIplImageHeader is storing the image from the previous iteration and fIplImageHeader is storing the current image.
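
    Two things may be worth double-checking in the loop above: the if branches increment count when the pixels are equal (so the total is really a match count, not a difference count), and the assignment lastFIplImageHeader->imageData = fIplImageHeader->imageData makes both headers point at the same buffer, so later iterations compare an image against itself. Keeping a real copy of the previous frame (cvCopy or cvCloneImage) avoids the second issue. As an alternative, the old C API can do the comparison without a hand-written loop; a sketch using OpenCV 1.x function names, worth verifying against your version:

        // img1, img2: two 640x480, 8-bit, 3-channel IplImage* frames
        IplImage *diff = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 3);
        IplImage *gray = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1);

        cvAbsDiff(img1, img2, diff);              /* per-channel absolute difference */
        cvCvtColor(diff, gray, CV_BGR2GRAY);      /* collapse the three channels to one */
        int changed = cvCountNonZero(gray);       /* pixels that differ (note: the weighted
                                                     grayscale conversion can swallow very
                                                     small single-channel differences) */

        cvReleaseImage(&diff);
        cvReleaseImage(&gray);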

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi, unfortunately sometimes a standby instance needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, or a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. This script recreates the standby considering some prereqs: - Database version should be at least 11gR1 - Dummy instance started on the standby node (seeking to improve this so it won't be needed) - Broker configuration hasn't been removed - In our case we have two TNSNAMES files, one for the standby creation (using SID) and the other one for production using service names (including the broker service name) - Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...) - The directory tree should not have been modified on the standby host. We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!
    #!/bin/ksh ###    NOMBRE / VERSION ###       recrea_dg.sh   v.1.00 ### ###    DESCRIPCION ###       reacreacion de la Standby ### ###    DEVUELVE ###       0 Creacion de STANDBY correcta ###       1 Fallo ### ###    NOTAS ###       Este shell script NO DEBE MODIFICARSE. ###       Todas las variables y constantes necesarias se toman del entorno. ### ###    MODIFICADO POR:    FECHA:        COMENTARIOS: ###    ---------------    ----------    ------------------------------------- ###      Oracle           15/02/2011    Creacion. ### ### ### Cargar entorno ### V_ADMIN_DIR=`dirname $0` . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null if [ $? -ne 0 ] then   echo "Error Loading the environment."   exit 1 fi V_RET=0 V_DATE=`/bin/date` V_DATE_F=`/bin/date +%Y%m%d_%H%M%S` V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1 ### ### Variables para Recrear el Data Guard ### V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'` if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ] then         V_LOCAL_BR=${V_DB_BR}'01'         V_REMOTE_BR=${V_DB_BR}'02' else         V_LOCAL_BR=${V_DB_BR}'02'         V_REMOTE_BR=${V_DB_BR}'01' fi echo " Getting local instance ROLE ${ORACLE_SID} ..." sqlplus -s /nolog 1>>/dev/null 2>&1 <<-! whenever sqlerror exit 1 connect / as sysdba variable salida number declare   v_database_role v\$database.database_role%type; begin   select database_role into v_database_role from v\$database;   :salida := case v_database_role        when 'PRIMARY' then 2        when 'PHYSICAL STANDBY' then 3        else 4      end; end; / exit :salida ! case $? in 1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; 2) echo " Local Instance with PRIMARY role."
| tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=PRIMARY ;; 3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=STANDBY ;; *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; esac if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_LOCAL_BR}         V_STANDBY=${V_REMOTE_BR} fi if [ "${V_DB_ROLE_LCL}" = "STANDBY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_REMOTE_BR}         V_STANDBY=${V_LOCAL_BR} fi # Cargamos las variables de los hosts # Cargamos las variables de los hosts PRY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` SBY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` echo "el HOST primary es: ${PRY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "el HOST standby es: ${SBY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## V_DATE=`/bin/date` echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## SBY_STATUS=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',status from v\\$instance; EOF` if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ] then         echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1         sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         shutdown abort         ! 
fi V_DATE=`/bin/date` echo "" echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Eliminamos los ficheros de la base de datos ## V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'` V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'` ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo ## ## Startup nomount stby instance ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting  DUMMY Standby Instance " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${SBY_HOST} touch /home/oracle/init_dg.ora ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora' ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora' ## ## TNSNAMES change, specific for RMAN duplicate ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 rman<<-! 
>>${V_LOGFILE} connect target sys/${V_DB_PWD}@${V_PRIMARY} connect auxiliary sys/${V_DB_PWD}@${V_STANDBY} run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database dorecover spfile parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}' set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl' set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set 'db_unique_name'='${V_SBY_SID}' set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})' set fal_client='${V_STANDBY}' set fal_server='${V_PRIMARY}' set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' nofilenamecheck ; } ! V_DATE=`/bin/date` if [ $? -ne 0 ] then         echo ""         echo "${V_DATE} - Error creating STANDBY instance"         echo ""         echo "********************************************************************************" else         echo ""         echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "         echo ""         echo "********************************************************************************" fi sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;         alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;         alter database recover managed standby database using current logfile disconnect from session;         alter system set dg_broker_start=true scope=both; ! ## ## TNSNAMES change, back to Production Mode ## V_DATE=`/bin/date` echo " " | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Restoring TNSNAMES in PRIMARY "  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker"  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 sleep 200 dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1     connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}     show configuration verbose; ! if [ $? 
-ne 0 ] ; then         echo "       ERROR: El status del Broker no es SUCCESS" | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=1 else          echo "      DATA GUARD OK " | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=0 fi Hope it helps.

    Read the article

  • Redis - Records Fall Off

    - by Ian
    With memcache, when you exceed the available RAM, it automatically drops the oldest records off the end of the stack. Is there a way to do this with Redis? I'm trying to find ways to avoid running into a write error (when there's no more available RAM), other than setting a timeout. The only reason the timeout isn't useful is because it doesn't guarantee the ability to write.
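
    Redis does have a counterpart to memcached's behaviour: a maxmemory limit combined with an eviction policy in redis.conf, so writes keep succeeding and old keys are evicted instead. A sketch (the available policy names depend on the Redis version; volatile-lru only considers keys that have a TTL, allkeys-lru considers every key):

        # redis.conf sketch: evict least-recently-used keys instead of failing writes
        maxmemory 256mb
        maxmemory-policy allkeys-lru   # or volatile-lru / allkeys-random / noeviction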

    Read the article

  • Caching Mysql database for better performance

    - by kobey
    Hi, I'm using the Amazon cloud and I have a performance issue since the HDD is not located on my machine. My database is small (~500MB) and I can afford to keep all of it in RAM. I do not want to cache just the queries in RAM; I need all the tables there. How can I do it? Thanks, Koby P.S. I'm using Ubuntu Server...
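
    If the tables are InnoDB, the usual way to keep a ~500MB database resident in memory is simply to give the buffer pool more room than the data plus indexes need and let it warm up; for MyISAM the rough equivalent is key_buffer_size plus the OS page cache. A my.cnf sketch (the 1G figure is an example, not a recommendation for every instance size):

        [mysqld]
        # Size the InnoDB buffer pool so the whole ~500MB dataset fits in RAM after warm-up
        innodb_buffer_pool_size = 1G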

    Read the article

  • sql server 2008 takes alot of memory?

    - by Ahmed Said
    I am running a stress test on my database, which is hosted on SQL Server 2008 64-bit on a 64-bit machine with 10 GB of RAM. I have 400 threads and each thread queries the database every second; the queries themselves are fast according to SQL Profiler, but after 18 hours SQL Server is using 7.2 GB of RAM and 7.2 GB of virtual memory. Is this normal behavior, and how can I get SQL Server to release the memory it is not using?
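
    Holding on to memory is SQL Server's default behaviour: the buffer pool grows to whatever it is allowed and is generally only trimmed under OS memory pressure, so 7.2 GB on a 10 GB box after 18 hours of load is expected rather than a leak. The usual adjustment is to cap it explicitly instead of waiting for it to "clean up"; a sketch (8192 MB is an example figure leaving headroom for the OS):

        -- Cap the buffer pool; 'max server memory (MB)' is an advanced option
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 8192;
        RECONFIGURE;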

    Read the article

  • Remove newlines and spaces

    - by Cosmin
    How can I remove newline between <table> .... </table> and add \n after each ex: <table border="0" cellspacing="0" cellpadding="0" width="450" class="descriptiontable"><tr> <td width="50%" valign="top"> <span class="displayb">Model Procesor:</span> Intel Celeron<br><span class="displayb">Frecventa procesor (MHz):</span> 2660<br><span class="displayb">Placa Video:</span> Intel Extreme Graphics 2<br><span class="displayb">Retea integrata:</span> 10/100Mbps, RJ-45<br><span class="displayb">Chipset:</span> Intel 845G<br> </td> <td width="50%" valign="top"> <span class="displayb">Capacitate RAM (MB):</span> 512<br><span class="displayb">Tip RAM:</span> DDR<br> </td> </tr></table> and become : <table border="0" cellspacing="0" cellpadding="0" width="450" class="descriptiontable"><tr><td width="50%" valign="top"><span class="displayb">Model Procesor:</span> Intel Celeron<br><span class="displayb">Frecventa procesor (MHz):</span> 2660<br><span class="displayb">Placa Video:</span> Intel Extreme Graphics 2<br><span class="displayb">Retea integrata:</span> 10/100Mbps, RJ-45<br><span class="displayb">Chipset:</span> Intel 845G<br></td><td width="50%" valign="top"><span class="displayb">Capacitate RAM (MB):</span> 512<br><span class="displayb">Tip RAM:</span> DDR<br></td></tr></table>\n s.
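
    The question does not say which language this runs in, so as an illustrative sketch in PHP (assuming the goal is exactly what the two samples show: collapse all whitespace between adjacent tags, then add \n after each closing </table>):

        // $html holds the fragment shown above
        $compact = preg_replace('/>\s+</', '><', $html);             // drop whitespace and newlines between tags
        $compact = str_replace('</table>', "</table>\n", $compact);  // append \n after each table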

    Read the article

  • sql "Group By" and "Having"

    - by Hans Rudel
    I'm trying to work through some questions and I'm not sure how to do the following. Q: Find the hard drive sizes that are equal among two or more PCs. It's Q15 on this site: http://www.sql-ex.ru/learn_exercises.php#answer_ref The database schema consists of four tables: Product(maker, model, type) PC(code, model, speed, ram, hd, cd, price) Laptop(code, model, speed, ram, hd, screen, price) Printer(code, model, color, type, price) Any pointers would be appreciated.
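
    For the schema quoted above, the exercise reduces to grouping the PC rows by hd and keeping the groups with at least two members; a sketch (assuming "two or more PCs" is counted over rows of the PC table):

        -- Hard drive sizes shared by two or more PCs
        SELECT hd
        FROM PC
        GROUP BY hd
        HAVING COUNT(*) >= 2;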

    Read the article
