Search Results

Search found 5329 results on 214 pages for 'max williams'.

Page 62/214 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • Help, my CentOS servers keep going down, "No route to host" after a random uptime

    - by user249071
    Hello, I have a couple of CentOS Linux servers with a very simple job: they run nginx + FastCGI for PHP, plus some read-only NFS mounts between them. They receive RPC commands from a main server to start download processes with wget, nothing fancy, but their behavior is very unstable: they simply go down. We tried monitoring RAM, processor usage, even network connections, and they don't load up much: 250 network connections max, 15% processor usage, and memory doesn't even fill up, 2.5 GB out of 8 GB max. I have no idea why a Linux server would go down like that; they aren't even public servers, no domain names set up, no public sites being served. The only thing I've discovered is that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, but without reporting high resource usage. Maybe CentOS doesn't free timed-out connections, or something like that. It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed Snort or other tools to check whether we are under DoS attack; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't. Thank you in advance

    Read the article

  • Server Recovery from Denial of Service

    - by JMC
    I'm looking at a server that might be misconfigured to handle denial of service. The database was knocked offline during the attack and was unable to restart itself once the attack subsided.

    Details of the attack: the attacker, either intentionally or unintentionally, sent thousands of search queries through the application's search-query URL within a couple of seconds. It looks like the server was overwhelmed, which caused the database to log the message below.

    Server specs: 1.5 GB of dedicated memory.

    Are there any obvious misconfigurations here that I'm missing?

    mysql.log:
      121118 20:28:54 mysqld_safe Number of processes running now: 0
      121118 20:28:54 mysqld_safe mysqld restarted
      121118 20:28:55 [Warning] option 'slow_query_log': boolean value '/var/log/mysqld.slow.log' wasn't recognized. Set to OFF.
      121118 20:28:55 [Note] Plugin 'FEDERATED' is disabled.
      121118 20:28:55 InnoDB: The InnoDB memory heap is disabled
      121118 20:28:55 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      121118 20:28:55 InnoDB: Compressed tables use zlib 1.2.3
      121118 20:28:55 InnoDB: Using Linux native AIO
      121118 20:28:55 InnoDB: Initializing buffer pool, size = 512.0M
      InnoDB: mmap(549453824 bytes) failed; errno 12
      121118 20:28:55 InnoDB: Completed initialization of buffer pool
      121118 20:28:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
      121118 20:28:55 [ERROR] Plugin 'InnoDB' init function returned error.
      121118 20:28:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      121118 20:28:55 [ERROR] Unknown/unsupported storage engine: InnoDB
      121118 20:28:55 [ERROR] Aborting

    ulimit -a:
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 13089
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) 1024
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited

    httpd.conf:
      StartServers 10
      MinSpareServers 8
      MaxSpareServers 12
      ServerLimit 256
      MaxClients 256
      MaxRequestsPerChild 4000

    my.cnf:
      innodb_buffer_pool_size=512M
      # Increase Innodb Thread Concurrency = 2 * [numberofCPUs] + 2
      innodb_thread_concurrency=4
      # Set Table Cache
      table_cache=512
      # Set Query Cache_Size
      query_cache_size=64M
      query_cache_limit=2M
      # A sort buffer is used for optimizing sorting
      sort_buffer_size=8M
      # Log slow queries
      slow_query_log=/var/log/mysqld.slow.log
      long_query_time=2
      # performance_tweak
      join_buffer_size=2M

    php.ini:
      memory_limit = 128M
      post_max_size = 8M
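
    A quick back-of-the-envelope memory budget makes the failure mode visible: with MaxClients 256 in httpd.conf, Apache alone can commit far more RAM than the 1.5 GB box holds once MySQL asks for its 512 MB buffer pool, which matches the mmap() errno 12 in the log. A minimal sketch in Python, where the per-process Apache footprint and the MySQL overhead are assumed figures, not measurements:

      # Back-of-the-envelope memory budget for the 1.5 GB server above.
      # APACHE_PROCESS_MB and MYSQL_OVERHEAD_MB are assumptions for
      # illustration; measure the real RSS of an httpd worker before
      # relying on them.
      TOTAL_RAM_MB = 1536                # 1.5 GB of dedicated memory
      INNODB_BUFFER_POOL_MB = 512        # innodb_buffer_pool_size=512M
      MYSQL_OVERHEAD_MB = 150            # assumed: caches plus per-connection buffers
      APACHE_PROCESS_MB = 20             # assumed average httpd worker size
      MAX_CLIENTS = 256                  # MaxClients from httpd.conf

      apache_worst_case = APACHE_PROCESS_MB * MAX_CLIENTS      # 5120 MB
      mysql_needs = INNODB_BUFFER_POOL_MB + MYSQL_OVERHEAD_MB  # 662 MB

      print("Apache worst case: %d MB" % apache_worst_case)
      print("MySQL needs:       %d MB" % mysql_needs)
      print("Available RAM:     %d MB" % TOTAL_RAM_MB)
      # Under a flood of requests Apache can starve mysqld of memory, so the
      # buffer pool mmap(549453824 bytes) fails and InnoDB aborts, as logged.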

    Read the article

  • Which grabber is good enough to get 1000 fps?

    - by user261002
    I have two frame grabbers and a fast camera (1800+ fps). Can anybody who understands the hardware explain which of the following grabbers can better help me grab 1000 fps? Here are the features of the two grabbers:

    Inspecta-5 Full Camera Link® version:
    · Support for line scan and area cameras.
    · Video data rate of up to 660 MB/sec.
    · PCI-X bus interface with 64-bit data width and 66 MHz clock frequency.
    · PCI bus interface with 32-bit data width and 33 MHz clock frequency.
    · 2 GB onboard memory for fast video streams.
    · Four optocoupled input/output ports for external trigger and encoder signals.
    · 528 MB/sec. maximum data rate on the PCI-X bus.
    · SDK for Windows 2000/XP

    SILICONSOFTWARE V-Series Camera Link "microEnable IV VD4-CL":
    · Camera pixel clock support: 85 MHz
    · Area scan cameras: 32k x 64k max. image size
    · Line scan cameras: 64k max. image width
    · Acquisition buffer: 512 MB DDR-RAM
    · Sustainable transfer rate (max.): 850 MB/sec.
    · microEnable SDK for Windows XP/Vista/7/Linux
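
    Whether either card can keep up depends mostly on the data rate the camera produces at 1000 fps, which the post doesn't give. A quick bandwidth check, with the frame size and bit depth stated explicitly as assumptions:

      # Rough bandwidth check for the two grabbers above.
      # Frame size and bit depth are assumptions for illustration only;
      # plug in the real sensor resolution and pixel format.
      FRAME_WIDTH = 640          # assumed pixels per line
      FRAME_HEIGHT = 480         # assumed lines per frame
      BYTES_PER_PIXEL = 1        # assumed 8-bit mono
      FPS = 1000

      required_mb_s = FRAME_WIDTH * FRAME_HEIGHT * BYTES_PER_PIXEL * FPS / 1e6

      grabbers = {
          "Inspecta-5 (PCI-X bus limit)": 528.0,   # MB/s from the spec above
          "microEnable IV VD4-CL": 850.0,          # MB/s from the spec above
      }

      print("Required: %.0f MB/s" % required_mb_s)  # 307 MB/s for these assumptions
      for name, rate in grabbers.items():
          verdict = "OK" if rate >= required_mb_s else "too slow"
          print("%s: %.0f MB/s -> %s" % (name, rate, verdict))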

    Read the article

  • Pivot Table grand total across columns

    - by Jon
    I'm using Excel 2010 and PowerPivot. I'm trying to calculate confidence and velocity for a development team. Each day I extract some information from our time and defect system and build up a data set; what I need Excel to do is the calculations. So each day I add to my data set one row per task in the current project, with the estimate for that task and the time spent on that task so far. What I want to calculate is estimate/actual for each task, but also for each person. The trouble is that each day's actual is cumulative, so I need to pick out the maximum value for each task; the estimate remains unchanged. I can make this work at the task level with a calculated measure (=MAX(worked)/MAX(estimate)), but I don't know how to total this up for a person: I need the sum of the max worked for each of their tasks. A data set might look like:

      Name  Task  Estimate  Worked
      N1    T1    3         1
      N2    T2    3         1
      N3    T3    4         1
      N1    T1    3         2
      N2    T4    5         1
      N3    T3    4         2
      N1    T5    1         2
      N2    T6    2         3
      N3    T7    3         2

    What I want to see is that for task T1, 2 days were worked against an estimate of 3 days, so 2/3. For person N1, I want to see that they worked a total of 4 days against an estimate of 4 days, so 4/4. For person N2, they worked 5 days against an estimate of 10 days. Any ideas on how I can achieve this?
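
    The aggregation being asked for is "max per (person, task), then sum per person". A minimal Python sketch of that logic, using the sample rows above (the PowerPivot equivalent would wrap the task-level MAX in an iterating aggregate, but that is outside this sketch):

      from collections import defaultdict

      # (name, task, estimate, worked) rows from the sample data set above
      rows = [
          ("N1", "T1", 3, 1), ("N2", "T2", 3, 1), ("N3", "T3", 4, 1),
          ("N1", "T1", 3, 2), ("N2", "T4", 5, 1), ("N3", "T3", 4, 2),
          ("N1", "T5", 1, 2), ("N2", "T6", 2, 3), ("N3", "T7", 3, 2),
      ]

      # worked is cumulative, so keep the max per (name, task);
      # the estimate is constant per task, so the last value seen is the value.
      max_worked = defaultdict(int)
      estimate = {}
      for name, task, est, worked in rows:
          key = (name, task)
          max_worked[key] = max(max_worked[key], worked)
          estimate[key] = est

      # Sum the per-task maxima up to the person level.
      per_person = defaultdict(lambda: [0, 0])   # name -> [worked, estimate]
      for (name, task), w in max_worked.items():
          per_person[name][0] += w
          per_person[name][1] += estimate[(name, task)]

      for name, (w, e) in sorted(per_person.items()):
          print("%s: %d/%d" % (name, w, e))   # N1: 4/4, N2: 5/10, N3: 4/7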

    Read the article

  • Apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat hosted on EC2; the instance type is extra large with 34 GB memory. Our application deals with a lot of external web services, and we have one very lousy external web service that takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes (ps -ef | grep httpd | wc -l = 300). I have googled and found numerous suggestions, but nothing seems to work. The following are some configurations I have made, taken directly from online resources; I have increased the max connection and max client limits in both Apache and Tomcat. Here are the configuration details:

    Apache:
      <IfModule prefork.c>
      StartServers 100
      MinSpareServers 10
      MaxSpareServers 10
      ServerLimit 50000
      MaxClients 50000
      MaxRequestsPerChild 2000
      </IfModule>

    Tomcat:
      <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
        connectionTimeout="600000" redirectPort="8443" enableLookups="false" maxThreads="1500"
        compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
        compression="on"/>

    sysctl.conf:
      net.ipv4.tcp_tw_reuse=1
      net.ipv4.tcp_tw_recycle=1
      fs.file-max = 5049800
      vm.min_free_kbytes = 204800
      vm.page-cluster = 20
      vm.swappiness = 90
      net.ipv4.tcp_rfc1337=1
      net.ipv4.tcp_max_orphans = 65536
      net.ipv4.ip_local_port_range = 5000 65000
      net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2.xlarge server should serve more than 300 requests; I am probably going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second-delayed] web service to respond. Please help.
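
    One sanity check worth running on these numbers is Little's law: the number of connections in flight equals the arrival rate times how long each request is held open. A minimal sketch, with the peak request rate as an assumed figure:

      # Little's law: concurrency = arrival_rate * hold_time.
      # REQUESTS_PER_SECOND is an assumption; substitute the measured peak
      # rate of requests that hit the slow upstream service.
      SLOW_SERVICE_SECONDS = 300.0   # upstream response time at peak
      REQUESTS_PER_SECOND = 1.0      # assumed arrival rate of slow requests

      in_flight = REQUESTS_PER_SECOND * SLOW_SERVICE_SECONDS
      print("httpd workers tied up waiting: %.0f" % in_flight)   # 300
      # Even one slow request per second pins ~300 prefork workers, which
      # matches the observed plateau; raising MaxClients alone just raises
      # the number of stuck processes rather than the throughput.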

    Read the article

  • GWB | 30 Posts in 60 Days Update

    - by Staff of Geeks
    One month after the contest started, we definitely have some leaders and one blogger who has reached the mark. Keep up the good work, guys; I have really enjoyed the content being produced by our bloggers.

    Current Winners:
    · Enrique Lima (37 posts) - http://geekswithblogs.net/enriquelima

    Almost There:
    · Stuart Brierley (28 posts) - http://geekswithblogs.net/StuartBrierley
    · Dave Campbell (26 posts) - http://geekswithblogs.net/WynApseTechnicalMusings
    · Eric Nelson (23 posts) - http://geekswithblogs.net/iupdateable

    Coming Along:
    · Liam McLennan (17 posts) - http://geekswithblogs.net/liammclennan
    · Christopher House (13 posts) - http://geekswithblogs.net/13DaysaWeek
    · mbcrump (13 posts) - http://geekswithblogs.net/mbcrump
    · Steve Michelotti (10 posts) - http://geekswithblogs.net/michelotti
    · Michael Freidgeim (9 posts) - http://geekswithblogs.net/mnf
    · MarkPearl (9 posts) - http://geekswithblogs.net/MarkPearl
    · Brian Schroer (8 posts) - http://geekswithblogs.net/brians
    · Chris Williams (8 posts) - http://geekswithblogs.net/cwilliams
    · CatherineRussell (7 posts) - http://geekswithblogs.net/CatherineRussell
    · Shawn Cicoria (7 posts) - http://geekswithblogs.net/cicorias
    · Matt Christian (7 posts) - http://geekswithblogs.net/CodeBlog
    · James Michael Hare (7 posts) - http://geekswithblogs.net/BlackRabbitCoder
    · John Blumenauer (7 posts) - http://geekswithblogs.net/jblumenauer
    · Scott Dorman (7 posts) - http://geekswithblogs.net/sdorman

    Technorati Tags: Standings, Geekswithblogs, 30 in 60

    Read the article

  • SOA Galore: New Books for Technical Eyes Only, by Bob Rhubart

    - by JuergenKress
    In my part of the world the weather has taken its seasonal turn toward the kind of cold, damp, miserable stuff that offers a major motivation to stay indoors. While I plan to spend some of the indoor time working my way through the new 50th anniversary James Bond box set, I will also devote some time to improving my mind rather than my martini-mixing skills by catching up on my reading. If you are in a similar situation, you might want to spend some of your time with these new technical books written by our community members:

    · Oracle SOA Suite 11g Administrator's Handbook by Ahmed Aboulnaga and Arun Pareek
    · Oracle SOA Suite 11g Developer's Cookbook by Antony Reynolds and Matt Wright
    · Oracle BPM Suite 11g: Advanced BPMN Topics by Mark Nelson and Tanya Williams

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Technorati Tags: SOA books,BPM books,education,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • The Ins and Outs of Effective Smart Grid Data Management

    - by caroline.yu
    Oracle Utilities and Accenture recently sponsored a one-hour Web cast entitled, "The Ins and Outs of Effective Smart Grid Data Management." Oracle and Accenture created this Web cast to help utilities better understand the types of data collected over smart grid networks and the issues associated with mapping out a coherent information management strategy. The Web cast also addressed important points that utilities must consider with the imminent flood of data that both present and next-generation smart grid components will generate. The three speakers, including Oracle Utilities' Brad Williams, focused on the key factors associated with taking the millions of data points captured in real time and implementing the strategies, frameworks and technologies that enable utilities to process, store, analyze, visualize, integrate, transport and transform data into the information required to deliver targeted business benefits. The Web cast replay is available here. The Web cast slides are available here.

    Read the article

  • GWB: 5 yr anniversary

    - by Theo Moore
    Wow, I just realized it's my 5-year anniversary on GeeksWithBlogs. Hard to believe so much time has passed. I paged back through some of my early posts, curious about the sorts of things I used to post. It's interesting to see how my focus has changed, and what really hasn't. I was also reminded that Chris Williams and I have been friends for that long. I don't blog nearly as often now as I used to, but I still really like the GWB community, and I am honoured to be allowed to continue to be a part of it. Another 5 years ahead (or more), I hope. :-)

    Read the article

  • BPM industry papers for Financial Services, Insurance, and Retail, plus additional BPM material

    - by JuergenKress
    Whitepapers:
    · BPM for Financial Services
    · Oracle BPM for Insurance
    · Oracle BPM for Retail
    · BPM 11g Patterns and Practices in Industry
    · BPM Without Barriers

    Assessment: BPM Maturity - Online Self Assessment - Link

    New book: "Oracle BPM Suite 11g: Advanced BPMN Topics" by Mark Nelson and Tanya Williams - Packt Publishing

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Technorati Tags: BPM,BPM FSI,BPM Insurance,BPM retail,BPM industries,BPM without barriers,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • SOA Galore: New Books for Technical Eyes Only

    - by Bob Rhubart
    In my part of the world the weather has taken its seasonal turn toward the kind of cold, damp, miserable stuff that offers a major motivation to stay indoors. While I plan on spending some of that indoor time working my way through the new 50th anniversary James Bond box set, I will also devote some time to improving my mind rather than my martini-mixing skills by catching up on my reading. If you're in a similar situation, you might want to spend some of your time with these new technical books written by community members:

    · Oracle SOA Suite 11g Administrator's Handbook by Ahmed Aboulnaga and Arun Pareek
    · Oracle BPM Suite 11g: Advanced BPMN Topics by Mark Nelson and Tanya Williams
    · Oracle SOA Suite 11g Developer's Cookbook by Antony Reynolds and Matt Wright (coming in December; available for pre-order)

    Read the article

  • Twitter Tuesday - Top 10 @ArchBeat Tweets - May 27 - June 2, 2014

    - by OTN ArchBeat
    The Top 10 tweets from @OTNArchBeat for the last seven days, May 27 - June 2, 2014.

    · RT @Java_EE: We changed the term from #J2EE and #JEE to Java EE in May 2006. Let's educate all users and especially recruiters. Retweet! (May 30, 2014)
    · Video: #kscope14 Preview: @timtow on Essbase Java API and @ODTUG Community (Jun 02, 2014)
    · #GoldenGate and #ODI - A Perfect Match in 12c - Part 1: Getting Started | Michael Rainey (Jun 02, 2014)
    · Podcast: Developing Enterprise Mobile Apps - Part 2 w/ @chriscmuir @fnimphiu @stevendavelaar @lucb_ (May 29, 2014)
    · Caveats on Using #WebLogic Server with JDK7 | @JayJayZheng (May 28, 2014)
    · SOA and Business Processes: You are the Process! @gschmutz @dschmied @t_winterberg et al #industrialsoa (May 27, 2014)
    · Video: #Kscope14 Preview: Data Modeling and Moving Meditation with @KentGraziano (May 28, 2014)
    · #Kscope14 Preview: @ericerikson on #HFM Metadata Diagnostics and more @ODTUG (Jun 02, 2014)
    · Extract Data from #FusionApps via Web Services | Richard Williams (May 29, 2014)
    · Top 10 @ArchBeat Tweets - May 20-26 #KScope14 #OBIEE #WebLogic #WebCenter (May 27, 2014)

    Read the article

  • Repairing the damage from McAfee's false positive could take weeks; the company is "studying a way to make sure it never happens again"

    Update, 23/04/10: repairing the damage from McAfee's false positive would take 30 minutes per PC; the company is studying "a way to make sure it never happens again". Amrit Williams, chief technology officer of BigFix, a company specializing in IT security management, believes that the problems caused by the false positive in McAfee's update (see above) will be particularly thorny to resolve. In a statement to the American daily USA Today, he says there is "no way to automate the process". According to him, repairing the damage will have to be done machine by machine...

    Read the article

  • Google celebrates the 65th anniversary of Baby, the first computer to run a program stored in its memory

    Sixty-five years ago, on 21 June 1948, the Small-Scale Experimental Machine (SSEM), also known as Baby, was born. The SSEM was designed by "Freddie" Williams, Tom Kilburn and Geoff Tootill at the University of Manchester. Google celebrated the anniversary of Baby, the very first machine to run an electronic program stored in its memory. Before it, computers executed instructions from external media such as cards...

    Read the article

  • Maximum number of web parts/web part zones per page? (Microsoft SharePoint 2007)

    - by Kache4
    I've already found the max number of web parts per page:

    · Customizable - in the web.config file: <configuration><SharePoint><WebPartLimits MaxZoneParts="XX" />
    · 50 (default) - http://technet.microsoft.com/en-us/library/cc262787.aspx
    · 100 (recommended max) - http://technet.microsoft.com/en-us/library/cc287743.aspx

    However, I've been unable to find:

    · the maximum number of web parts per web part zone
    · the maximum number of web part zones per page

    Read the article

  • Python performance: iteration and operations on nested lists

    - by J.J.
    Problem: Hey folks. I'm looking for some advice on Python performance. Some background on my problem:

    Given:
    · A mesh of nodes of size (x, y), each with a value (0...255) starting at 0
    · A list of N input coordinates, each at a specified location within the range (0...x, 0...y)

    Increment the value of the node at each input coordinate, and of the node's neighbors within range Z, up to a maximum of 255. Neighbors beyond the mesh edge are ignored (no wrapping).

    BASE CASE: a mesh of 1024x1024 nodes, with 400 input coordinates and a range Z of 75 nodes. Processing should be O(x*y*Z*N). I expect x, y, and Z to remain roughly at the base-case values, but the number of input coordinates N could increase up to 100,000. My goal is to minimize processing time.

    Current results: I have 2 implementations, f1 and f2. Running speed on my 2.26 GHz Intel Core 2 Duo with Python 2.6.1: f1: 2.9s, f2: 1.8s. Code is included below for your perusal.

    Question: How can I further reduce the processing time? I'd prefer sub-1.0s for the test parameters. Please keep the recommendations to native Python; I know I could move to a third-party package such as numpy, but I'm trying to avoid third-party packages. Also, I've generated random input coordinates and simplified the definition of the node-value updates to keep our discussion simple; the specifics have to change slightly and are outside the scope of my question. Thanks much!

    f1 is the initial naive implementation: three nested for loops. 2.9s:

      import random

      def f1(x, y, n, z):
          rows = []
          for i in range(x):
              rows.append([0 for i in xrange(y)])
          for i in range(n):
              inputX, inputY = (int(x*random.random()), int(y*random.random()))
              topleft = (inputX - z, inputY - z)
              for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
                  for j in xrange(max(0, topleft[1]), min(topleft[1]+(z*2), y)):
                      if rows[i][j] <= 255:
                          rows[i][j] += 1

    f2 replaces the inner for loop with a list comprehension. 1.8s:

      def f2(x, y, n, z):
          rows = []
          for i in range(x):
              rows.append([0 for i in xrange(y)])
          for i in range(n):
              inputX, inputY = (int(x*random.random()), int(y*random.random()))
              topleft = (inputX - z, inputY - z)
              for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
                  l = max(0, topleft[1])
                  r = min(topleft[1]+(z*2), y)
                  rows[i][l:r] = [j+1 for j in rows[i][l:r] if j < 255]
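
    One caveat worth flagging in f2: the comprehension's "if j < 255" acts as a filter, so saturated elements are dropped from the slice and the row shrinks whenever any value has reached 255; f1's "<= 255" guard also lets values reach 256. A minimal corrected variant (a sketch of the fix, not a benchmarked answer to the original speed question):

      import random

      def f3(x, y, n, z):
          # Same structure as f2, but the comprehension keeps saturated
          # elements instead of filtering them out, and values are clamped
          # at 255 with min() rather than a filtering if-clause.
          rows = [[0] * y for _ in xrange(x)]
          for _ in xrange(n):
              inputX, inputY = int(x * random.random()), int(y * random.random())
              top, left = inputX - z, inputY - z
              l = max(0, left)
              r = min(left + (z * 2), y)
              for i in xrange(max(0, top), min(top + (z * 2), x)):
                  row = rows[i]
                  row[l:r] = [min(j + 1, 255) for j in row[l:r]]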

    Read the article

  • Large file uploads from web pages

    - by jerrygarciuh
    Hi folks, I code primarily in PHP and Perl. I have a client who is insisting on accepting video submissions (any encoding) from the public via one of their pages, rather than letting YouTube do its job. The server in question is a virtual machine, and I can adjust ini settings for max post size, max upload size, etc. as needed. My initial thought is to use a Flash-based uploader with PHP on the back end, but I wondered if someone might have useful advice and experience on the subject? Peace, JG

    Read the article

  • Simple LINQ Aggregate Query

    - by Steven
    What is the VB.NET equivalent of the following pseudo-code using LINQ?

      select min(credits) minCredits,
             max(credits) maxCredits,
             min(dollars) minDollars,
             max(dollars) maxDollars
      from players

      minCredits_lbl.Text = minCredits
      ...
      maxDollars_lbl.Text = maxDollars

    I have the following, but I can't figure out how to get any further:

      Dim query = From row in myDataSet.Tables("Players") _
                  Select credits = row("credits"), dollars = row("dollars")

    Read the article

  • CrystalReports.NET - varbinary blob

    - by BhejaFry
    Hi folks, what's the proper way to display an image stored in a SQL Server database as a varbinary(max) column in Crystal Reports for .NET? I have added a 'blobfieldobject' item in Crystal Reports, and it is bound to a DataTable column of type varbinary(max), but the image won't show up; instead, a dark background is displayed. TIA

    Read the article

  • Lawler's Algorithm Implementation Assistance

    - by Richard Knop
    Here is my implementation of Lawler's algorithm in PHP (I know... but I'm used to it):

      <?php
      $jobs = array(1, 2, 3, 4, 5, 6);
      $jobsSubset = array(2, 5, 6);
      $n = count($jobs);
      $processingTimes = array(2, 3, 4, 3, 2, 1);
      $dueDates = array(3, 15, 9, 7, 11, 20);
      $optimalSchedule = array();
      foreach ($jobs as $j) {
          $optimalSchedule[] = 0;
      }
      $dicreasedCardinality = array();
      for ($i = $n; $i >= 1; $i--) {
          $x = 0;
          $max = 0;
          // loop through all jobs
          for ($j = 0; $j < $i; $j++) {
              // ignore if $j already is in the $dicreasedCardinality array
              if (false === in_array($j, $dicreasedCardinality)) {
                  // if the job has no successor in $jobsSubset
                  if (false === isset($jobs[$j+1]) || false === in_array($jobs[$j+1], $jobsSubset)) {
                      // find the index of the job with the maximum due date
                      // amongst jobs with no successor in $jobsSubset
                      if ($x < $dueDates[$j]) {
                          $x = $dueDates[$j];
                          $max = $j;
                      }
                  }
              }
          }
          // move the job to the end of $optimalSchedule
          $optimalSchedule[$i-1] = $jobs[$max];
          // decrease the cardinality of $jobs
          $dicreasedCardinality[] = $max;
      }
      print_r($optimalSchedule);

    The above returns a schedule like this:

      Array ( [0] => 1 [1] => 1 [2] => 1 [3] => 3 [4] => 2 [5] => 6 )

    which doesn't seem right to me. The problem might be with my implementation of the algorithm, because I am not sure I understand it correctly. I used this source to implement it: http://www.google.com/books?id=aSiBs6PDm9AC&pg=PA166&dq=lawler%27s+algorithm+code&lr=&hl=sk&cd=4#v=onepage&q=&f=false. The description there is a little confusing; for example, I didn't quite get how the subset D is defined (I guess it is arbitrary). Could anyone help me out with this? I have been trying to find sources with a simpler explanation of the algorithm, but all the sources I found were even more complicated (with math proofs and such), so I am stuck with the link above. Yes, this is homework, if it wasn't obvious. I still have a few weeks to crack this, but I have already spent a few days trying to work out exactly how this algorithm works, with no success, so I don't think I will get any brighter during that time.
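
    For reference, here is a minimal Python sketch of the textbook procedure (Lawler's algorithm for scheduling with precedence constraints), using the post's data and assuming the objective is minimizing maximum lateness, so the job with the latest due date among those with no unscheduled successor goes last. The precedence relation below is a hypothetical stand-in, since the original post doesn't state one explicitly:

      def lawler(jobs, processing, due, successors):
          """Schedule jobs to minimize maximum lateness (Lawler's algorithm).

          successors maps a job to the set of jobs that must come after it.
          At each step, among unscheduled jobs whose successors are all
          already scheduled, the job minimizing lateness at time t, i.e.
          t - due[j] (the one with the largest due date), is placed last.
          """
          unscheduled = set(jobs)
          t = sum(processing[j] for j in jobs)   # completion time of the last slot
          schedule = [None] * len(jobs)
          for slot in range(len(jobs) - 1, -1, -1):
              eligible = [j for j in unscheduled
                          if not (successors.get(j, set()) & unscheduled)]
              best = min(eligible, key=lambda j: t - due[j])  # max due date
              schedule[slot] = best
              unscheduled.remove(best)
              t -= processing[best]
          return schedule

      # Data from the post; the precedence constraints are hypothetical,
      # chosen only to exercise the successor check.
      jobs = [1, 2, 3, 4, 5, 6]
      processing = {1: 2, 2: 3, 3: 4, 4: 3, 5: 2, 6: 1}
      due = {1: 3, 2: 15, 3: 9, 4: 7, 5: 11, 6: 20}
      successors = {1: {2}, 4: {5, 6}}   # hypothetical: 1 before 2, 4 before 5 and 6

      print(lawler(jobs, processing, due, successors))   # [1, 4, 3, 5, 2, 6]

    Note that a valid schedule is a permutation of the jobs; a result that repeats job 1 three times, as in the PHP output above, signals that the bookkeeping of scheduled jobs is off.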

    Read the article

  • LINQ Query Help!

    - by rk1962
    Can someone help me convert the following SQL query into LINQ?

      select convert(varchar(10), date, 110) as 'Date',
             max(users) as 'Maximum Number of Users',
             max(transactions) as 'Maximum Number of Transactions'
      from stats
      where datepart(Year, Date) = '2010'
      group by convert(varchar(10), date, 110)
      order by convert(varchar(10), date, 110)

    Thank you in advance!

    Read the article

  • SQL Server 2008 Geography .STBuffer() distance measurement units

    - by Chris
    I'm working with a geographic point using lat/long and need to find other points in our database within a 5-mile radius of that point. However, I can't seem to find out what the "units" are for STBuffer; they don't seem to correspond to feet, miles, meters, kilometers, etc. The documentation only refers to them as "units". Any suggestions? Thanks

      [...] from geography::STGeomFromText('POINT(x y)', 4326)
            .STBuffer(z)
            .STIntersects(geography::STGeomFromText('POINT(' + CAST(v.Longitude as varchar(max)) + ' ' + CAST(v.Latitude as varchar(max)) + ')', 4326)) = 1

    Read the article

  • Initial Genetic Programming Parameters

    - by cmptrer
    I did a little GP work (note: very little) in college and have been playing around with it recently. My question is in regard to the initial run settings (population size, number of generations, min/max depth of trees, min/max depth of initial trees, percentages to use for different reproduction operations, etc.). What is the normal practice for setting these parameters? What papers/sites do people use as a good guide?
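
    As a concrete starting point, many papers begin from the parameter set popularized by Koza's Genetic Programming books and tune from there. A sketch of those commonly cited defaults follows; treat the exact numbers as conventions to tune per problem, not a standard (the tournament-size figure in particular comes from later practice rather than Koza's original runs):

      # Commonly cited Koza-style starting parameters for a GP run.
      # These are conventional defaults from the literature, not
      # requirements; most practitioners tune them per problem.
      gp_params = {
          "population_size": 500,
          "generations": 51,             # generation 0 plus 50
          "crossover_rate": 0.90,
          "reproduction_rate": 0.10,
          "mutation_rate": 0.0,          # Koza's early runs used none; many add a little
          "initial_tree_depth": (2, 6),  # ramped half-and-half initialization
          "max_tree_depth": 17,
          "selection": "tournament (size 7) or fitness-proportionate",
      }

      for name, value in gp_params.items():
          print("%-20s %s" % (name, value))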

    Read the article

  • In a SQL GROUP BY query, what value is used for the non-aggregate columns?

    - by Queencity13
    Say I've got the following data back from a SQL query:

      Lastname  Firstname  Age
      Anderson  Jane       28
      Anderson  Lisa       22
      Anderson  Jack       37

    If I want to know the age of the oldest person with the last name Anderson, I can select MAX(Age) and GROUP BY Lastname. But I also want to know the first name of that oldest person. How can I make sure that, when the Firstname values are collapsed into one row by the GROUP BY, I get the Firstname value from the same row where I got the max age?
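
    The underlying operation is an argmax per group: within each Lastname group, find the whole row holding the maximum Age. (In SQL this is typically done by joining the grouped maxima back to the table, or with a window function; GROUP BY alone does not define which row's Firstname you get.) A minimal Python sketch of that logic:

      # Per-group argmax: for each last name, keep the full row with the max age.
      rows = [
          ("Anderson", "Jane", 28),
          ("Anderson", "Lisa", 22),
          ("Anderson", "Jack", 37),
      ]

      oldest = {}
      for lastname, firstname, age in rows:
          # keep the row only if it beats the current maximum for this group
          if lastname not in oldest or age > oldest[lastname][2]:
              oldest[lastname] = (lastname, firstname, age)

      for lastname, firstname, age in oldest.values():
          print("%s, %s: %d" % (lastname, firstname, age))   # Anderson, Jack: 37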

    Read the article
