Search Results

Search found 366 results on 15 pages for 'gary chambers'.

Page 3/15

  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes given a simple node.id, node.parentId association. It's very simple and works well enough... but, well, I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value but still keep the tasty recursion?

        /*
         * A tree node
         */
        case class TreeNode(val id: String, val parentId: String) {
          var left: Int = 0
          var right: Int = 0
        }

        /*
         * a method to compute the left/right node values
         */
        def walktree(node: TreeNode) = {
          /*
           * increment state for the inner function
           */
          var c = 0

          /*
           * A method to set the increment state
           */
          def increment = { c += 1; c } // poo

          /*
           * the tasty inner method
           * treeNodes is a List[TreeNode]
           */
          def walk(node: TreeNode): Unit = {
            node.left = increment
            /*
             * recurse on all direct descendants
             */
            treeNodes filter (_.parentId == node.id) foreach (walk(_))
            node.right = increment
          }
          walk(node)
        }

        walktree(someRootNode)

    Edit - The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time. I am pulling a flat list into memory, and all I have is an association via node ids as pertains to parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections, I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed, I have settled on this synthesised version:

        def walktree(node: TreeNode) = {
          def walk(node: TreeNode, counter: Int): Int = {
            node.left = counter
            node.right = treeNodes
              .filter(_.parentId == node.id)
              .foldLeft(counter + 1) { (counter, curnode) =>
                walk(curnode, counter) + 1
              }
            node.right
          }
          walk(node, 1)
        }

    Read the article

  • SVN Externals in a different SCM

    - by Sean Chambers
    At a previous workplace we used svn externals to update dependent projects when a shared component was updated. This made it easy to see what those changes broke, as well as to update dependent projects to the latest version of a shared component automatically, without any intervention. At a new workplace we are using cc.net with Surround SCM, and I'm trying to find something similar in Surround. I haven't found anything like externals, only "shared files", but unlike externals, shared files don't allow you to point at a specific revision of a file. I'm interested in what other people are doing in these scenarios to lean on their continuous integration server and treat it as a true integration server rather than just a "continuous build" server. Does anyone know of a tool or technique that provides "externals" behavior without using svn? I suppose I could keep an XML registry file of which projects depend on which assemblies and whether they should use the latest version, but this seems like overkill.
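
    For reference, the Subversion feature being described, an external pinned to a specific revision, looks something like this (repository URL, revision number, and directory name are placeholders):

        svn propset svn:externals "shared -r148 http://svn.example.com/repo/shared" .

    Surround's shared files have no direct equivalent of the -rN peg, which is exactly the gap described above.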

    Read the article

  • Using asp.net mvc model binders generically

    - by Sean Chambers
    I have a hierarchy of classes that all derive from a base type, and the base type also implements an interface. What I want to do is have one controller handle the management of the entire hierarchy (as the actions exposed via the controller are identical). That being said, I want the views to have the type-specific fields on them and the model binder to bind against a hidden field value. Something like:

        <input type="text" name="model.DerivedTypeSpecificField" />
        <input type="hidden" name="modelType" value="MyDerivedType" />

    The ASP.NET MVC model binders seem to require the concrete type that they will be creating; for that reason I would need to create a different controller for every derived type. Has anyone done this before, or does anyone know how to make the model binder behave this way? I could write my own model binder, but I don't want anything past the basic model binding behavior of assigning properties and building arrays on the target type. Thanks!
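
    A rough sketch of one common workaround (not the asker's code; the field name and type lookup are assumptions, and in production the posted type name should be resolved against a whitelist rather than fed straight to Type.GetType):

        public class DerivedTypeModelBinder : DefaultModelBinder
        {
            protected override object CreateModel(ControllerContext controllerContext,
                ModelBindingContext bindingContext, Type modelType)
            {
                // Read the concrete type name posted in the hidden "modelType" field.
                var typeName = controllerContext.HttpContext.Request.Form["modelType"];
                var concreteType = typeName != null ? Type.GetType(typeName) : null;

                if (concreteType != null && modelType.IsAssignableFrom(concreteType))
                {
                    // Re-point the metadata at the derived type so its extra
                    // properties get bound, then let the default binder do the rest.
                    bindingContext.ModelMetadata = ModelMetadataProviders.Current
                        .GetMetadataForType(null, concreteType);
                    return Activator.CreateInstance(concreteType);
                }
                return base.CreateModel(controllerContext, bindingContext, modelType);
            }
        }

    Registered once against the base type (e.g. ModelBinders.Binders.Add(typeof(MyBaseType), new DerivedTypeModelBinder())), a single controller can then accept the whole hierarchy.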

    Read the article

  • The maximum row size for the used table type, not counting BLOBs, is 65535. You have to change some columns to TEXT or BLOBs

    - by Matthew Chambers
    Hello, I am getting the message below on a table I am trying to create:

        The maximum row size for the used table type, not counting BLOBs, is 65535.
        You have to change some columns to TEXT or BLOBs

    Does anyone know the answer to this, please?

        -- Table warrington_central.job
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS warrington_central.job (
          id MEDIUMINT(8) UNSIGNED NOT NULL AUTO_INCREMENT,
          alias_title VARCHAR(255) NOT NULL,
          reference_number VARCHAR(100) NOT NULL,
          title VARCHAR(255) NOT NULL,
          primary_category SMALLINT(5) UNSIGNED NOT NULL,
          secondary_category SMALLINT(5) UNSIGNED NOT NULL,
          tertiary_category SMALLINT(5) UNSIGNED NULL,
          address_id BIGINT(20) UNSIGNED NOT NULL,
          geolocation_id BIGINT(20) UNSIGNED NULL,
          company VARCHAR(255) NOT NULL,
          description VARCHAR(10000) NOT NULL,
          skills_required VARCHAR(10000) NOT NULL,
          job_type TINYINT(2) UNSIGNED NOT NULL,
          experience_months_required TINYINT(2) UNSIGNED NOT NULL,
          experience_years_required TINYINT(2) UNSIGNED NOT NULL,
          salary_range VARCHAR(30) NOT NULL,
          extra_benefits_above_salary VARCHAR(500) NOT NULL,
          available_from DATE NULL,
          available_to DATE NULL,
          extra_location_details VARCHAR(1000) NOT NULL,
          contact_email VARCHAR(100) NOT NULL,
          contact_phone_number VARCHAR(20) NOT NULL,
          contact_mobile_number VARCHAR(20) NOT NULL,
          terms_conditions_application VARCHAR(5000) NOT NULL,
          link_to_profile ENUM('0','1') NOT NULL,
          created_on DATETIME NOT NULL,
          updated_on DATETIME NOT NULL,
          updated_by BIGINT(20) UNSIGNED NOT NULL,
          add_contact_form ENUM('0','1') NOT NULL,
          admin_package_id TINYINT(1) UNSIGNED NOT NULL,
          package_start_date DATETIME NOT NULL,
          package_end_date DATETIME NULL,
          package_comment VARCHAR(500) NOT NULL,
          viewable_to_members_only ENUM('0','1') NOT NULL,
          advertise_to DATETIME NULL,
          show_comment ENUM('0','1') NOT NULL,
          hits BIGINT(20) UNSIGNED NOT NULL DEFAULT 0,
          visible ENUM('0','1') NOT NULL DEFAULT '0',
          approved ENUM('I  /* large SQL query (3.9 KB), snipped at 2,000 characters */

        /* SQL Error (1118): Row size too large. The maximum row size for the used
           table type, not counting BLOBs, is 65535. You have to change some columns
           to TEXT or BLOBs */
        SHOW WARNINGS;
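
    The usual fix, as the error text says, is to move the largest VARCHAR columns to TEXT: VARCHAR contents count toward the 65,535-byte row limit (and multi-byte character sets multiply the byte cost), while TEXT/BLOB values are stored off the row with only a small per-column overhead. A hedged sketch against the table above:

        ALTER TABLE warrington_central.job
          MODIFY description TEXT NOT NULL,
          MODIFY skills_required TEXT NOT NULL,
          MODIFY terms_conditions_application TEXT NOT NULL;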

    Read the article

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently we have been using topic branches to keep track of features, then merging them into master locally and pushing them to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, so a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the code as of now. So we end up with merge commits everywhere and a git log rivaling a friendship bracelet. Rebasing is the obvious choice. What I would like is to:

    - create topic branches holding several commits
    - checkout master and pull (fast-forward, because I haven't committed to master)
    - rebase the topic branches onto the new head of master (so the topics start at master's head), then bring master up to my topic head

    My current way of doing this is listed below:

        git checkout master
        git rebase master topic_1
        git rebase topic_1 topic_2
        git checkout master
        git rebase topic_2
        git branch -d topic_1 topic_2

    Is there a faster way to do this?
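
    One small tightening worth noting (a sketch, not a one-command answer, which git does not really offer for stacked topics): since master never has local commits in this workflow, the final rebase is just a fast-forward and can be made explicit, so it fails loudly if master has diverged:

        git checkout master
        git merge --ff-only topic_2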

    Read the article

  • Using Nuxeo, how do I lock down a page so that it redirects to the login page if the user is unauthenticated?

    - by Aaron Chambers
    I have been put on to a project using Nuxeo, late in its lifecycle, and need to change a few things before it goes live. I am having trouble finding out where I need to look to lock down a Nuxeo-based application so that a user is redirected to the login page if they are unauthorised and access a restricted page. Can someone please shoot me some direction on where this sort of logic is kept or defined?

    Read the article

  • php soapclient returns null but getPreviousResults has proper results

    - by Joseph.Chambers
    I've ran into trouble with SOAP, I've never had this issue before and can't find any information on line that helps me solve it. The following code $wsdl = "path/to/my/wsdl"; $client = new SoapClient($wsdl, array('trace' => true)); //$$textinput is passed in and is a very large string with rows in <item></item> tags $soapInput = new SoapVar($textinput, XSD_ANYXML); $res = $client->dataprofilingservice(array("contents" => $soapInput)); $response = $client->__getLastResponse(); var_dump($res);//outputs null var_dump($response);//provides the proper response as I would expect. I've tried passing params into the SoapClient constructor to define soap version but that didnt' help. I've also tried it with the trace param set to false and not present which as expected made $response null but $res was still null. I've tried the code on both a linux and windows install running Apache. The function definition in the WSDL is (xxxx is for security reasons) <portType name="xxxxServiceSoap"> <operation name="dataprofilingservice"> <input message="tns:dataprofilingserviceSoapIn"/> <output message="tns:dataprofilingserviceSoapOut"/> </operation> </portType> I have it working using the __getLastResponse() but its annoying me it will not work properly. I've put together a small testing script, does anyone see any issues here. //very simplifed dataset that would normally be //read in from a CSV file of about 1mb $soapInput = getSoapInput("asdf,qwer\r\nzzxvc,ewrwe\r\n23424,2113"); $wsdl = "path to wsdl"; try { $client = new SoapClient($wsdl,array('trace' => true,'exceptions' => true)); } catch (SoapFault $fault) { $error = 1; var_dump($fault); } try { $res = $client->dataprofilingservice(array("contents" => $soapInput)); $response = $client->__getLastResponse(); echo htmlentities($client->__getLastRequest()); echo '<hr>'; var_dump($res); echo "<hr>"; echo(htmlentities($response)); } catch (SoapFault $fault) { $error = 1; var_dump($fault); } function getSoapInput($input){ $rows = array(); $userInputs = explode("\r\n", $input); $userInputs = array_filter($userInputs); // $inputTemplate = " <contents>%s</contents>"; $rowTemplate = "<Item>%s</Item>"; // $soapString = ""; foreach ($userInputs as $row) { // sanitize $row = htmlspecialchars(addslashes($row)); $xmlStr = sprintf($rowTemplate, $row); $rows[] = $xmlStr; } $textinput = sprintf($inputTemplate, implode(PHP_EOL, $rows)); $soapInput = new SoapVar($textinput, XSD_ANYXML); return $soapInput; }

    Read the article

  • How would I authenticate against a local windows user on another machine in an ASP.NET application?

    - by Daniel Chambers
    In my ASP.NET application, I need to be able to authenticate/authorise against local Windows users/groups (i.e. not Active Directory) on a different machine, as well as be able to change the passwords of said remote local Windows accounts. Yes, I know Active Directory is built for this sort of thing, but unfortunately the higher-ups have decreed it needs to be done this way (so authentication against users in a database is out as well). I've tried using DirectoryEntry and WinNT like so:

        DirectoryEntry user = new DirectoryEntry(
            String.Format("WinNT://{0}/{1},User", serverName, username),
            username, password, AuthenticationTypes.Secure);

    but this results in an exception when you try to log in more than one user:

        Multiple connections to a server or shared resource by the same user,
        using more than one user name, are not allowed. Disconnect all previous
        connections to the server or shared resource and try again.

    I've tried making sure my DirectoryEntries are used inside a using block, so they're disposed properly, but this doesn't seem to fix the issue. Plus, even if that did work, it is possible that two users could hit that line of code concurrently and therefore try to create multiple connections, so it would be fragile anyway. Is there a better way to authenticate against local Windows accounts on a remote machine, authorise against their groups, and change their passwords? Thanks for your help in advance.
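
    One avenue worth testing (a sketch, assuming .NET 3.5+; variable names are placeholders): System.DirectoryServices.AccountManagement can target a remote machine's local account database, which may sidestep managing WinNT:// connections by hand. Whether it avoids the multiple-connection error under concurrent logins would need verification:

        using System.DirectoryServices.AccountManagement;

        // Validate a local account on the remote machine, then change its password.
        using (var ctx = new PrincipalContext(ContextType.Machine, serverName))
        {
            if (ctx.ValidateCredentials(username, password))
            {
                using (var user = UserPrincipal.FindByIdentity(ctx, username))
                {
                    user.ChangePassword(password, newPassword);
                }
            }
        }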

    Read the article

  • Vim navigation clunkiness

    - by Sean Chambers
    I've committed myself to diving into vim to become faster at writing code for ruby/python and I'm having a hard time navigating around files. Mainly, I'm referring to switching between insert mode and navigation modes. Maybe I'm just not completely used to the editor yet but it feels very awkward to constantly be switching in and out of insert mode. Is this something that will go away with time? Are there any tricks to getting quicker at moving in and out of insert mode?

    Read the article

  • c# .net MVC4 Model to represent table or form

    - by Matthew Chambers
    Hello, I am a little confused with regards to models in MVC 4 and thought someone may be able to point me in the right direction. That would be most appreciated. For example, say I have a table with the following fields:

        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 5)]
        public string UserName { get; set; }

        [Required(ErrorMessage = "Email Address is Required")]
        [StringLength(15, ErrorMessage = "Email Address must be between {0} and {1} in size", MinimumLength = 5)]
        [DataType(DataType.EmailAddress)]
        [Display(Name = "Email")]
        public string Email { get; set; }

        [MaxLength(25)]
        [Display(Name = "Mobile Telephone Number")]
        public string Mobile { get; set; }

        [MaxLength(500)]
        [Display(Name = "Headline")]
        public string Headline { get; set; }

        [Required]
        [StringLength(200)]
        [Display(Name = "First Name")]
        public string FirstName { get; set; }

        [Required]
        [StringLength(200)]
        [Display(Name = "Surname")]
        public string Surname { get; set; }

        public virtual int? DayOfBirthId { get; set; }
        public virtual DayOfBirth DayOfBirth { get; set; }
        public virtual int? MonthOfBirthId { get; set; }
        public virtual MonthOfBirth MonthOfBirth { get; set; }
        public virtual int? YearOfBirthId { get; set; }
        public virtual YearOfBirth YearOfBirth { get; set; }

    This is my user profile table in the database. However, I would like a form that the user registers to the site with. When they first register I do not need all the details, such as telephone; all I really need is their username, email address and password. Do I create another model for this? Or do I have one model, and on the controller set the fields that are not required on registration to null or an empty string? I also have validation, so how would that apply to data that has not been entered on the form? My question is ultimately: should all forms represent models? Should the database be redesigned to meet this requirement? Should the controller set the values that are not required? Or should there be another model that represents the form and maps to this table? I am a little confused on this, and clarification from anyone would be most appreciated.
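
    The conventional answer is a dedicated view model per form, kept separate from the persistence entity. A minimal sketch (class and property names are illustrative, not from the code above):

        public class RegisterViewModel
        {
            [Required]
            [StringLength(100, MinimumLength = 5)]
            public string UserName { get; set; }

            [Required(ErrorMessage = "Email Address is Required")]
            [DataType(DataType.EmailAddress)]
            public string Email { get; set; }

            [Required]
            [DataType(DataType.Password)]
            public string Password { get; set; }
        }

    The controller then maps the posted view model onto the full profile entity (by hand or with a mapping library), so neither the table nor the entity has to bend to fit each individual form, and each form carries only its own validation.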

    Read the article

  • Get Exchange Online Mailbox Size in GB

    - by Brian Jackett
    As mentioned in my previous post, I was recently working with a customer to get started with Exchange Online PowerShell commandlets. In this post I want to follow up and show one example of a difference in output from commandlets in Exchange 2010 on-premises vs. Exchange Online.

    Problem

    The customer was interested in getting the size of mailboxes in GB. For Exchange on-premises this is fairly easy. A fellow PFE, Gary Siepser, wrote an article explaining how to accomplish this (click here). Note that Gary's script will not work when remoting from a local machine that doesn't have the Exchange object model installed. A similar type of scenario exists if you are executing PowerShell against Exchange Online. The data type for TotalItemSize being returned (ByteQuantifiedSize) exists in the Exchange namespace. If the PowerShell session doesn't have access to that namespace (or hasn't loaded it), PowerShell works with an approximation of that data type.

    The customer found a sample script on this TechNet article that they attempted to use (minor edits by me to fit on the page and remove references to deleted item size):

        Get-Mailbox -ResultSize Unlimited | Get-MailboxStatistics | Select DisplayName,StorageLimitStatus, `
        @{name="TotalItemSize (MB)"; expression={[math]::Round( `
        ($_.TotalItemSize.Split("(")[1].Split(" ")[0].Replace(",","")/1MB),2)}}, `
        ItemCount | Sort "TotalItemSize (MB)" -Descending | Export-CSV "C:\My Documents\All Mailboxes.csv" -NoTypeInformation

    The script is targeted at Exchange 2010 but fails for Exchange Online: in Exchange Online the TotalItemSize property does not have a Split method, which ultimately causes the script to fail.

    Solution

    A simple solution is to add a call to the ToString method off of the TotalItemSize property (the added ToString() call on the third line below):

        Get-Mailbox -ResultSize Unlimited | Get-MailboxStatistics | Select DisplayName,StorageLimitStatus, `
        @{name="TotalItemSize (MB)"; expression={[math]::Round( `
        ($_.TotalItemSize.ToString().Split("(")[1].Split(" ")[0].Replace(",","")/1MB),2)}}, `
        ItemCount | Sort "TotalItemSize (MB)" -Descending | Export-CSV "C:\My Documents\All Mailboxes.csv" -NoTypeInformation

    This fixes the script, but the numerous string replacements and splits are an eyesore to me. I attempted to simplify the string manipulation with a regular expression (more info on regular expressions in PowerShell: click here). The result is a workable script with one nice feature: it adds a new member to the mailbox statistics called TotalItemSizeInBytes. With this member you can then convert into any byte level (KB, MB, GB, etc.) that suits your needs. You can download the full version of this script below (includes commands to connect to an Exchange Online session).

        $UserMailboxStats = Get-Mailbox -RecipientTypeDetails UserMailbox `
            -ResultSize Unlimited | Get-MailboxStatistics
        $UserMailboxStats | Add-Member -MemberType ScriptProperty -Name TotalItemSizeInBytes `
            -Value {$this.TotalItemSize -replace "(.*\()|,| [a-z]*\)", ""}
        $UserMailboxStats | Select-Object DisplayName,@{Name="TotalItemSize (GB)"; `
            Expression={[math]::Round($_.TotalItemSizeInBytes/1GB,2)}}

    Conclusion

    Moving from on-premises to the cloud with PowerShell (and PowerShell remoting in general) can sometimes present new challenges due to what you have access to. This means that you must always test your code and scripts. I still believe that not having to physically RDP to a server is a huge gain over some of the small hurdles you may encounter during the transition. Scripting is the future of administration and makes you more valuable. Hopefully this script and the concepts presented help you be a better admin / developer.

    -Frog Out

    Links

    The Get-MailboxStatistics Cmdlet, the TotalitemSize Property, and that pesky little "b"
    http://blogs.technet.com/b/gary/archive/2010/02/20/the-get-mailboxstatistics-cmdlet-the-totalitemsize-property-and-that-pesky-little-b.aspx

    View Mailbox Sizes and Mailbox Quotas Using Windows PowerShell
    http://technet.microsoft.com/en-us/exchangelabshelp/gg576861#ViewAllMailboxes

    Regular Expressions with Windows PowerShell
    http://www.regular-expressions.info/powershell.html

    "I don't always test my code…" image
    http://blogs.pinkelephant.com/images/uploads/conferences/I-dont-always-test-my-code-But-when-I-do-I-do-it-in-production.jpg

    The One Thing: Brian Jackett and SharePoint 2010
    http://www.youtube.com/watch?v=Sg_h66HMP9o

    Read the article

  • SharePoint 2010 Design & Deployment Best Practices

    - by Michael Van Cleave
    Well, now that SharePoint 2010 has successfully launched and everyone is scratching for every piece of best-practices information they can get their hands on, I would like to invite anyone and everyone to come and take part in ShareSquared's next webinar. The webinar will cover some key information, such as:

    - Pros and cons of the different approaches to installing and configuring SharePoint 2010
    - Configuration best practices for SharePoint 2010 farms
    - Services architecture: dependencies, licensing, and topologies
    - Information architecture guidance for sizing, multilingual support, multi-tenancy, and more
    - Using tools such as SharePoint Composer and SharePoint Maestro to configure and deploy SharePoint 2010
    - And most of all, avoiding common pitfalls for installation and deployment

    What is better than all of that? Even more exciting, the presenters will be our very own SharePoint MVPs Gary Lapointe and Paul Stork. If you don't know who these guys are, you should definitely check out their blogs and their contributions to the SharePoint community. To get more information and register, click here: REGISTER

    Other great links to information in this post:

    - ShareSquared, Inc
    - Gary Lapointe's Blog
    - Paul Stork's Blog
    - SharePoint Composer

    Check it out and get up to speed from some of the best in the industry. Michael

    Read the article

  • links for 2011-02-03

    - by Bob Rhubart
    - Webcast: Reduce Complexity and Cost with Application Integration and SOA. Speakers: Bruce Tierney (Product Director, Oracle Fusion Middleware) and Rajendran Rajaram (Oracle Technical Consultant). Thursday, February 17, 2011. 10 a.m. PT / 1 p.m. ET. (tags: oracle otn soa fusionmiddleware)
    - William Vambenepe: The API, the whole API and nothing but the API. William asks: "When programming against a remote service, do you like to be provided with a library (or service stub) or do you prefer 'the API, the whole API, nothing but the API?'" (tags: oracle otn API webservices soa)
    - Gary Myers: Fluffy white Oracle clouds by the hour. Gary says: "Pay-by-the-hour options are becoming more common, with Amazon and Oracle getting even more intimate in the next few months. Yes, you too will be able to pay for a quickie with the king of databases (or queen if you prefer that as a mental image)." (tags: oracle otn cloudcomputing amazon ec2)
    - Conversation as User Assistance (the user assistance experience). "To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies." -- Ultan O'Broin (tags: oracle otn enterprise2.0 userexperience)
    - Webcast: Oracle WebCenter Suite – Giving Users a Modern Experience. Thursday, February 10, 2011. 11 a.m. PT / 2 p.m. ET. Speakers: Vince Casarez, Vice President of Enterprise 2.0 Product Management, Oracle; Erin Smith, Consulting Practice Manager – Portals, Oracle; Robert Wessa, Consulting Technical Director, Enterprise 2.0 Infrastructure, Oracle. (tags: oracle otn enterprise2.0 webcenter)

    Read the article

  • Oh snap! My RPi was upgraded to 512MB! Woo-hoo!

    - by hinkmond
    I ordered a Raspberry Pi Model B (256MB) over 4 months ago on backorder. When it finally came, I saw it was upgraded to the new half-a-gig model! Woot! But all was not perfect. Gary C. told me the shipped configuration of the new RPi models didn't have the right firmware for 512MB, and I had to upgrade the start.elf in the /boot directory to recognize all of the 512MB RAM. I did a "free" command, and sure enough saw only 240MB. Sadness. But Gary gave me a copy of his start.elf, which worked after some trial and error. For anyone ordering the new RPi Model B w/512MB, here are the steps to get you going with the full 512MB RAM:

        sudo apt-get update --fix-missing
        sudo apt-get upgrade --fix-missing
        # NOTE: This step takes at least a couple hours on a
        # fast network
        wget https://raw.github.com/raspberrypi/firmware/\
        164b0fe2b3b56081c7510df93bc1440aebe45f7e/boot/\
        arm496_start.elf
        sudo mv /boot/start.elf /boot/orig-start.elf
        sudo mv arm496_start.elf /boot/start.elf
        sudo reboot

        free
                     total       used       free     shared    buffers     cached
        Mem:        497768     210596     287172          0      16892     169624
        -/+ buffers/cache:      24080     473688
        Swap:       102396          0     102396

    So of course this means... (drumroll) there is now 498MB available for the Java Embedded heap!

        java -Xmx400m -version
        java version "1.7.0_06"
        Java(TM) SE Embedded Runtime Environment (build 1.7.0_06-b24, headless)
        Java HotSpot(TM) Embedded Client VM (build 23.2-b09, mixed mode)

    Yeah, baby! Hinkmond

    Read the article

  • Locating Rogue Perl Script

    - by Gary Garside
    I've been trying to source the location of a perl script which is causing havoc on a server which i control. I'm also trying to find out exactly how this script was installed on the server - my best guess is through a wordpress exploit. The server is a basic web setup running Ubuntu 9.04, Apache and MySQL. I use IPTables for firewall, the site runs around 20 sites and the load never really creeps above 0.7. From what i can see the script is making outbound connection to other servers (most likely trying to brute force entry). Here is a top dump of one of the processes: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22569 www-data 20 0 22784 3216 780 R 100 0.2 47:00.60 perl The command the process is running is /usr/sbin/sshd . I've tried to find an exact file name but im having no luck... i've ran a lsof -p PID and here is the output: COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME perl 22569 www-data cwd DIR 8,6 4096 2 / perl 22569 www-data rtd DIR 8,6 4096 2 / perl 22569 www-data txt REG 8,6 10336 162220 /usr/bin/perl perl 22569 www-data mem REG 8,6 26936 170219 /usr/lib/perl/5.10.0/auto/Socket/Socket.so perl 22569 www-data mem REG 8,6 22808 170214 /usr/lib/perl/5.10.0/auto/IO/IO.so perl 22569 www-data mem REG 8,6 39112 145112 /lib/libcrypt-2.9.so perl 22569 www-data mem REG 8,6 1502512 145124 /lib/libc-2.9.so perl 22569 www-data mem REG 8,6 130151 145113 /lib/libpthread-2.9.so perl 22569 www-data mem REG 8,6 542928 145122 /lib/libm-2.9.so perl 22569 www-data mem REG 8,6 14608 145125 /lib/libdl-2.9.so perl 22569 www-data mem REG 8,6 1503704 162222 /usr/lib/libperl.so.5.10.0 perl 22569 www-data mem REG 8,6 135680 145116 /lib/ld-2.9.so perl 22569 www-data 0r FIFO 0,6 157216 pipe perl 22569 www-data 1w FIFO 0,6 197642 pipe perl 22569 www-data 2w FIFO 0,6 197642 pipe perl 22569 www-data 3w FIFO 0,6 197642 pipe perl 22569 www-data 4u IPv4 383991 TCP outsidesoftware.com:56869->server12.34.56.78.live-servers.net:www (ESTABLISHED) My gut feeling is outsidesoftware.com is also under attacK? Or possibly being used as a tunnel. I've managed to find a number of rouge files in /tmp and /var/tmp, here is a brief output of one of these files: #!/usr/bin/perl # this spreader is coded by xdh # xdh@xxxxxxxxxxx # only for testing... my @nickname = ("vn"); my $nick = $nickname[rand scalar @nickname]; my $ircname = $nickname[rand scalar @nickname]; #system("kill -9 `ps ax |grep httpdse |grep -v grep|awk '{print $1;}'`"); my $processo = '/usr/sbin/sshd'; The full file contents can be viewed here: http://pastebin.com/yenFRrGP Im trying to achieve a couple of things here... Firstly i need to stop these processes from running. Either by disabling outbound SSH or any IP Tables rules etc... these scripts have been running for around 36 hours now and my main concern is to stop these things running and respawning by themselves. Secondly i need to try and source where and how these scripts have been installed. If anybody has any advise on what to look for in access logs or anything else i would be really grateful. Thanks in advance

    Read the article

  • Changing default gateway on workstations connected to Windows Domain SBS server

    - by Gary B2312321321
    We have XP workstations connected to a Small Business Server acting as Active Directory / ISA firewall / proxy (no DHCP). Is there a reason why, after installing a second firewall on the network (same subnet etc.), changing the default gateway on the workstations isn't sufficient to route internet traffic through the new firewall? A freshly set-up Linux box connects straight through the alternate firewall with just an IP, default gateway, and DNS settings. Will having ISA still active on the network confuse the process? Are there further config settings deeper down in Windows that need attention? Any ideas or pointers on this would be appreciated. Other info: firewalls tried are Smoothwall and IPCop; it's a small Ethernet network of 40 PCs; I can ping the new firewalls from the workstations; and activating the web proxy on the new firewall and reconfiguring the workstation browsers works fine. The point of the second firewall is that ISA lacks some features necessary for a Linux app. It would be nice to have some redundancy too.

    Read the article

  • How to stop Excel 2003 from loading a Gazillion files

    - by Gary M. Mugford
    One of my soon-to-be-ex-friends got an Excel file from another friend of his and decided to click on it. It started opening all kinds of files from within Excel. Over 200 and still counting when he called me. I told him to go to Task Manager, which showed a LOT of files in the Applications tab, but only one Excel.exe in the Processes tab. Closing it down there closed down Excel. I then CrossLooped in to see if I could give him a helping hand. Each time Excel was re-opened, the mass influx of files started. They were all kinds of files: PDFs, DOCs, JPGs, even some spreadsheets. It looked like the end of Solitaire, with multiple windows opening (XP) and the counter on the lone Excel button on the task bar counting off the files. I did the Task Manager exit routine and went looking for temp files. I CrapCleaned out the system. Made sure I went through the files created in the last hour and deleted anything with a temp anywhere in it. I also deleted the crappy infected/corrupted file from its place on the desktop (yeah, I know, I yelled for 15 minutes on THAT subject). Despite a delousing, the restart of Excel, which complained of a deactivated add-in, would start the cascading windows whether I answered yes or no to that question. Yes, it knew it had a serious crash, but why would it just keep on trying to load the bad file, even when I got rid of it? But here's the real question: WHERE was it loading from? I went through the backup folder and NOTHING was there! So what's the process for starting Excel WITHOUT it trying to do a crash recovery? Sort of makes me feel stupid at times. Thanks for any light you can shed on this issue. GM
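
    Two things worth checking here, offered as general Excel behavior rather than a certain diagnosis: Excel opens every file it finds in its XLSTART folder and in the "At startup, open all files in" folder (Tools | Options | General in Excel 2003), so anything copied there survives temp-file cleanup; and Excel can be started with no add-ins or startup files via its safe-mode switch:

        excel.exe /safe

    If the cascade stops in safe mode, the startup folders or an add-in are the likely source.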

    Read the article

  • Compiling Mono on Fedora 7

    - by Gary
    Trying to install Mono from source on Fedora 7. Running

        # ./configure --prefix=/opt/mono

    works fine, but doing the make

        # make ; make install

    ends up with the following:

        Makefile:93: warning: overriding commands for target `csproj-local'
        ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
        make install-local
        make[6]: Entering directory `/opt/mono-2.6.4/mcs/mcs'
        Makefile:93: warning: overriding commands for target `csproj-local'
        ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
        MCS     [basic] mcs.exe
        typemanager.cs(2047,40): error CS0103: The name `CultureInfo' does not exist in the context of `Mono.CSharp.TypeManager'
        Compilation failed: 1 error(s), 0 warnings
        make[6]: *** [../class/lib/basic/mcs.exe] Error 1
        make[6]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
        make[5]: *** [do-install] Error 2
        make[5]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
        make[4]: *** [install-recursive] Error 1
        make[4]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[3]: *** [profile-do--basic--install] Error 2
        make[3]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[2]: *** [profiles-do--install] Error 2
        make[2]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[1]: *** [install-exec] Error 2
        make[1]: Leaving directory `/opt/mono-2.6.4/runtime'
        make: *** [install-recursive] Error 1

    I've been following the instructions at http://ruakuu.blogspot.com/2008/06/installing-and-configuring-opensim-on.html. This is all in an effort to get OpenSimulator running.

    Read the article

  • Mouse click-focus wanders in vmPlayer 3.0 dual-monitor

    - by Gary M. Mugford
    Previously, a WinXPSP3 session running on a WinXPSP3 host computer ran perfectly fine in a dual-monitor setup. There were no issues with vmPlayer 2.x. BEFORE updating to vmPlayer 3, the following problem cropped up. When clicking in a single monitor, you would get exactly what you expected. However, when the display was stretched across two monitors, the click would register to the left of the mouse cursor; the farther RIGHT you were, the farther left the click would occur. In other words, if you clicked on the system menu of a window in the upper left of the left monitor, you would get the system menu. Move half a screen to your right, and the click would land on an item about a quarter of the way over, rather than where you were clicking. And by going all the way to the far right of the right monitor, you could bring up a right-click menu on the far right of the LEFT monitor. I hope I have described this properly. It's confusing, even in words. In single-monitor mode, everything works perfectly fine. If, instead of using either UltraMon or DisplayFusion, you run a single desktop across both monitors (3200x1600), there are no mousing issues. Unfortunately, having two 1600x1200 monitors, that depth of 1600 makes that hack less than useable. My graphics card won't offer anything resembling 3200x1200. vmPlayer 3.0 did not alleviate the situation. The Microsoft mouse drivers are up to date, and so are the nVidia card drivers. Any ideas?

    Read the article

  • How to setup equivalent USVIDEO.ORG DNS-Proxy on Linux

    - by Gary
    I have a VPS in the USA running Ubuntu. I want to set up something similar to http://www.usvideo.org. Basically, USVIDEO is a DNS service that allows Canadians to access American content like Hulu, Netflix, NBC, etc. (restricted by geographical IP). Here is how I think USVideo does it:

    1. Clients (PS3, XBOX, PC) specify the DNS server(s) listed on USVIDEO.org's website.
    2. If the DNS request is for a video/audio site such as Netflix or Pandora, forward the request to a proxy. Otherwise, for all other requests, forward it to a different DNS server.
    3. If a specific video/audio URL is requested, return the address of the proxy server, which in turn relays traffic to the destination video/audio domain via the U.S. gateway, so that the access appears to come from a U.S. IP address.
    4. Once the DNS request has passed the U.S. IP address check, their proxy server steps out of the loop and lets the video streaming site contact you directly to start the video stream.

    This trick relies on the way the video streaming sites check the country of your IP address once up front, but don't actually check the country of the destination IP address while the video is streaming. What is elegant about this solution is that a VPN tunnel is not required to bypass geographical IP checks from certain websites. All that is required on the client side is to specify the DNS server (the VPS). If a certain site is geographically locked, just forward the traffic to a proxy, and that's it. These sites can be specified in the DNS entries, or perhaps in the proxy service to redirect the DNS request to its own proxy. I believe what I need to set up something similar is Squid Proxy, IPTables, and DNS. What I need help with is how exactly to approach this. Would Squid Proxy be set up as a transparent proxy? (See the sketch below.)
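
    A minimal sketch of the selective-DNS piece using dnsmasq (domains and IPs are placeholders; 203.0.113.10 stands in for the VPS): answer lookups for the geo-fenced services with the proxy's address, and pass everything else to a normal upstream resolver. The Squid side would then need to relay those connections onward, e.g. as a transparent/intercepting proxy keyed on the requested host:

        # /etc/dnsmasq.conf
        # forward ordinary lookups to a regular upstream resolver
        server=8.8.8.8
        # answer lookups for geo-restricted services with the US proxy's address
        address=/netflix.com/203.0.113.10
        address=/hulu.com/203.0.113.10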

    Read the article

  • How to black out second monitor when playing a full screen game?

    - by Gary
    I've got two monitors. When I play a full screen game in my primary monitor, the second monitor still shows the Windows desktop. How can I black out the second monitor so that it doesn't show anything at all? My graphics card is from ATI. I'm running Windows 7 in Boot Camp on an iMac. I don't know if this makes a difference, however, as any solution to my question will do, really. For instance, is there a tiny app that I could run to disable the second monitor for a while? I'd rather not have to open Windows display settings every time I run a game and disable the monitor that way, though.

    Read the article

  • Is On-The-Fly string replacement possible using GreaseMonkey and Firefox

    - by Gary M. Mugford
    I have looked for means to stop Brightcove videos from autostarting in Firefox and have come to the conclusion that it isn't possible without external programming via something like GreaseMonkey. However, I'm not proficient in JavaScript, let alone GM. So I thought I'd ask here first whether what I want to do is feasible, or whether it's a fool's errand. What I want to accomplish is to have a site-specific script executed to replace a string value on the fly in that site's code. Specifically, what I am looking for is something GM-style that would do this:

        if site_domain = 'www.SiteWithAutoPlayVideos.com' then
          replace_all('<param name="autoStart" value="true" />',
                      '<param name="autoStart" value="false" />');

    Having looked through Super User for anything GreaseMonkey-related, I see notices that the sandbox GM executes scripts in has to remain separate for security reasons, so I suspect I might be in for disappointment. BUT if it is accomplishable and somebody here can confirm it, then I will do my best to struggle through the learning curve and get this noisome little problem put to rest. Yes, I have tried Flash Block and FlashDisable in order to attack this issue, with no avail. Thanks in advance for your time.
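
    As a feasibility sketch only (the domain is the placeholder from the question, and timing is the weak point, since the plugin may read the param before the script fires): a userscript doesn't need to rewrite the page source as a string; it can flip the already-parsed param elements in the DOM:

        // ==UserScript==
        // @name     Brightcove autostart off (sketch)
        // @include  http://www.SiteWithAutoPlayVideos.com/*
        // ==/UserScript==
        var params = document.querySelectorAll('param[name="autoStart"]');
        for (var i = 0; i < params.length; i++) {
          params[i].setAttribute('value', 'false');
        }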

    Read the article

  • Add Route for machine in same DC

    - by gary
    My routing table on my machine with IP 46.84.121.243 currently looks like this:

        Network Destination  Netmask          Gateway        Interface      Metric
        0.0.0.0              0.0.0.0          46.84.121.225  46.84.121.243  21
        46.84.121.224        255.255.255.224  On-link        46.84.121.243  276
        46.84.121.239        255.255.255.255  On-link        46.84.121.243  21
        46.84.121.243        255.255.255.255  On-link        46.84.121.243  276
        46.84.121.255        255.255.255.255  On-link        46.84.121.243  276

    I'm trying to access 46.84.121.239, which is my other machine in the same DC, but my guess is the first rule is blocking it, as traffic is trying to go via the gateway and failing:

        Tracing route to [46.84.121.239] over a maximum of 30 hops:
        1 OWNEROR-9O83HBL [46.84.121.243] reports: Destination host unreachable.
        Trace complete.

    I'm doing all this via RDP, and I already tried changing the metric on the persistent rule, with devastating consequences! Here's the persistent rule (working):

        Persistent Routes:
        Network Address  Netmask  Gateway Address  Metric
        0.0.0.0          0.0.0.0  46.84.121.225    1

    Any help in getting access to 46.84.121.239 would be very welcome. Thanks very much.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >