Search Results

Search found 8532 results on 342 pages for 'packet examples'.


  • suddenly can't connect to router

    - by Khoi
    I was downloading some files in Ubuntu when, all of a sudden, the connection cut out and now I can't even connect to my router. The router itself still works fine: my laptop can connect to it wirelessly as usual. But my main computer (which connects to it directly through a cable) can't even ping it. Here is my ipconfig: Windows IP Configuration Host Name . . . . . . . . . . . . : vento Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Local Area Connection: Media State . . . . . . . . . . . : Media disconnected Description . . . . . . . . . . . : Realtek RTL8169/8110 Family Gigabit Ethernet NIC Physical Address. . . . . . . . . : 00-19-DB-4E-6C-56 Ethernet adapter {15B1F740-2F35-4FE4-9FEE-4052AFBAD096}: Media State . . . . . . . . . . . : Media disconnected Description . . . . . . . . . . . : Anchorfree HSS Adapter - Packet Scheduler Miniport Physical Address. . . . . . . . . : 00-FF-15-B1-F7-40

    Read the article

  • Is this a valid implementation of the repository pattern?

    - by user1578653
    I've been reading up about the repository pattern, with a view to implementing it in my own application. Almost all examples I've found on the internet use some kind of existing framework rather than showing how to implement it 'from scratch'. Here are my first thoughts on how I might implement it - could anyone advise me on whether this is correct? I have two tables, named CONTAINERS and BITS. Each CONTAINER can contain any number of BITs. I represent them as two classes: class Container{ private $bits; private $id; //...and a property for each column in the table... public function __construct(){ $this->bits = array(); } public function addBit($bit){ $this->bits[] = $bit; } //...getters and setters... } class Bit{ //some properties, methods etc... } Each class will have a property for each column in its respective table. I then have a couple of 'repositories' which handle things to do with saving/retrieving these objects from the database: //repository to control saving/retrieving Containers from the database class ContainerRepository{ //inject the bit repository for use later public function __construct($bitRepo){ $this->bitRepo = $bitRepo; } public function getById($id){ //talk directly to Oracle here to load all column data into the object //get all the bits in the container $bits = $this->bitRepo->getByContainerId($id); foreach($bits as $bit){ $container->addBit($bit); } //return an instance of Container } public function persist($container){ //talk directly to Oracle here to save it to the database //if its ID is NULL, create a new container in database, otherwise update the existing one //use BitRepository to save each of the Bits inside the Container $bitRepo = $this->bitRepo; foreach($container->bits as $bit){ $bitRepo->persist($bit); } } } //repository to control saving/retrieving Bits from the database class BitRepository{ public function getById($id){} public function getByContainerId($containerId){} public function persist($bit){} } Therefore, the code I would use to get an instance of Container from the database would be: $bitRepo = new BitRepository(); $containerRepo = new ContainerRepository($bitRepo); $container = $containerRepo->getById($id); Or to create a new one and save to the database: $bitRepo = new BitRepository(); $containerRepo = new ContainerRepository($bitRepo); $container = new Container(); $container->setSomeProperty(1); $bit = new Bit(); $container->addBit($bit); $containerRepo->persist($container); Can someone advise me as to whether I have implemented this pattern correctly? Thanks!

    Read the article

  • Want to make RTS strategy game, good starting point...?

    - by NOLANDI
    Hi everybody! I found this community while researching game development topics, so I decided to register and ask for your opinion. I am not a beginner with computers in general, and I understand the logic of programming languages. I have a little knowledge of C and C++ and have started to learn C#, but I have always been oriented towards other areas of IT - an amateur at game making, but not at IT in general! I am interested in making an RTS strategy video game with an isometric camera view (like the PC game Commandos, for example), so I started to read many books about DirectX, XNA, OpenGL, C# and AI programming, and I want to know what a good starting point for this genre would be. I want to use C# for this project, not C++, since I have always had trouble with C++ because it is too complicated. I see a good opportunity to learn C#, since it is more object-oriented, with much less code and easily readable code. Besides that, I know that C++ is still the more powerful language today. So I want to use DirectX and C# for my project, although that will certainly require more time to learn. I have had trouble finding books on C# and DirectX (I found just one); every other book teaches DirectX with C++! I have read many websites that mention SlimDX, and other articles generally say it is possible to make a good game with the combination of C# and DirectX, but I don't have enough information about all of this, so I became confused! It is really hard to motivate myself to learn C++ from scratch, since I have started C# and I really like it. I haven't tried OpenGL, but I am already familiar with DirectX (some beginner material) and I think it is good and interesting. So I have decided to choose between Unity3D and XNA first. XNA sounds like a reasonable solution, but I have heard that it is already an outdated technology, so I think it is better to hold off on XNA until I have tried to make something with DirectX and C# first. But I don't know - is it even possible to make a strategy game in XNA? Maybe I am wrong. I also found the Unity 3D game engine, where I could program in C#, but many of the examples are written in JavaScript (Digital Tutors videos). I generally want to learn it too, because it allows exporting games to mobile phones, there is a lot of documentation, it is free, etc. I want to find the best way to learn C# and DirectX and make this game, but I don't know if they can be combined. Or maybe it is better to try Unity and XNA first, because I am a beginner? Thank you all! I am eager to hear about your experience.

    Read the article

  • Trying to find resources to learn how to test software [closed]

    - by Davek804
    First off, yes this is a general question, and I'd be perfectly happy to move this to another portion of SE, but I didn't see a more fitting sub. Basically, I am hoping a more experienced QA tester can come along and really fill in some basics for me. So far, websites seem to be sparse in terms of explaining the languages involved, basic practices, etc. So, I'm sorry in advance if this is too general, but towards the end of this post I ask some specific questions if it's just absolutely unacceptable to speak in general terms. I just landed a position as Junior Systems and QA Engineer with a social media startup. Their QA and testing are almost nonexistent, so if I do a good job, I imagine I'll find a lot of bugs and have a secure role in the business. I'm pretty good with the systems aspect of my role, but I need to learn more about the QA and testing aspects. We run hardware that's touchscreen based - the user can use and interact with the devices. So, in terms of my QA role, in the short term, I need to build scripts to test the hardware/software as a 'user' to try to uncover bugs. First off, what language should these scripts be written in? Does anyone have some examples? What about the longer-term 'automated testing'? I'm familiar with regression testing as the developer adds in new features, sure, but the 50,000 other types of testing, not so much. Most of our hardware runs dotnet/C# code, with some of the servers running Java - but I don't expect to need to run tests on the Java side at this point. I hope to meet with one developer today and try to get a good idea of the output from the hardware so that I can 'mock' the data that gets sent to servers, to try to bug-test. Eventually, we will be moving the hardware closer to where I live and work, so that I can test both virtually and on real hardware. A lot of the bugs we're dealing with now are like this: the local server, which the kiosks report their data to, gets updated from the kiosks, but the remote server does not. Or, vice versa: when the user registers on a kiosk, the remote server updates but the local server does not. But yeah, without much more detail, I imagine a lot of this info isn't helpful. I've bought the book "How Google Tests Software", but it's really more about how their software testing is different from Microsoft's. It doesn't teach how to test so much as why their methods are better. Does anyone have a good book that I can buy? An ebook maybe? My local Barnes and Noble had a pretty terrible selection. I also figure a book from 2005 is not necessarily that good either.
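
    For a flavour of what such a test script can look like, here is a minimal, hypothetical sketch using Python's built-in unittest framework - the class name and URLs are invented purely for illustration, and in a .NET/C# shop the same idea would usually be expressed with NUnit or MSTest plus a mocking library:

      import unittest
      from unittest import mock

      # Hypothetical client under test: a kiosk posts a user registration to both
      # the local and the remote server. In the real system this class would live
      # in the product code, not in the test file.
      class KioskClient:
          def __init__(self, http):
              self.http = http

          def register_user(self, user):
              self.http.post("http://local-server/api/users", json=user)
              self.http.post("https://remote-server/api/users", json=user)

      class TestRegistrationSync(unittest.TestCase):
          def test_registration_reaches_both_servers(self):
              http = mock.Mock()
              KioskClient(http).register_user({"name": "alice"})
              posted_urls = [call.args[0] for call in http.post.call_args_list]
              # The bug described above is exactly this assertion failing:
              # one server gets the update, the other does not.
              self.assertIn("http://local-server/api/users", posted_urls)
              self.assertIn("https://remote-server/api/users", posted_urls)

      if __name__ == "__main__":
          unittest.main()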

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files in to Route 53. I've had a quick look at the AWS CLI and AWS Tools for Windows PowerShell but they don't seem to include a zone file import option like the AWS Route53 GUI does. The cli53 utility on the other hand does, but is written in Python and appears to have a series of pre-requisites to get going which I'm having troubles working out for Windows. I can find plenty of examples of setting it up under Linux but only one reference to a PowerShell example here, but it doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool perl script to first convert the zone files to the Route53 CreateHostedZoneRequest XML format and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported I'll be looking at running a script to validate what has been created in Route53 matches with the existing nameserver prior to updating each domains nameserver records - I was planning on whipping something up using the new PS4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post but ServerFault won't allow me to post more than 2 links being a new member; and for this same reason I also can't comment on Vasili's example in the other linked thread)
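
    If the cli53 prerequisites prove too painful on Windows, one alternative worth considering (a sketch only, not a drop-in zone-file importer) is to drive Route 53 directly from the boto3 Python SDK, which installs on Windows with a plain pip install boto3; the zone name and record below are placeholders:

      import uuid
      import boto3  # credentials are picked up from the usual AWS config/environment variables

      route53 = boto3.client("route53")

      # Create a hosted zone (CallerReference must be unique per request).
      zone = route53.create_hosted_zone(
          Name="example.com.",
          CallerReference=str(uuid.uuid4()),
      )
      zone_id = zone["HostedZone"]["Id"]

      # Upsert a single A record; a BIND import would loop over the parsed records here.
      route53.change_resource_record_sets(
          HostedZoneId=zone_id,
          ChangeBatch={
              "Changes": [{
                  "Action": "UPSERT",
                  "ResourceRecordSet": {
                      "Name": "www.example.com.",
                      "Type": "A",
                      "TTL": 300,
                      "ResourceRecords": [{"Value": "192.0.2.10"}],
                  },
              }]
          },
      )
      print("Created zone", zone_id)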

    Read the article

  • Apache: VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported

    - by user45761
    Hi, when I add the line below to /etc/apache2/apache2.conf I get the error below when I restart Apache: Include /usr/share/doc/apache2.2-common/examples/apache2/extra/httpd-vhosts.conf [Mon Jun 14 12:16:47 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results [Mon Jun 14 12:16:47 2010] [warn] NameVirtualHost *:80 has no VirtualHosts This is my httpd-vhosts.conf file: # # Use name-based virtual hosting. # NameVirtualHost *:80 # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. <VirtualHost *:80> ServerName tirengarfio.com DocumentRoot /var/www/rs3 <Directory /var/www/rs3> AllowOverride All Options MultiViews Indexes SymLinksIfOwnerMatch Allow from All </Directory> Alias /sf /var/www/rs3/lib/vendor/symfony/data/web/sf <Directory "/var/www/rs3/lib/vendor/symfony/data/web/sf"> AllowOverride All Allow from All </Directory> </VirtualHost> Any idea? Regards, Javi
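
    This error usually means the configuration ends up with a mix of addresses with and without a port - typically a bare NameVirtualHost * somewhere (for example in /etc/apache2/ports.conf) alongside the *:80 declarations in the included vhosts file, or the same NameVirtualHost declared twice. A hedged sketch of a consistent setup, with paths following the Debian/Ubuntu layout: declare the address once, with the port, and use exactly the same address:port in every vhost.

      # /etc/apache2/ports.conf - declare the name-based address once, with a port
      NameVirtualHost *:80
      Listen 80

      # every vhost then uses exactly the same address:port
      <VirtualHost *:80>
          ServerName tirengarfio.com
          DocumentRoot /var/www/rs3
      </VirtualHost>

    With the declaration moved to ports.conf, the duplicate NameVirtualHost *:80 line in httpd-vhosts.conf should be removed, which should also clear the "has no VirtualHosts" warning.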

    Read the article

  • Connecting Adium to Google Talk with a 2-factor authentication account isn’t working

    - by Robin
    Anyone else having this problem? After turning on 2-factor authentication on my Google Account I stopped being able to log in through Adium (Mac IM client that uses Pidgin’s libpurple for IM). Obviously you need to generate an application-specific password, but these won’t let me log in. Application-specific passwords work with other applications (e.g. Reeder for feeds, and calendaring on my phone). Google specifically mentions Adium in their examples of setting up an application password for Google Talk, so I doubt it’s a generic Adium problem. I can still access Google Talk for this account if I use a talk widget on a Google website (Plus or iGoogle, for example). My bug report to Adium, including a connection log file, is up on their Trac: http://trac.adium.im/ticket/15310 . No activity there though. I also asked around in their IRC channel but no-one else could replicate the problem. If I had to guess, I’d think it was a consequence of me not having a GMail account associated with my Google account. I don’t see exactly why that would cause it, but it seems like a fairly unusual setup that might not have been tested for.

    Read the article

  • PHP-APC Installation

    - by Leo
    Trying to get my head around the way to install the APC cache on PHP 5.3.13. It's a VPS with Apache, configured preferably through WHM/cPanel (although not exclusively). I read a bunch of articles where it was suggested to use FastCGI with APC, as suPHP doesn't do well with opcode caching, and fcgid_module doesn't do it right for APC either. Noted that fcgid_module is a newer package than FastCGI and is what WHM/cPanel installs for you, but OK, that can be solved I guess. Then I read that php-fpm is a much better alternative for managing the PHP processes, especially for APC. OK. Then I realised that php-fpm has been included in the PHP core since 5.3 and got confused. Does that mean I don't have to use FastCGI/fcgid_module (and what should I use instead of them - mod_php or cgi?)? Or does that mean that I still need to get the older FastCGI module, and configure it to use one process per user (or just one process?)? Or would fcgid_module work as well? And how bad would it be to just go with mod_php/APC to avoid the trouble of installing php-fpm and FastCGI (WHM/cPanel supports neither), given that Varnish would serve most of the static content anyway - no PHP process needs to be created for static content. Any examples of FastCGI/fcgid_module/php-fpm/APC configurations would be greatly appreciated as well.

    Read the article

  • What kind of “sysadmin stuff” should I show to students during a talk?

    - by Gregory Eric Sanderaon
    A teacher asked me if I could talk about my job as a Linux sysadmin in his class. The course is called "Introduction to Operating Systems" and I've been given 45 minutes to talk. The students are beginning their second year, so they've had a bit of experience with programming in different languages. What I'd like to do is show a series of hands-on examples of the kinds of things I do on a regular basis. I've already got a few ideas jotted down, but I'm afraid that they might be either too advanced or too simple for the students to appreciate. Another concern is that a topic might take too long to explain and use too much time overall. Here are a few ideas: program deployment using version control (git in my case); filtering Apache logs using grep, awk, uniq and tail; a couple of bash scripts that I've made for various tasks on servers; live monitoring (htop, iotop, iptraf); creating databases and assigning roles in MySQL/PostgreSQL. So, are these ideas any good? Do you have better ideas? Are the ideas too simple, and should I go for more "advanced" stuff?

    Read the article

  • local msmtp and ovh hosting

    - by klez
    I have my personal email hosted on OVH (personal hosting plan) and I'm not able to send mails using msmtp. Here's a typical session ignoring system configuration file /etc/msmtprc: File o directory non esistente loaded user configuration file /home/klez/.msmtprc using account default from /home/klez/.msmtprc host = ssl0.ovh.net port = 465 timeout = off protocol = smtp domain = localhost auth = choose user = federicoculloca%xxxxxxx password = * ntlmdomain = (not set) tls = on tls_starttls = off tls_trust_file = (not set) tls_crl_file = (not set) tls_fingerprint = (not set) tls_key_file = (not set) tls_cert_file = (not set) tls_certcheck = off tls_force_sslv3 = off tls_min_dh_prime_bits = (not set) tls_priorities = (not set) auto_from = off maildomain = (not set) from = federicoculloca@xxxxxxxx dsn_notify = (not set) dsn_return = (not set) keepbcc = off logfile = (not set) syslog = (not set) reading recipients from the command line TLS certificate information: Owner: Common Name: ssl0.ovh.net Organizational unit: Domain Control Validated Issuer: Common Name: OVH Secure Certification Authority Organization: OVH SAS Organizational unit: Low Assurance Country: FR Validity: Activation time: lun 31 gen 2011 01:00:00 CET Expiration time: mer 15 feb 2012 00:59:59 CET Fingerprints: SHA1: F9:DC:41:F9:A2:38:51:9B:56:E4:98:E6:CD:81:31:42:E6:0E:26:6D MD5: FC:EC:F3:8F:28:E4:7E:28:99:89:E6:BB:C9:DF:71:CE <-- 220 ns0.ovh.net ssl0.ovh.net. You connect to mail427.ha.ovh.net ESMTP --> EHLO localhost <-- 250-ssl0.ovh.net. You connect to mail427.ha.ovh.net <-- 250-AUTH LOGIN PLAIN <-- 250-AUTH=LOGIN PLAIN <-- 250-PIPELINING <-- 250-8BITMIME <-- 250 SIZE 109000000 --> AUTH PLAIN xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx <-- 235 ok, go ahead (#2.0.0) --> MAIL FROM:<federicoculloca@xxxxx> --> RCPT TO:<[email protected]> --> DATA <-- 250 ok <-- 250 ok <-- 354 go ahead --> hello world --> . <-- 554 mail server permanently rejected message (#5.3.0) And my configuration # ~/.msmtp # Mostly from Peter Garrett's examples # https://lists.ubuntu.com/archives/ubuntu-users/2007-September/122698.html # Accounts from Scott Robbins' `A Quick Guide to Mutt' # http://home.nyc.rr.com/computertaijutsu/mutt.html account xxxxx host ssl0.ovh.net from federicoculloca@xxxxxx auth on user federicoculloca%xxxxxx password xxxxxx tls on tls_certcheck off tls_starttls off Any idea?

    Read the article

  • How to construct SELinux rules for a Glassfish server

    - by tronda
    I'm running Glassfish 3.1 on a CentOS 6 solution and by default SELinux is enabled. I have installed Sun's JDK version 1.6.0_29 on the server and extracted the Glassfish 3.1.1 to /opt/glassfish-3.1.1 with a link /opt/glassfish pointing to the latest Glassfish version. I've also created a system user named glassfish with a home directory /home/glassfish. When running with SELinux enabled I get all sorts of errors. For instance I'm not able to create the domain. I kind of like the concept of SELinux, and would like to be able to have SELinux enabled. I have the following requirements for the Glassfish server: Listening to port 8080 and 8081 Other ports 7676: JMS 8686: JMX monitoring, 4848: Admin console Forwarding from apache to Glassfish through mod_jk and port 8009 Starting OpenMQ as an separate process which listens to 7676 and it's JMX monitoring port 7776 Able to read and write files at a specified area (different from home directory) Able to use /tmp/ for temporary files I am aware of the audit2allow tool when running in permissive mode, but I struggle with understanding the rules that is generated from this tool, and thought that setting up these rule manually the first time would help me understand the SELinux rules better than the simplistic examples that I've seen so far. Can someone with SELinux experience help me form these SELinux rules with comments describing each part of the rules?
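
    As a hedged starting point (the exact types depend on the policy shipped with CentOS 6, so treat the type and module names below as illustrative), a common workflow is to label the non-standard ports and then let audit2allow draft the remaining rules from the denials logged during a permissive-mode test run, reviewing and commenting the generated .te file before loading it:

      # see which ports the web-related types already cover, then add the missing ones
      semanage port -l | grep http_port_t
      semanage port -a -t http_port_t -p tcp 4848

      # run one full test cycle in permissive mode, then turn the logged denials into a module
      setenforce 0
      # ... start the domain, hit ports 8080/8081, start OpenMQ, exercise the app ...
      grep denied /var/log/audit/audit.log | grep java | audit2allow -M glassfish_local
      cat glassfish_local.te       # review (and comment) the generated allow rules before loading
      semodule -i glassfish_local.pp
      setenforce 1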

    Read the article

  • Apache log rotation: logrotate vs rotatelogs vs chronolog

    - by Enrico
    I have been researching log rotation for my server which hosts ~5 fairly high traffic sites. From what I can tell, my options are to use logrotate or to use piped logging with either rotatelogs or chronolog. logrotate requires a restart of apache and both SIGHUP and SIGUSR1 restarts are less than ideal on high traffic sites, because either you drop a bunch of connections or you need to delay compressing the old log until all child processes have died naturally. Also, downtime can be quite significant if compression is enabled. Would using logrotate - without compression and with graceful restart - and compressing old logs after the fact be the best way to minimize downtime? chronolog and rotatelogs sound promising, but are not well documented. I couldn't find examples of using either in combination with vhost specific logs. The chronolog website says, "when the expanded filename changes, the current file is closed and a new one opened". Is this globally? Or is that per AccessLog, CustomLog or ErrorLog directive? Is there a significant difference between chronolog and rotatelogs?
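
    For comparison, here is a hedged sketch of both approaches - a logrotate stanza that sidesteps the compress-during-restart window with delaycompress and a graceful reload, and a per-vhost piped rotatelogs directive (chronolog is used the same way; each piped logger is its own process per CustomLog/ErrorLog directive, so rotation happens per directive rather than globally). Paths and retention are illustrative:

      # /etc/logrotate.d/apache2 - rotate daily, reload gracefully, compress on the *next* run
      /var/log/apache2/*.log {
          daily
          rotate 14
          compress
          delaycompress
          sharedscripts
          postrotate
              /usr/sbin/apache2ctl graceful > /dev/null
          endscript
      }

      # piped logging instead: no restart at all; put one of these inside each <VirtualHost>
      CustomLog "|/usr/bin/rotatelogs /var/log/apache2/site1.access.%Y-%m-%d.log 86400" combined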

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model and so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are: Some standard packages and libraries such as compilers and databases need to be downloaded and installed. Some custom scientific models need to be downloaded and compiled from source as they are not commonly provided as packages. New users need to be created to manage the databases and run the models. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts. I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality except there are a couple usage cases that I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. We may have to deploy on machines that are isolated from the Internet- i.e. all configuration and set up files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus as it would allow the development team to setup test installations outside of VirtualBox.
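
    As a small illustration of the build-from-source case (a sketch only, written against Fabric 1.x conventions with invented package names and paths - Puppet or Capistrano would express the same steps declaratively or as deploy tasks):

      # fabfile.py - run with: fab -H modelserver build_wave_model
      from fabric.api import run, sudo, cd, put

      def build_wave_model(version="1.2.0"):
          """Copy, compile and install a custom scientific model from source."""
          src = "/tmp/wavemodel-%s" % version
          # On an Internet-isolated host the tarball is copied from the USB key
          # with put() rather than downloaded.
          put("bundles/wavemodel-%s.tar.gz" % version, "/tmp/")
          run("mkdir -p %s" % src)
          run("tar -xzf /tmp/wavemodel-%s.tar.gz -C %s --strip-components=1" % (version, src))
          with cd(src):
              run("./configure --prefix=/opt/wavemodel")
              run("make -j4")
              sudo("make install")

      def install_cron():
          """Install the forecast crontab for the model user."""
          put("config/forecast.cron", "/tmp/forecast.cron")
          sudo("crontab -u forecast /tmp/forecast.cron")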

    Read the article

  • unlinked libraries in a makefile?

    - by wyatt
    I'm trying to install a libspopc, but when I run the make I get the following output: cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c session.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c queries.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c parsing.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c format.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c objects.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c libspopc.c rm -f libspopc*.a ar r libspopc-0.9n.a session.o queries.o parsing.o format.o objects.o libspopc.o ar: creating libspopc-0.9n.a ranlib libspopc-0.9n.a ln -s libspopc-0.9n.a libspopc.a rm -f libspopc*.so cc -o libspopc-0.9n.so -shared session.o queries.o parsing.o format.o objects.o libspopc.o ln -s libspopc-0.9n.so libspopc.so cc -o poptest1 -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL examples/poptest1.c -L. -lspopc -lssl -lcrypto /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_globallookup': dso_dlfcn.c:(.text+0x2d): undefined reference to `dlopen' dso_dlfcn.c:(.text+0x43): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x4d): undefined reference to `dlclose' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_pathbyaddr': dso_dlfcn.c:(.text+0x8f): undefined reference to `dladdr' dso_dlfcn.c:(.text+0xe9): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_bind_func': dso_dlfcn.c:(.text+0x491): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x570): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_bind_var': dso_dlfcn.c:(.text+0x5f1): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x6d0): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_unload': dso_dlfcn.c:(.text+0x735): undefined reference to `dlclose' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_load': dso_dlfcn.c:(.text+0x817): undefined reference to `dlopen' dso_dlfcn.c:(.text+0x88e): undefined reference to `dlclose' dso_dlfcn.c:(.text+0x8d5): undefined reference to `dlerror' collect2: ld returned 1 exit status make: *** [poptest1] Error 1 A quick search suggested that this was due to libdl being unlinked, though this seems unlikely in a distributed library, particularly a seemingly relatively popular one. Could anything else be causing this? And if it is due to an unlinked library, how would I go about fixing it? Thanks
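
    The undefined dlopen/dlsym/dlclose symbols come from the statically linked /usr/local/lib/libcrypto.a, which on glibc systems also needs libdl; the libspopc code itself is fine. A likely fix (hedged - the exact Makefile variable to edit varies) is simply to append -ldl to the failing link line:

      cc -o poptest1 -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL examples/poptest1.c -L. -lspopc -lssl -lcrypto -ldl
      # or pass it through make, if this Makefile honours LDLIBS for the example programs:
      make LDLIBS=-ldl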

    Read the article

  • Tomcat 6 Windows Server 64 Redirect Connector Fails

    - by Rafe
    So is there some problem with running the Tomcat connectors under a 64 bit windows OS? Here's my configuration: Windows Server 2003 64 bit Intel Xeon Tomcat 6.0.26 JVM 1.6.0 (64bit) ISAPI Redirect Connector 1.2.30.0 (64 bit) Calling the IP address of the site with :8080 brings up the tomcat page so I know that's running and the examples all work so its obviously not having a problem with the JVM. Calling the site ip on port 80 however gives me error 324 - looking at the application log on windows shows "Could not load all ISAPI filters for site/service. Therefore startup aborted". The ISAPI filter page under the web site properties shows the status of this filter to be down with a red arrow. The ISAPI filter name is jakarta and there is a corresponding virtual directory set up in the root of the site pointing to the same directory as the filter. The jakarta web service extension is also pointing to the required dll (c:\program files\apache software foundation\jakarta isapi redirector\bin\isapi_redirect.dll). Incidentally, this same problem occurs when trying to use Tomcat 5.5. I've also tried swapping out various redirect versions. It's really odd because I got it to work once with a version of the redirector that came with Plesk but I've since uninstalled everything to do with plesk and even trying to use the plesk-compiled dll doesn't work now. I am pulling my hair out on this, any ideas?

    Read the article

  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of which you may have noticed yourself) of tasks that are sometimes completed in different time frames: POST; OS installation; hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats; software installation or other tasks (such as benchmarks) within a general-purpose OS (Windows, Linux, etc.). I can imagine this is caused by the fact that a machine is built from many components having to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS complexity and the multiple concurrently running processes have some additional effect as well. However, I'm wondering if this hardware imperfection and overhead is really high enough to be humanly noticeable? Maybe there are other factors that are as influential, or even more so? So, in short - why? To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.

    Read the article

  • Getting AWStats to work in Ubuntu 12.04

    - by koogee
    I'm new to apache and i'm trying to set up AWStats on my ubuntu 12.04 server. I've followed the guide at Ubuntu docs https://help.ubuntu.com/community/AWStats I set it up according to the instructions and awstats is able to generate initial stats from apache log successfully. I placed the links to awstats in the default virtual host file. However when I try to run http://server-ip-address:8080/awstats/awstats.pl, I get: Error: SiteDomain parameter not defined in your config/domain file. You must edit it for using this version of AWStats. Setup ('/etc/awstats/awstats.conf' file, web server or permissions) may be wrong. Check config file, permissions and AWStats documentation (in 'docs' directory). Here is my /etc/apache2/sites-available/default file: <VirtualHost *:8080> ServerAdmin webmaster@localhost DocumentRoot /home/saad/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/saad/www/> Options Indexes FollowSymLinks MultiViews AllowOverride AuthConfig Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /awstatsclasses "/usr/share/awstats/lib/" Alias /awstats-icon "/usr/share/awstats/icon/" Alias /awstatscss "/usr/share/doc/awstats/examples/css" ScriptAlias /awstats/ /usr/lib/cgi-bin/ Options ExecCGI -MultiViews +SymLinksIfOwnerMatch ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> The only three variables I edited in /etc/awstats/awstats.conf are: LogFile="/var/log/apache2/access.log" SiteDomain="server-name.noip.org" HostAliases="localhost 127.0.0.1 server-name.no-ip.org" The apache server works fine and i'm able to access other pages stored on the server. Any guidance would be welcome.

    Read the article

  • "chown mysql:mysql /data/tmp" command

    - by Mellon
    I am on an Ubuntu Linux machine with MySQL installed. On a Ubuntu machine that has MySQL installed, I have seen some people run the following command: sudo chown mysql:mysql /data/tmp I am confused. I know what the command means: it changes the owner of /data/tmp to the 'mysql' user and its group to the 'mysql' group. But (my questions): 1. Why would one run the above command? If I create a table in the my_db database, by default the .frm, .MYD and .MYI data files are created automatically by MySQL under /var/lib/mysql/my_db/. So, does the above command change the default MySQL data directory to /data/tmp/ instead of /var/lib/mysql/my_db/? Basically, I would like to know the purpose and effect of the above command (preferably with examples). 2. Where do the 'mysql' owner and group come from? Does installing MySQL on a Linux machine automatically create the 'mysql' user and group, or do people need to create a mysql account for the Linux machine manually?

    Read the article

  • Looking for cheap Wireless router, with USB for attached USB disk drive

    - by geoffc
    I have an 802.11b router at home (DLink DI-614+, B rev) and it is working perfectly well for me. I want to replace it anyway, since it no longer gets updates, it is now several years old and, heck, I want a new toy to configure! I was trying to decide what to get. I couldn't care less about 802.11g or n support, since B is fast enough, but every device in my house is now B/G, so G would be fine for me; N buys me little to nothing. (The house is small enough that range is a non-issue.) The feature I realized I want is a USB port for sharing an attached USB hard drive: I would like a central device I can store files on, and I do not want to waste the power of an always-running PC for this, so a router seems like the place to go. I would love it if it could support Vonage VoIP as well; then I could ditch the power brick of a second device (I have the small DLink Vonage VoIP box). All the current examples of such a router (with a USB drive port; I have yet to find one with VoIP too!) are in the $100+ range and are N models, which seems silly when a B/G unit is in the $30 range around here.

    Read the article

  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on a FC8 Amazon EC2 image. On occasion Xvfb will crash (unable at the moment to find out the reason for the crash), and after crashing the TCP port will appear to be orphaned. I'm unable to get a PID to kill any process that may be using it. I'm starting Xvfb with: Xvfb :7 -screen 0 1024x768x24 & Examples of what I'm working with are below, the Xvfb port is (was) 6007: # netstat -ap Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:ssh *:* LISTEN 1894/sshd tcp 0 0 *:6007 *:* LISTEN - tcp 0 352 ip-10-84-69-165.ec2.int:ssh c-71-194-253-238.hsd1:51689 ESTABLISHED 2981/0 udp 0 0 *:bootpc *:* 1817/dhclient udp 0 0 *:bootpc *:* 1463/dhclient Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node PID/Program name Path unix 2 [ ] DGRAM 871 668/udevd @/org/kernel/udev/udevd unix 2 [ ACC ] STREAM LISTENING 5385 1880/dbus-daemon /var/run/dbus/system_bus_socket unix 6 [ ] DGRAM 5353 1867/rsyslogd /dev/log unix 2 [ ] DGRAM 11861 2981/0 unix 2 [ ] DGRAM 5461 1974/crond unix 2 [ ] DGRAM 5451 1904/console-kit-da unix 3 [ ] STREAM CONNECTED 5438 1880/dbus-daemon /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 5437 1904/console-kit-da unix 3 [ ] STREAM CONNECTED 5396 1880/dbus-daemon unix 3 [ ] STREAM CONNECTED 5395 1880/dbus-daemon unix 2 [ ] DGRAM 5361 1871/rklogd # lsof -i COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME dhclient 1463 root 3u IPv4 4704 UDP *:bootpc dhclient 1817 root 4u IPv4 5173 UDP *:bootpc sshd 1894 root 3u IPv4 5414 TCP *:ssh (LISTEN) sshd 2981 root 3u IPv4 11825 TCP ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED) Attempting to force the port closed with iptables doesn't seem to work either. iptables -A INPUT -p tcp --dport 6007 -j DROP I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?

    Read the article

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers: -48GB Ram -8GigE NIC's -2FC NIC's -2x72GB RAID1 Hard Drives -Server 2008R2 Host I also Have a Fibre Channel SAN: -16x146GB RAID10 Hard Drives -2xDual-port FC Controllers (Controller A and B both have ports 1 and 2) -Server 1 has Fiber to Ports A1 and B1 -Server 2 has Fiber to Ports A2 and B2 -I kept the default config with 1 Virtual Disk and 1 Volume -The default mappings show ports A1,A2,B1,B2 on LUN 0 with read-write My goal is: -2xVM's with IIS and Guest Level Failover -2xVM's with SQL 2008 Enterprise using a Single DB and Guest Level Failover -1xVM that is an application server, preferable with Host Failover. From what I read, this will also need AD for clustering to work. -I need at least 1 VM always running for IIS and the SQLDB. This includes hardware failover and application (ie: reboot a VM for Critical updates) I was told I could install the VM's and run them from the SAN, and this is what I've tried: Installed MPIO and HyperV on Server1 and Server 2 Added the SAN as Disk E: on both servers, made it GPT and formatted NTFS Configured HyperV on both server to store use E:\VD and E:\VHD On server1, I was able to install 3 VM's on the SAN and all worked well. On server2, I would start installing the other 2 VM's, but always at some point the VM's would get a corrupt .VHD message (either server). Everything I found about the message typically related to antivirus, so I removed all antivirus on both Host servers (now only running 2008R2). I reformatted drive E: (SAN), recreated the VHD and VD directories, installed 3 VM's on Server 1, and then had the same issue when installing VM's on Server2. Obviously something is wrong, but I'm not certain what exactly. My questions: 1) Are my goals possible with this hardware setup? -I've read 2008R2 supports FC SAN's, but a lot of articles seem to only give examples with iSCSCI setups 2) What would be the suggested route on setting up the SAN (disks,volumes,LUN's)? I've worked with HyperV on a single machine before and never had issues. Actual experience working on SAN's and clustering is new to me. Any suggestions or recommendations to get me in the right direction would be much appreciated.

    Read the article

  • How does Subnetting Work?

    - by Kyle Brandt
    How does Subnetting Work, and How do you do it by hand or in your head? Can someone explain both conceptually and with several examples? Server Fault gets lots of subnetting homework questions, so we could use an answer to point them to on Server Fault itself. What is classless routing and why is class-based routing obsolete? If I have a network, how do I figure out how to split it up? If I am given a netmask, how do I know what the network Range is for it? Sometimes there is a slash followed by a number, what is that number? Sometimes there is a subnet mask, but also a wildcard mask, they seem like the same thing but they are different? Someone mentioned something about knowing binary for this? Not looking for links to other sites (unless maybe you have one post with a bunch of good ones). I already know how to subnet, I just thought it would be nice if Server Fault had a generic subnetting answer.
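
    As a companion to the by-hand method, a worked example using Python's standard ipaddress module is handy for checking the arithmetic (the addresses are just examples):

      import ipaddress

      # /26 means 26 network bits: netmask 255.255.255.192 and 2**(32-26) = 64 addresses
      net = ipaddress.ip_network("192.168.1.0/26")
      print(net.netmask)           # 255.255.255.192
      print(net.hostmask)          # 0.0.0.63  <- the "wildcard mask" is just the inverted netmask
      print(net.num_addresses)     # 64
      print(net.network_address, net.broadcast_address)   # 192.168.1.0 192.168.1.63

      # splitting a network: one /24 divides into four /26 subnets
      for subnet in ipaddress.ip_network("10.0.0.0/24").subnets(new_prefix=26):
          print(subnet)            # 10.0.0.0/26, 10.0.0.64/26, 10.0.0.128/26, 10.0.0.192/26

      # membership check: 192.168.1.70 falls in the next /26, not this one
      print(ipaddress.ip_address("192.168.1.70") in net)   # False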

    Read the article

  • How to setup Calendar permissions for group to group

    - by Sorean
    I've been scouring the internet and so far have only been able to find examples of how to grant calendar permissions from one user to another using the Add-MailboxFolderPermission command. This is great, and it was fine when they only had a handful of users, but going forward it's not realistic to set individual calendar permissions on every calendar for each new user. The security groups already created, each with a few people assigned to them, are: Techs, Managers, Admin. What I am trying to accomplish is to set it up so that anyone who belongs to the Managers group can view the calendars of the Techs group, and Admins can view and edit them. I've found an example of adding just the security group name, but I get an error: [PS] C:\Windows\system32> Add-MailboxFolderPermission -Identity Techs:\Calendar -User "Admin" -AccessRights Owner The user "Admin" is either not valid SMTP address, or there is no matching information. + CategoryInfo : NotSpecified: (0:Int32) [Add-MailboxFolderPermission], InvalidExternalUserIdException + FullyQualifiedErrorId : 39352699,Microsoft.Exchange.Management.StoreTasks.AddMailboxFolderPermission Am I creating groups wrong? Am I using the wrong commands? Any guidance would be greatly appreciated.
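
    A hedged sketch of the usual two-step workaround, using standard Exchange cmdlets with the group names from the question: Add-MailboxFolderPermission only accepts a mailbox or a mail-enabled security group for -User (which is why a plain "Admin" security group throws the SMTP-address error), so the groups receiving the rights are mail-enabled first and the grant is then applied to each Techs member's calendar:

      # Mail-enable the groups that will *receive* calendar rights (they must be universal security groups)
      Enable-DistributionGroup -Identity "Managers"
      Enable-DistributionGroup -Identity "Admin"

      # Grant on every Techs member's calendar: Managers read-only, Admin read/write
      Import-Module ActiveDirectory
      Get-ADGroupMember "Techs" | ForEach-Object {
          Add-MailboxFolderPermission -Identity "$($_.SamAccountName):\Calendar" -User "Managers" -AccessRights Reviewer
          Add-MailboxFolderPermission -Identity "$($_.SamAccountName):\Calendar" -User "Admin" -AccessRights Editor
      }

    New members of Managers or Admin then pick up the rights automatically through the group; the loop only needs re-running when someone new joins Techs.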

    Read the article
