Search Results

Search found 937 results on 38 pages for 'al akhfiya'.

Page 27 of 38

  • HRC BEST PRACTICE TOUR: the Rome stop on 28/05

    - by Claudia Caramelli-Oracle
    Guest post by Paola Provvisier, Master Principal Sales Consultant - Oracle. On 28 May, the meeting entitled Compensation & Benefit – Welfare Aziendale was held at Banca del Mezzogiorno - Mediocredito Centrale, organized by HRComunity Academy as part of its HRC BEST PRACTICE TOUR initiative and sponsored by Oracle. The day brought together several major banking and industrial companies, with around 30 specialists from the HR - Compensation & Benefit area taking part. The talks given over the course of the day focused on sharing best practices and ongoing initiatives among the participating companies, with particular attention to proposals on the 'Flexible Benefit' theme. Oracle, as sponsor of the day, opened with a Technical Overview of current labour-market scenarios and the technological evolution of the Oracle HCM Cloud platform, with particular attention to innovations supporting Compensation. The day, which drew appreciation and lively interest from all participants, with a productive and well-attended question-and-answer session after the various talks, closed with a pleasant buffet. More photos of the event are available on the HRC Facebook Page.

    Read the article

  • Oracle Applications Strategy Day

    - by Oracle Aplicaciones
    On 8 November, Oracle and ESIC held the latest edition of the Oracle Applications Strategy Day roadshow, which visited Barcelona, Madrid, Valencia and Seville in collaboration with our partners: Arin Innovation, GFI, Golive, Neteris, Oracle+Cerca, Qualita, SDG Consulting, Steltix, Steria, Tactic and Vass. The event assessed the impact of changes in the business, the growing volatility of information, and the latest technologies in the current climate. Through its exclusive format of talks + panel discussions + one-to-one advisory sessions, every attendee had the opportunity to share experiences and best practices with industry experts as well as with the other attendees. The presentations can be accessed through the following links. - The Company of the Future: New technologies and their integration with Marketing and corporate Strategy. Professor Javier G. Recuenco. - Applications Strategy for SMEs. Ricardo Martinez, Director of Applications for the midsize market. Video summary of the event:

    Read the article

  • Code Analysis Rule Sets in Visual Studio 2010

    - by Anthony Trudeau
    Microsoft Visual Studio 2010 introduces the concept of rule sets when configuring code analysis. This is a valuable change from Visual Studio 2008 that I didn't even realize I wanted. Visual Studio 2008 by default selected all rules, and you then had to remove rules on an item-by-item basis. The rule sets fall into logical groups, including "Microsoft All Rules", "Microsoft Basic Correctness Rules", "Microsoft Security Rules", et al. Within the project properties you can select one rule set or multiple rule sets, or you can define your own rule set based upon another. Selecting a single rule set is obviously the easiest option. The default rule set when you create a new project is "Microsoft Minimum Recommended Rules". However, in my opinion the recommended rules are just too permissive. For that reason you might want to change your rule set to "Microsoft All Rules" until you get around to creating your own rule set; alternatively, you can select multiple rule sets, which is an option in the rule set combo box. The Visual Studio documentation has comprehensive help on what is contained within the rule sets. Creating your own rule set is easy, if not obvious. You need to start a rule set from an existing rule set. To get started, select a rule set in the combo box within the Code Analysis tab of the project properties. I selected "Microsoft All Rules" for my rule set, but you may find it easier to start with "Microsoft Minimum Recommended Rules" if your rules are on the more permissive side. Once your rule set is selected, click the Open button. This will display a dialog that is similar in composition to the rules selection from Visual Studio 2008. Browsing through the tree view you can select or deselect individual rules within their categories, and you can indicate that rules should be flagged as errors instead of the default, which is a warning. A nice touch to the form is that you get a help pane when you select an individual rule. That helped me considerably when I first configured my rule set. Once you have finished selecting your rules, click the Save tool button, specify a location and name, and click the Save button on the Save As dialog. Once you're back on the Code Analysis tab, choose the Browse option within the combo box and open the file you just created.
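    For illustration, a minimal rule set file of the kind the Save As dialog produces might look like the sketch below. The rule set name, the Include path, and the rule IDs are examples of mine, not a recommendation; yours will depend on the rule set you started from and the rules you toggled.
      <?xml version="1.0" encoding="utf-8"?>
      <RuleSet Name="My Team Rules" Description="All rules, with a couple of overrides." ToolsVersion="10.0">
        <!-- Start from an existing Microsoft rule set and override individual rules -->
        <Include Path="allrules.ruleset" Action="Default" />
        <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
          <!-- CA1062 (validate arguments of public methods) escalated to an error -->
          <Rule Id="CA1062" Action="Error" />
          <!-- CA1704 (identifiers should be spelled correctly) switched off -->
          <Rule Id="CA1704" Action="None" />
        </Rules>
      </RuleSet>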

    Read the article

  • Cannot boot into system after deleting partition

    - by Clayton
    Okay... so this was kind of a stupid thing for me to do, now that I think back on it. I was experiencing a ton of lag and not as much usable memory as I expected after installing Ubuntu 12.04. So, after remembering I had installed multiple server versions of Ubuntu 12.04 by mistake, I went into Disk Management and proceeded to delete each and every one. Everything went fine. Up until this week, I had not experienced any problems. But starting yesterday I began to get lag just as I had before, and nothing fixed the problem. I decided to remove the Ubuntu partition, since I was also experiencing a visual error when given the option to select one to boot (the screen doesn't come up at all, and I received a monitor resolution error instead, but could still access both Windows and Ubuntu via arrow keys). After deleting the Ubuntu partition, so that I could see if running just Windows would fix the problem, I proceeded with what I was doing, installing a few programs that were not tied to my predicament in any way. Upon rebooting my desktop, however, I received the following error: error: unknown filesystem. grub rescue> Hoping I could boot into Ubuntu via a pendrive and possibly back up my important files and wipe the hard drive to start fresh, I installed Ubuntu 13.04, but even that does not boot. Instead, I get this message on a terminal screen: SYSLINUX 4.06. EDD 4.06-pre1 Copyright (C) 1994-2011 H. Peter Anvin et al ERROR: No configuration file found No DEFAULT or UI configuration directive found! boot: So, more or less, my desktop is screwed. I need to be able to get to the files inside because of my job as an artist, as well as retrieve the documents for my stories stored on Windows. Once I can succeed in solving this once and for all, I know for a fact I will stick to Ubuntu only, and install whatever is required to run any Windows applications I used to use or need to use. I would rather not reformat the hard drive; if I need to, it is a last resort. And I doubt I can use a Windows Recovery Disk to get my files back, as my mom has thrown out a lot of the installation disks and paperwork I would need to even follow through with that. :\ Keep in mind that I am a novice/newbie when it comes to Linux, but I am hoping to become better at it as time goes by. I appreciate any help you guys can give me. This will probably be the last time I attempt to do anything that could risk the well-being of my PC. (I've also looked through various questions on the site and tested a lot of the solutions. None seem to have worked.)
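    For anyone in the same spot: the SYSLINUX "No configuration file found" message usually points at a badly written USB stick rather than at the hard disk, so re-creating the live USB with a different tool is the first step. Once a live session boots, a file-rescue sketch looks something like this (the device name is an assumption; check yours with the fdisk listing first):
      # from the live session ("Try Ubuntu"), find and mount the Windows NTFS partition
      sudo fdisk -l                         # identify partitions; /dev/sda2 below is an assumption
      sudo mkdir -p /mnt/win
      sudo mount -t ntfs-3g /dev/sda2 /mnt/win
      # copy the important files to an external drive before touching the bootloader
      cp -r /mnt/win/Users /media/backup/   # /media/backup is a hypothetical mount point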

    Read the article

  • Splitting an MP4 file

    - by Asaf Chertkoff
    What is the fastest and least resource-consuming method for splitting an MP4 file? @Alex: it didn't work, I don't know why. See the output here: asafche@asafche-laptop:~$ ffmpeg -vcodec copy -ss 0 -t 00:10:00 -i /home/asafche/Videos/myVideos/MAH00124.MP4 /home/asafche/Videos/myVideos/eh.mp4 FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1.1, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --extra-version=4:0.5.1-1ubuntu1.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 1 / 52.20. 1 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libavfilter 0. 4. 0 / 0. 4. 0 libswscale 0. 7. 1 / 0. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Mar 31 2011 18:53:20, gcc: 4.4.3 Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) -> 59.94 (60000/1001) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/asafche/Videos/myVideos/MAH00124.MP4': Duration: 00:15:35.96, start: 0.000000, bitrate: 5664 kb/s Stream #0.0(und): Video: h264, yuv420p, 1280x720, 59.94 tbr, 59.94 tbn, 119.88 tbc Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16 Output #0, mp4, to '/home/asafche/Videos/myVideos/eh.mp4': Stream #0.0(und): Video: libx264, yuv420p, 1280x720, q=2-31, 90k tbn, 59.94 tbc Stream #0.1(und): Audio: 0x0000, 48000 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Unsupported codec for output stream #0.1 It says something about a different frame rate...
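    For what it's worth, the last line of that output is the real failure: -vcodec copy only copies the video stream, so ffmpeg tries to re-encode the audio with a default encoder this build doesn't support. A hedged fix is to copy the audio stream as well (same paths as above; exact option placement varies between ffmpeg versions):
      # copy both streams instead of re-encoding; -ss/-t select the first ten minutes
      ffmpeg -i /home/asafche/Videos/myVideos/MAH00124.MP4 -ss 0 -t 00:10:00 \
             -vcodec copy -acodec copy /home/asafche/Videos/myVideos/eh.mp4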

    Read the article

  • Is There a Real Advantage to Generic Repository?

    - by Sam
    I was reading through some articles on the advantages of creating Generic Repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once: IRepository repo = new EfRepository(); // Would normally pass through IOC into constructor var c1 = new Country() { Name = "United States", CountryCode = "US" }; var c2 = new Country() { Name = "Canada", CountryCode = "CA" }; var c3 = new Country() { Name = "Mexico", CountryCode = "MX" }; var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" }; var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" }; var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" }; repo.Add<Country>(c1); repo.Add<Country>(c2); repo.Add<Country>(c3); repo.Add<Province>(p1); repo.Add<Province>(p2); repo.Add<Province>(p3); repo.Save(); However, the rest of the implementation of the Repository has a heavy reliance on Linq: IQueryable<T> Query(); IList<T> Find(Expression<Func<T,bool>> predicate); T Get(Expression<Func<T,bool>> predicate); T First(Expression<Func<T,bool>> predicate); //... and so on This repository pattern worked fantastically for Entity Framework, offering pretty much a one-to-one mapping of the methods available on DbContext/DbSet. But given the slow uptake of Linq by data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext? I attempted to write a PetaPoco version of the Repository, but PetaPoco doesn't support Linq Expressions, which makes a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and utilize it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate. Is the Generic Repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework? Edit: The original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).
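    To make the "specific repository" alternative concrete, here is a rough sketch of what the PetaPoco version ends up looking like once the predicate methods are dropped; the class and method names are my own illustration, not from the linked article:
      // A narrow, intention-revealing repository instead of a generic, LINQ-based one.
      // Uses PetaPoco's Database.SingleOrDefault<T>/Insert; names here are illustrative.
      public class CountryRepository
      {
          private readonly PetaPoco.Database _db;
          public CountryRepository(PetaPoco.Database db) { _db = db; }

          public Country GetByCode(string countryCode)
          {
              // the "where" clause lives inside the repository, not in a caller-supplied Expression
              return _db.SingleOrDefault<Country>("WHERE CountryCode = @0", countryCode);
          }

          public void Add(Country country) { _db.Insert(country); }
      }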

    Read the article

  • Goodbye, Spreadsheets and Hello Modern ERP

    - by Christine Randle
    By: Steve Cox, Vice President, Oracle Accelerate for Midsize Companies. Signs of the resurging economy continue to sprout, with green shoots rising across different sectors and industries. With the economy on the rebound, businesses are increasing their investment in technology to keep up with growth and evolving demands; as proof, Gartner recently increased its worldwide IT spending forecast for 2012 to $3.6 trillion, anticipating a 3 percent increase over 2011 spending. One of the segments most reliant on technology to catapult growth is midsize companies – established businesses leveraging every competitive efficiency and advantage to compete with much larger enterprises. We find that to compete against the big guys, they need to create an internal technology infrastructure to fuel that growth. Goodbye, spreadsheets and hello modern ERP. While many businesses postponed upgrading or replacing financial and HR management systems during the recession, some have now started dusting off RFPs and revisiting technology options. Years ago, midsize organizations used spreadsheet-based systems and processes to manage employees, customers, partners, products and revenue. We've found that as companies scale up, they are apt to avoid heavily customizing their existing systems, and instead are more prone to standardize on a modern, enterprise-class ERP system. Modern ERP platforms enable growing companies to immediately address the most pressing challenges – accounting, talent management, customer retention, et al. Midsize companies implement these systems and processes to help them earn more, go public or expand globally. And today, choice is a primary factor when selecting an ERP solution. Businesses have more deployment options now than ever before, depending on their unique structures and needs. Whether the preference is on demand, cloud, hosted or on premise, a modular, scalable deployment is available to meet the need. With modern ERP systems, businesses that once struggled to do more with fewer resources have access to the same quality tools as larger competitors. By adopting top-tier ERP systems tailored to individual business needs, midsize companies can support business operations while creating an enterprise system that seamlessly scales up to fuel future growth. That means the ERP decision your company makes today will have legs to serve your business for years to come.

    Read the article

  • Unable to install ffmpeg-php

    - by matt74tm
    I followed the instructions on http://www.mysql-apache-php.com/ffmpeg-install.htm but ffmpeg-php does not show up in my phpinfo() The commands I ran (in order) #yum install ffmpeg ffmpeg-devel ... Public key for faac-1.26-1.el5.rf.x86_64.rpm is not installed #rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm ... 1:rpmforge-release ########################################### [100%] #yum install ffmpeg ... Complete! #wget http://space.dl.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2 ... #tar -xjf ffmpeg-php-0.6.0.tbz2 #cd ffmpeg-php-0.6.0 #phpize ... configure: error: ffmpeg headers not found. Make sure ffmpeg is compiled as shared libraries using the --enable-shared option #yum install ffmpeg-devel ... Complete! #./configure ... config.status: creating config.h #make ... Build complete. Don't forget to run 'make test'. #make install Installing shared extensions: /usr/local/lib/php/extensions/no-debug-non-zts-20090626/ #ls -al /usr/local/lib/php/extensions/no-debug-non-zts-20090626/ ... -rwxr-xr-x 1 root root 185285 Sep 20 03:36 ffmpeg.so* ... #nano /usr/local/lib/php.ini In which I put these two lines at the end of the php.ini file [ffmpeg] extension=ffmpeg.so Then, #service httpd restart But phpinfo() still does not show any 'ffmpeg' section. This is the correct php.ini because: #php -i | grep php\.ini Configuration File (php.ini) Path => /usr/local/lib Loaded Configuration File => /usr/local/lib/php.ini
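    A hedged checklist for this symptom (the usual culprits are an extension_dir that doesn't match where the .so landed, or Apache reading a different php.ini than the CLI):
      php -i | grep extension_dir    # should match /usr/local/lib/php/extensions/no-debug-non-zts-20090626
      php -m | grep -i ffmpeg        # does the CLI load the extension at all?
      # if the CLI loads it but the web server doesn't, compare the "Loaded Configuration File"
      # row of a phpinfo() page served over HTTP with the CLI output above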

    Read the article

  • How can we configure the Bitnami Joomla stack to open a socket on startup?

    - by bobo
    I have deployed the Bitnami Ubuntu Joomla! 3.1.5-2 (64-bit) stack on Amazon Cloud: http://bitnami.com/stack/joomla/cloud/amazon By default, the stack is configured to run PHP using PHP-FPM. I have no problem getting Joomla and phpMyAdmin running as virtual hosts on Apache. But now, I would like to add another virtual host. The problem I am having is that I have no idea how to get the system to create a socket on startup in the following folder: bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ ls -al total 12 drwxr-xr-x 2 root root 4096 Nov 3 20:43 . drwxr-xr-x 4 root root 4096 Oct 9 15:39 .. srw-rw-rw- 1 root root 0 Nov 3 20:43 joomla.sock -rw-r--r-- 1 root root 4 Nov 3 20:43 php5-fpm.pid srw-rw-rw- 1 root root 0 Nov 3 20:43 phpmyadmin.sock srw-rw-rw- 1 root root 0 Nov 3 20:43 www.sock bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ I have the following /opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf file: [mywebsite] listen=/opt/bitnami/php/var/run/mywebsite.sock include=/opt/bitnami/php/etc/common-dynamic.conf include=/opt/bitnami/apps/mywebsite/conf/php-fpm/php-settings.conf pm=dynamic As can be seen, listen points to mywebsite.sock, which does not currently exist. As an experiment, I removed the .sock files in the /opt/bitnami/php/var/run folder, and they came back on reboot. So how can we configure it to open a socket for mywebsite on startup?
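    In case it helps: the .sock files are created by PHP-FPM itself when it loads a pool, so the usual fix is to make sure the new pool file is actually included by the main FPM configuration and then restart FPM. A sketch using the stock Bitnami control script (paths are the Bitnami defaults and may differ):
      # is the mywebsite pool referenced anywhere in the main FPM config tree?
      grep -r "mywebsite" /opt/bitnami/php/etc/
      # restart PHP-FPM so it re-reads its pools and creates the sockets
      sudo /opt/bitnami/ctlscript.sh restart php-fpm
      ls -al /opt/bitnami/php/var/run/   # mywebsite.sock should now exist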

    Read the article

  • Mac OS X - User home directories shared via NFS

    - by Hugh
    I've run into some problems with how I've got user home directories set up on our system here. Our server is an XServe, using Open Directory to manage the user accounts. The majority of our workstations are OS X, but there are a few running Linux (CentOS 5.3), and, as time goes on, we expect the proportion of Linux workstations to increase (at some point, we expect to move the server side over to Linux too, but for now we're running with what we've already got). To ensure that the Linux and OS X workstations both see users' home directories in the same place, I shared the home directories using NFS. On the server end, the home directories are stored in: /Volumes/data/company_users This is mounted on the workstations to: /mount/company_users This works fine on the Linux workstations, but there is some weirdness under OS X. For the user who is logged in through the GUI, it all works just fine. However, if a user tries to SSH into a machine that they are not the primary user on, they often have no access to their own home directory. It looks as though OS X is trying to do something else to the user home directories mount point when you log in through the GUI.... For example, on this machine (nv001), I (hugh) am logged into the GUI. Last login: Mon Mar 8 18:17:52 on ttys011 [nv001:~] hugh% ls -al /mount/company_users total 40 drwxrwxrwx 26 hugh wheel 840 27 Jan 19:09 . drwxr-xr-x 6 admin admin 204 19 Dec 18:36 .. drwx------+ 128 hugh staff 4308 27 Feb 23:36 hugh drwx------+ 26 matt staff 840 4 Dec 14:14 matt [nv001:~] hugh% So Matt's home directory is accessible to him. However, if I try to switch to him: [nv001:~] hugh% su - matt Password: su: no directory [nv001:~] hugh% Or: [nv001:~] hugh% su matt Password: tcsh: Permission denied tcsh: Trying to start from "/mount/company_users/matt" tcsh: Trying to start from "/" [nv001:/] matt% Does anyone have any idea why it might be doing this? It's causing me all sorts of problems at the moment... The only machine where I can successfully switch users at the moment is the server that the user directories are stored on, where /mount/company_users is actually just a symlink to /Volumes/data/company_users
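    Two things worth checking from a shell when this happens, since the symptoms suggest OS X is mounting something over the NFS mount point at GUI login (both are standard OS X tools; the user name comes from the example above):
      mount | grep company_users                  # is a second mount shadowing /mount/company_users?
      dscl . -read /Users/matt NFSHomeDirectory   # where does Directory Services think matt's home is?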

    Read the article

  • Site to Site VPN with Fault Tolerance

    - by Nordberg
    Hello, I have a situation where I require an IPSEC tunnel between two sites. Site 2 is a small branch office with basic (ADSL) connectivity, and Site 1 is the "main" office with SDSL, plus ADSL for redundancy should the SDSL fail. From Site 1, all traffic bound for the 172.0.0.0 network will then be sent down another IPSEC tunnel to a supplier's Remote Server. See this page for the basic premise (this is a rough idea and things can be moved about etc...) I am considering specifying Cisco ASA devices as the firewalls for both sites for all connections. Would it be possible to employ something like HSRP to provide a backup at Site 1 should the SDSL go down? I suppose the key aim here is that Site 2 can somehow fail over to initiate a VPN to the ASA behind the ADSL at Site 1. I will have a /21 subnet mask on all internet connections so can play with Class C routing if need be... If I'm barking up the wrong tree with HSRP, is there another way I can achieve this without massive expenditure on Barracuda routers et al? Many Thanks.

    Read the article

  • Windows 7 install detects SSD but doesn't list it to install to

    - by Mohamed Meligy
    I'm having quite a weird problem when trying to install Windows 7 SP1 on a new Corsair Force Series 3 SSD to replace a failing HDD in my wife's laptop. When I boot to the Windows install, it shows that I have no disks to install to, and tells me to find it a driver for any custom disks I may have. When I go to the repair option on the first install window and then open a command prompt window, I can see the disk using diskpart, and can partition it and format partitions, and then later access them from the command prompt and copy files to them. After creating partitions, clicking the "browse" button in the Windows install screen that shows no disks available to install Windows to does show the partitions created by diskpart! So, it does detect the disk and partitions, but refuses to list them as options to install to. People on the Interwebs seem to suggest that just running diskpart "clean" solved the issue for most people; just creating an "active" "primary" partition is all most tutorials suggest. Both got me only as far as described above. The BIOS doesn't have a RAID option, and changing between "ATA" and "AHCI" (the only available options) didn't make any difference. Might be worth mentioning that this is on a laptop that has a SATA III controller for the main drive (which I connected the SATA III SSD to), and SATA II for the DVD (which I used for the Windows install media). That's what googling brings up, at least (DELL XPS 15 L502). Any ideas? Update: The SSD is 460 GB. I tried setting it all as one partition and creating a 70-90 GB partition as well (NTFS). More importantly, Windows doesn't list the partition as one it cannot install to (which it does with disks in general when they are too small, for example). What happens here is different. It doesn't list anything at all. It shows an empty list of drives.
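    For reference, the diskpart sequence that the "clean" advice refers to is roughly the following, run from the install media's command prompt (destructive: it wipes the selected disk, and the disk number is an assumption):
      diskpart
      list disk
      select disk 0
      clean
      create partition primary
      active
      format fs=ntfs quick
      exit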

    Read the article

  • snort with barnyard2 not working on Fedora 12

    - by aHunter
    Has anyone come across this error with barnyard2 and snort? --== Initializing Barnyard2 ==-- Initializing Input Plugins! Initializing Output Plugins! Parsing config file "/etc/snort/barnyard2.conf" Log directory = /var/log/barnyard2 database: compiled support for (mysql) database: configured to use mysql database: schema version = 107 database: host = localhost database: user = test database: database name = snort database: sensor name = localhost:eth0 database: sensor id = 1 database: data encoding = hex database: detail level = full database: ignore_bpf = no database: using the "log" facility --== Initialization Complete ==-- ______ -*> Barnyard2 <*- / ,,_ \ Version 2.1.8 (Build 251) |o" )~| By the SecurixLive.com Team: http://www.securixlive.com/about.php + '''' + (C) Copyright 2008-2010 SecurixLive. Snort by Martin Roesch & The Snort Team: http://www.snort.org/team.html (C) Copyright 1998-2007 Sourcefire Inc., et al. WARNING: Ignoring corrupt/truncated waldofile '/var/log/snort/barnyard.waldo' Opened spool file '/var/log/snort/snort.log.1282004944' ERROR: Unknown record type read: 104 Fatal Error, Quitting.. Snort seems to be working correctly as I have managed to get logs via syslog but when I try to use the barnyard config via Unified2 it is not working. Presumably because of the above error. Thanks in advance.
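    The "Unknown record type read: 104" error usually means the spool file isn't in unified2 format; barnyard2 can only parse unified2. A hedged fix is to make sure snort.conf writes unified2 output whose filename prefix matches what barnyard2 is pointed at:
      # in /etc/snort/snort.conf - log in unified2 format for barnyard2 to consume
      output unified2: filename snort.log, limit 128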

    Read the article

  • Problem running mysql client, cannot connect to mysql server

    - by ehsanul
    Edit3: Thanks for the help everyone. Sorry for wasting anybody's time, but it seems like a simple reboot solved it. I should've known better, but I just had the assumption that the "restart" solution is mostly valid just for MS Windows (no offense). I'll keep this in mind before I ask a question here again. I installed the mysql-client-5.0 and mysql-server-5.0 packages on Ubuntu 8.04, using sudo apt-get install. When I try to run the "mysql" command, I get the following error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) To verify that the mysql server is running, I tried this, and it does seem to be running, with the correct socket too: $ ps aux | grep mysql root 13388 0.0 0.0 1772 528 ? S 06:24 0:00 /bin/sh /usr/bin/mysqld_safe mysql 13553 0.0 1.4 127012 15332 ? Sl 06:25 0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock root 13555 0.0 0.0 3008 696 ? S 06:25 0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld ehsanul 16910 0.0 0.0 3092 772 pts/4 R+ 07:17 0:00 grep mysql So I don't understand why I'm getting an error trying to connect to the mysql server. Note that I'm completely new to mysql. Edit: As requested in the comments, the exact command that is returning the error is simply "sudo mysql". And when I check netstat for active network services, I do see an entry for port 3306, with Protocol: tcp, IP Source: 127.0.0.1, State: LISTEN Edit2: It appears as if the /var/run/mysqld/mysqld.sock socket doesn't exist (if I'm interpreting the following output correctly): $ ls -al /var/run/mysqld/ total 0 drwxr-xr-x 2 mysql root 40 2009-08-06 06:36 . drwxr-xr-x 20 root root 860 2009-08-06 06:25 ..
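    For the record, mysqld creates that socket at startup, so restarting the service is the lighter-weight equivalent of the reboot that eventually fixed this (Ubuntu 8.04 uses sysvinit scripts):
      sudo /etc/init.d/mysql restart
      ls -al /var/run/mysqld/   # mysqld.sock should now be listed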

    Read the article

  • What causes this logrotate behavior in Puppet?

    - by ujjain
    After running logrotate, Puppet starts writing its logs into /var/log/puppet/masterhttp.log-20130616. How come it doesn't keep logging to /var/log/puppet/masterhttp.log? The normal behavior is to rename the original log file and start with a clean, fresh log file to write to, keeping the other file as a log archive. [root@puppetmaster puppet]# ls -al total 97520 drwxr-x---. 2 puppet puppet 4096 Jun 16 03:24 . drwxr-xr-x. 12 root root 4096 Jul 1 09:11 .. -rw-r--r--. 1 puppet puppet 0 Jun 16 03:24 masterhttp.log -rw-rw----. 1 puppet puppet 99847187 Jul 1 09:19 masterhttp.log-20130616 [root@puppetmaster init.d]# cat /etc/logrotate.d/puppet /var/log/puppet/*log { missingok notifempty create 0644 puppet puppet sharedscripts postrotate pkill -USR2 -u puppet -f /usr/sbin/puppetmasterd || true [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true endscript } [root@puppetmaster init.d]# How can I make Puppet log to /var/log/puppet/masterhttp.log and not to /var/log/puppet/masterhttp.log-20130616? Even restarting Puppet doesn't make it log into /var/log/puppet/masterhttp.log instead of /var/log/puppet/masterhttp.log-20130616.
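    One reading of the symptom: the daemon keeps its open file descriptor on the renamed file, and the postrotate signal evidently isn't making it re-open masterhttp.log. An alternative that sidesteps signalling entirely is logrotate's copytruncate directive, a sketch of which follows (note that copytruncate can lose a few lines written during the copy, so it is a trade-off):
      /var/log/puppet/*log {
          missingok
          notifempty
          # copy the live file aside, then truncate it in place; no postrotate signal needed
          copytruncate
      }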

    Read the article

  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high traffic and high connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error: % snort -V ,,_ -*> Snort! <*- o" )~ Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux '''' By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team Copyright (C) 1998-2010 Sourcefire, Inc., et al. Using libpcap version 1.1.1 Using PCRE version: 8.11 2010-12-10 Using ZLIB version: 1.2.5 %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/ (snip) 273 out of 1024 flowbits in use. [ Port Based Pattern Matching Memory ] +- [ Aho-Corasick Summary ] ------------------------------------- | Storage Format : Full-Q | Finite Automaton : DFA | Alphabet Size : 256 Chars | Sizeof State : Variable (1,2,4 bytes) | Instances : 314 | 1 byte states : 304 | 2 byte states : 10 | 4 byte states : 0 | Characters : 69371 | States : 58631 | Transitions : 3471623 | State Density : 23.1% | Patterns : 3020 | Match States : 2934 | Memory (MB) : 29.66 | Patterns : 0.36 | Match Lists : 0.77 | DFA | 1 byte states : 1.37 | 2 byte states : 26.59 | 4 byte states : 0.00 +---------------------------------------------------------------- [ Number of patterns truncated to 20 bytes: 563 ] ERROR: Can't find pcap DAQ! Fatal Error, Quitting.. net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
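    The DAQ error is about module discovery rather than live capture: even in -r (read-back) mode, snort 2.9 needs the pcap DAQ module. Two standard snort options help pin this down (the library path below is an assumption; use whatever --daq-list reports):
      snort --daq-list                # shows which DAQ modules snort can find
      snort --daq pcap --daq-dir=/usr/lib/daq \
            -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/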

    Read the article

  • Chmod 644 on /etc/ any way to fix?

    - by DazSlayer
    I tried to tab-complete something and I guess it wasn't there. I know you are not supposed to set the permissions on /etc/ like that, but my permissions seem to be all messed up. whoami prints out cannot find name for user ID 1002 and I cannot cd into /etc/ anymore. passwd and shadow use 640 and 644, so I am not sure why this is a problem. Regardless, is there any way to fix this? The command run was sudo chmod 644 /etc/ I have no name!@vpn-server:/$ whoami whoami: cannot find name for user ID 1002 I have no name!@vpn-server:/$ cd etc bash: cd: etc: Permission denied I have no name!@vpn-server:/$ ls -al etc d????????? ? ? ? ? ? . d????????? ? ? ? ? ? .. d????????? ? ? ? ? ? acpi -????????? ? ? ? ? ? adduser.conf I have no name!@vpn-server:/$ sudo su sudo: can't open /etc/sudoers: Permission denied
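    The underlying problem: directories need the execute (traverse) bit, so mode 644 on /etc locks everyone out, including sudo, which can no longer read /etc/sudoers. With sudo broken, the way back in is a root shell that doesn't go through sudo (recovery/single-user mode, or a live CD with the disk mounted), then roughly:
      # from a root shell obtained without sudo:
      chmod 755 /etc   # restore the traverse bit on the directory itself
      ls -al /etc      # should list normally again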

    Read the article

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes. Background: The server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails. Here's my personal PRO/CON list (regarding removal) so far: PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions. CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother about removing gcc et al. at all. There is a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system. OK, so here are my questions for you: (a) Is my PRO/CON assessment correct? (b) Do you know of other PROs / CONs to removing all compiler tools? Do they weigh in more? (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with? (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? Thank you!
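    As a concrete form of option (d), the root-only variant looks something like this (a sketch; moving packaged binaries will upset the package manager, so restricting permissions in place is the gentler of the two):
      # keep the toolchain installed but unusable by unprivileged users
      chmod 700 /usr/bin/gcc /usr/bin/make
      # or relocate it into a root-only directory (hypothetical path)
      mkdir -m 700 /root/toolchain
      mv /usr/bin/gcc /usr/bin/make /root/toolchain/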

    Read the article

  • Configure Windows firewall to prevent an application from listening on a specific port

    - by U-D13
    The issue: there are many applications struggling to listen on port 80 (Skype, TeamViewer et al.), and for many of them that isn't even essential (in the sense that you can have an httpd running and blocking the http port, and the other application won't even squeak about being unable to open the port). What makes things worse, some of these apps offer no way to configure which ports they use (that's what you get for using proprietary software) - you can either add them to the Windows Firewall exceptions (and succumb to the undesired port-opening behavior) or not (and risk losing most - if not all - of the functionality). Technically, it is not impossible for a firewall to deny an application an incoming port even if the application is in the exception list. And if this functionality is built into the Windows Firewall somewhere, there should be a way to activate it. So, what I want to know is: whether such an option exists, and if it does, how to activate it.
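    For what it's worth, Windows Firewall rules match traffic rather than the bind() call, so they cannot stop an application from listening; a program-scoped block rule is the closest built-in approximation. A sketch with netsh (the program path is an example):
      netsh advfirewall firewall add rule name="Block app on 80" dir=in action=block ^
            program="C:\Program Files\Skype\Phone\Skype.exe" protocol=TCP localport=80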

    Read the article

  • Simple end-to-end load and bottleneck monitoring for DB-based web sites

    - by T.J. Crowder
    What tools do you use / would you recommend for monitoring a Linux-based, DB-based website's servers for bottlenecks and load? The obvious goal is to know when growth has gotten to the point where it's necessary to scale up (or out) one or more of the bits and pieces, because the current system won't manage the load if an observed trend continues. I'm looking for general recommendations based on standard Linux load metrics, disk I/O metrics, network I/O metrics, etc., but if specifics are helpful: It'll be Tomcat6 using APR (possibly with a Varnish or similar caching and balancing front-end), MySQL, and either Ubuntu 8.04 LTS or 10.04 LTS depending on timing. I know about top, vmstat, iostat, bwmon and the like that collect and parse info from the /proc file system (et al.); and obviously MySQL provides a lot of queryable performance information. I could use those directly, probably automating periodic monitoring logs with scripts and such. But I have a suspicion that I'd be reinventing a wheel... For example, Hyperic HQ seems to be along the lines of what I'm looking for. Others? Meta: I tend to think of "recommendation" questions as needing to be CW because there's no one right answer, but I see a lot of these here that aren't CWs, so I haven't marked it as one. I'll happily do so if enough people think I should.
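    The do-it-yourself baseline mentioned above, periodic snapshots from the standard tools via cron, is a one-liner of roughly this shape (the log path and interval are arbitrary choices):
      # crontab entry: every 5 minutes, append load, memory and disk-I/O snapshots
      */5 * * * * (date; vmstat 1 5; iostat -x 1 5) >> /var/log/perf-snapshots.log 2>&1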

    Read the article

  • Zend Optimizer not Functioning Correctly on Plesk 9.3.0 VPS

    - by dallasclark
    I have a new VPS running Plesk 9.3.0 without 'much' modification to any settings. I've moved a site to this VPS and I'm receiving a page full of random 'gibberish' characters like: Zend2003120702116268102798xù Ÿ2½}MŒ%ÇqæCwËg¸„ÖXXZ[ÆùÿCK¢FŠäš’(’¢-ÂÒèu¿zš6gºÇÝ=$Ec:-xá=èàƒÃ ôžL/`,¼'û$èdû$ð ›±OYïUUdfde½á›GâcWTfDdF|‘™‘QÕ_nN‡OÝ›Ÿ/ú9¾¢»"…çÎ =B³øo/=÷…?úúW?·/LX5¯ß½ ðtEÍ ãB„ð÷øìÞéåU®•òÊëZÈi^¿lN/NÎNoÞ›/šÅC׸”šÅLËÏåùÉ+Ü á¸a6Ê÷Ž..ϯrç…Õ–)Õþñòüvsz•{å mî!F³ã[çWsÖZ%k'-ÐÝ<¬þZ1B¡¼ "-ÏîH @/Ü´b.Ï›ù"ü tb¼Ò!”]œ¼ïŠ6–Ál \Ü;½hÎOößh®^“4#…s¡CÀ†æôUèP³Ð§3¦¬“; –j‡ìþb¤÷š»¶³Wçç7÷îÜ…w•bÞs«[ÆÎav,@ÿ´ÜéÖåÌfž¯þVÚlö‹½ÎÛØå#Èoòudñ^÷чW+ÕSsÐý¹w˜7Ÿò«{ò…?<Ìo1»èZÄN_ð³»·îqr÷Vs¾"ýµ¾§þˆ¡v Ù.j†Çï®#{îÞüÞú¿ºý²Q0âLõ$rv¥{»[à|sÝwxþðúy¯)þ • 7ÛŽ È^YËZá‘JV<|·g“l2£{µ«Ù›=é§eCÍîõÖ»ÓÖQtL´D?ε܃ÁªÇ3=ﯸ^=þAIÏjöÐÁ0¡ò¥ 2øÙŸÞçÝÊéqÔ€Lï÷*+Jo¬õLͺFøì x¨ÕìÛ'GH“æådD)ÿ:¨5¼q±¦rÖøLf“Ðj îÅõ¬éa÷[!_zöN?þ"™†á©›0Ý{ˆWóª‘ÁH4µx5+Ë^–Ž›·ÉöŠd1¹Õ¬ phpinfo() shows PHP is running on the Zend engine. This server is unmanaged, so I cannot ask the hosting provider for assistance. Any help, big or small, will be appreciated. [root@vps ~]# php -v PHP 5.1.6 (cli) (built: Mar 31 2010 02:39:17) Copyright (c) 1997-2006 The PHP Group Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies
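    The php -v output is the giveaway: the Zend Engine is listed but Zend Optimizer is not, so Zend-encoded application files are being served raw, which is exactly what that gibberish is. The usual fix is to install Zend Optimizer for the matching PHP version and load it from php.ini, roughly as below (the .so path is an assumption; use the one the Zend installer reports), followed by a web server restart:
      ; in php.ini - load Zend Optimizer so encoded files are decoded before execution
      zend_extension=/usr/local/Zend/lib/Optimizer-3.3.9/php-5.1.x/ZendOptimizer.so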

    Read the article

  • cygwin sshd fails to allocate pty for some users

    - by user115851
    I have (finally) got sshd working under cygwin on Win7 - well, sort of. The sshd runs as user 'cyg_server'. I'm able to successfully ssh to my computer using that same user name. However, if I attempt to ssh using my normal (Windows) user name, it fails trying to allocate a pty for my login session. For example, output of 'sshd -D -d -d -d' contains this .. ... debug1: Entering interactive session for SSH2. debug2: fd 4 setting O_NONBLOCK debug2: fd 5 setting O_NONBLOCK debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug2: session_new: allocate (allocated 0 max 10) debug3: session_unused: session id 0 unused debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_pty_req: session 0 alloc /dev/pty1 !!! chown(/dev/pty1, 17308, 10513) failed: Invalid argument debug1: do_cleanup debug1: session_pty_cleanup: session 0 release /dev/pty1 Currently /dev is owned by my normal account. I've tried changing its ownership to cyg_server as well as SYSTEM. In both cases the problem persists. I've also changed permissions for /dev (e.g, 700 and 777) - again problem persists. [As a side note - it is strange that whenever I do 'ls -al /dev' the ptys do not show up. However, if I 'ls -l /dev/ptyX' for a pty I know to exist, it shows up. Is that normal for cygwin?] -Bob Andover, MA
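    The failing chown suggests the uid/gid pair in the error (17308, 10513) isn't resolvable through Cygwin's account maps. Worth checking with the standard Cygwin utilities (regenerating the maps is routine, but it overwrites manual edits):
      getent passwd 17308   # does this uid resolve to a name?
      getent group 10513    # does this gid resolve to a group?
      # regenerate the maps from the Windows accounts if entries are missing
      mkpasswd -l > /etc/passwd
      mkgroup  -l > /etc/group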

    Read the article

  • fedora apache/nginx pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far... it's been confusing... I'm using EC2 with Fedora 8. Everything is working so far (i.e. I have Pylons/python et al installed, and after creating a test app and running paster serve I can access the default page via my domain name). As the Pylons docs explain, and as I understand, the built-in paster serve server is not suited for a production environment. What I am not clear on, then, is what to do next... It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work. In order to serve an app, does paster serve need to be running? Does nginx/Apache then basically just act as a proxy to shuttle connections to the paster server? How do I start it so it doesn't terminate after closing the ssh connection? If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or, if this is not the right way, how do I differentiate between apps? I am more familiar with MySQL, but willing to negotiate PostgreSQL if it's a better fit. Is it? Is virtualenv a prerequisite to running multiple apps on the same machine? Thanks in advance for any tips.
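    On the proxy question: yes, that is the usual arrangement. paster keeps running (e.g. started with paster serve --daemon so it survives the ssh session) and nginx forwards requests to it. A minimal sketch of one virtual host, assuming port 5000 in that app's development.ini (each app gets its own port):
      server {
          listen 80;
          server_name app1.example.com;   # hypothetical name; one server block per app
          location / {
              proxy_pass http://127.0.0.1:5000;
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }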

    Read the article
