Search Results

Search found 502 results on 21 pages for 'outdated'.

Page 15 of 21

  • Media keys play/pause globally worked in 12.10, not in 13.10

    - by Stéphane Gourichon
    Laptop media keys. On an Asus n55sf laptop there are dedicated keys for volume up, volume down, mute, [play/pause], stop and launch (plus a dozen Fn-key combinations). In 12.10 most worked. (Overall it seems unrelated to the desktop environment used; I state it for the sake of completeness.) On Ubuntu 12.10 under XFCE they just worked. That is: when a player like rhythmbox or totem was started, it would alternate between play and pause. Interestingly, if several were started, they would alternate independently. E.g. use the mouse to pause rhythmbox, launch totem, and one hit on the [play/pause] key would pause one and resume the other. The Next, Previous and Stop keys worked as expected in any program. In 13.10 most keys still work, but the play/skip related ones are ignored. On Xubuntu 13.10 (XFCE too) the volume keys work but [play/pause], stop, next and prev are ignored. Not tried on regular Ubuntu 13.10 (Unity). Search before you ask - here are a few facts: https://wiki.ubuntu.com/Hotkeys/Architecture is immutable and mentions Ubuntu 9.10. https://wiki.ubuntu.com/Hotkeys/Troubleshooting is also outdated, as it mentions /usr/share/doc/udev/README.keymap.txt which no longer exists. On both 12.10 and 13.10, at the XFCE level (as visible with xfconf-query or xfce4-settings-manager) there are a couple of shortcuts for keys like XF86Calculator or XF86TouchpadToggle, but nothing related to volume/prev/next/play/stop, which is okay. The XF86Audio substring doesn't appear in /etc (which is normal). Kernel-level test: "showkey -s" on the console shows that the Next, Play/Pause, Previous and Stop keys are keycodes 163, 164, 165 and 166. Nothing relevant in /etc about that. Reports https://bugs.launchpad.net/ubuntu/+source/udev/+bug/1072371 and https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1012365 suggest adjusting things at the udev level. Alas, the udev tutorials I found (e.g. https://wiki.debian.org/udev) don't even mention the keyboard. A thread in French seems to deal with a similar issue: https://forum.ubuntu-fr.org/viewtopic.php?id=1395051. "sudo evtest /dev/input/event3", in X as well as on the plain console, reports events on key presses and repeats, but nothing when pressing those media keys. Is udev a dead end? Questions: How did it work in 12.10? Through udev? Something else? Any other hint?
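
    If the scancodes are reaching the kernel, a udev hwdb keymap override is one avenue worth testing. This is a hedged sketch only: the match-line format changed between udev releases (13.10's udev reads hwdb files such as 60-keyboard.hwdb), and the vendor/product strings and scancodes below are illustrative, not taken from this machine.

        # /etc/udev/hwdb.d/61-asus-media-keys.hwdb -- sketch; take the real
        # scancodes from your own hardware, e.g. via "showkey --scancodes"
        keyboard:dmi:bvn*:bvr*:bd*:svnASUSTeK*:pnN55SF*
         KEYBOARD_KEY_a4=playpause
         KEYBOARD_KEY_a6=stopcd

    Then rebuild and reload the database with "sudo udevadm hwdb --update && sudo udevadm trigger" and re-test with evtest.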

    Read the article

  • Security Controls on data for P6 Analytics

    - by Jeffrey McDaniel
    The Star database and P6 Analytics calculate security based on P6 security, using OBS, global, project, cost, and resource security considerations. If there is some concern that users are not seeing expected data in P6 Analytics, here are some areas to review:
    1. Whether a user has cost security is based on the project-level security privileges - either View Project Costs/Financials or Edit EPS Financials. If costs are expected, make sure one of these privileges is allocated.
    2. The user must have OBS access on a project - not at the WBS level. WBS-level security is not supported, so make sure the user has OBS access at the project level.
    3. Resource access is determined by what is granted in P6. Verify the resource access granted to this user in P6. Resource security is hierarchical; project access will override resource access based on the way security policies are applied.
    4. Module access must be given to a P6 user for that user to come over into Star/P6 Analytics. In earlier versions of RDB there was a report_user_flag on the Users table; this flag field is no longer used after P6 Reporting Database 2.1.
    5. For P6 Reporting Database versions 2.2 and higher, the Extended Schema Security service must be run to calculate all security. After any change to privileges or security, this service must be rerun before the next ETL.
    6. In P6 Analytics 2.0 or higher, a Weblogic user must exist that matches the P6 username. For example, the user Tim must exist in both P6 and Weblogic for Tim to be able to log into P6 Analytics and access data based on P6 security. In earlier versions the username needed to exist in the RPD.
    7. Cache in OBI is another area that can make it seem as if a user isn't seeing the data they expect. While cache can be beneficial for performance in OBI, it can also return older, stale data. Clearing or turning off cache when rerunning a query can determine whether the returned result set came from cache or from the database.
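
    To rule out stale cache quickly (point 7), the BI Server exposes ODBC purge procedures that can be issued through the nqcmd utility. A hedged sketch - the DSN, credentials and file name are illustrative:

        # purge.sql contains the single line:  Call SAPurgeAllCache();
        nqcmd -d AnalyticsWeb -u weblogic -p password -s purge.sql

    If the query then returns the expected rows, the earlier result set was served from cache.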

    Read the article

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store for each of 3 different languages, with about 40 product categories for a few hundred products. [rant] With no prior experience with any PHP e-commerce system, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as apparently it runs on the same server as the production site (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields to the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little to no helpful, clear documentation (unlike CodeIgniter) and various breaking changes between minor point revisions that make it hard to find useful information, Magento 1.4 is a developer killer. [/rant] The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if they have improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought. The client is also open to other e-commerce systems. I've looked at OpenCart and it seems to be quite developer friendly, with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easily porting over the rewards system and ERP integration. What should the client upgrade to in this case?

    Read the article

  • MSFT's new trick to promote IE9: kill IE6 first

    - by anirudha
    Every developer knows the issues with developing for IE6 - and the more they know, the more frustrated they get over the time spent making applications cross-browser compatible. Not long ago MSFT ran a "save IE6" campaign; you can find the reference at http://blogs.msdn.com/b/anna/archive/2009/04/01/save-internet-explorer-6.aspx and the website here: http://www.saveie6.com/. They really made a joke of it - see what they wrote on the page. (And why was the website made in PHP, when they could have made it in ASP.NET or some other technology that reflects Microsoft technology? See http://www.saveie6.com/compare.php.) "High security (many updates)": you can read Wikipedia to find out how secure IE6 really is - I'd say IE6 is very easy to hack. Wikipedia covers it at http://en.wikipedia.org/wiki/Internet_Explorer_6, and for more on the security issues see http://www.google.co.in/webhp?hl=en#sclient=psy&hl=en&site=webhp&q=ie6+security+issues. "Lightweight (no support for silly PNG transparency, etc)": they call PNG silly, but tell me a better format for the internet - there is no better option than PNG or SVG. "More screen space thanks to no tabs": they say this without thinking; if they really cared about screen space, why did they put tabs in IE7, 8 and 9? Conclusion: the IE team researched how to promote IE9 well enough to beat Chrome and Firefox. IE9 has nothing as good as the customization, plug-ins, add-ons, personas and themes that Chrome and Firefox provide - perhaps IE is simply outdated, even with everyone writing these days about what IE9 has and how its performance is better than this or that. The main problem for IE is IE6: many developers hate it because so much of their time goes into making sites cross-browser compatible. In 2009 they had nothing like today's IE9, so they made a campaign to save IE6 - and the list they made is a joke, presenting everything in IE6 as perfect (they even listed IE6 as high security) when everyone knows the truth. In 2011 there is one problem for IE9's promotion, and it is called IE6: with developers hating IE6, how can they promote IE9 well? So destroying IE6 is the only option left to make IE9's promotion better. You can see they have run two different campaigns, each the opposite of the other - so how are we to believe in IE9? Thanks for reading this post. What do you think about it? If you have an idea or feedback, report it to them.

    Read the article

  • Full disk encryption with separate boot and encrypted keyfile storage: Two-Form Authentication

    - by Cain
    I am trying to set up true full disk encryption with two-form authentication on 12.04 and cannot find out how to call a keyfile for the encrypted root out of another encrypted partition. All the documentation and questions I am finding for whole or full disk encryption only encrypt separate partitions on the same disk. This is not what most would call full disk encryption: /boot is not on a partition of the root drive; rather, it is on a USB stick as sdx1. Root instead is on a logical volume on top of a LUKS container, and LUKS is run on the whole disk, encrypting the partition table as well. All drives in the machine are completely encrypted, and opening them requires a USB drive (what I have) as well as a passphrase (what I know), resulting in two-form authentication to boot the machine: device sdx -> cryptroot -> vg00 -> lvroot -> /. There is no passphrase to open the encrypted root device, only a keyfile. That keyfile is kept on the USB drive with /boot, in its own encrypted partition (I'll call this cryptkey). In order for the root file system (cryptroot) to be opened, initramfs must ask for the passphrase to cryptkey on the USB drive, then use the keyfile inside it to open cryptroot. I did manage to find what I think is the how-to I used to do this once before: http://wiki.ubuntu.org.cn/UbuntuHelp:FeistyLUKSTwoFormFactor. I already have the system installed and can chroot into it; however, I cannot get it to call for the keys on the USB drive during boot. I did find a how-to saying I needed to make a cryptroot conf for initramfs, but I believe that is for a passphrase: https://help.ubuntu.com/community/EncryptedFilesystemLVMHowto#Notes_for_making_it_work_in_Ubuntu_12.04_.22Precise_Pangolin.22_amd64. I also tried to set up crypttab; however, crypttab only works for drives mounted after the root drive, as calling for a keyfile on a device not yet mounted to the system doesn't work. The Feisty how-to included scripts that would be run during boot, instructing initramfs to mount the USB drive temporarily and fetch the keyfile for root, which worked quite well - except those scripts are outdated now; many of the things they relied on have been merged into something else, changed, or simply don't exist anymore. If I have missed a clear how-to for this, that would be wonderful; I just don't think I have.
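
    One avenue on 12.04 is the keyscript= option that Debian/Ubuntu's cryptsetup initramfs hooks support in /etc/crypttab: the named script is copied into the initramfs, and whatever it prints on stdout is used as the key. A hedged sketch only - device paths, labels and file names are illustrative:

        # /etc/crypttab
        cryptroot /dev/sda none luks,keyscript=/usr/local/sbin/unlock-usb

        #!/bin/sh
        # /usr/local/sbin/unlock-usb -- prompts for the cryptkey passphrase,
        # opens the key partition on the USB stick, and prints the keyfile
        # to stdout for cryptsetup to use as cryptroot's key.
        cryptsetup luksOpen /dev/disk/by-label/cryptkey cryptkey || exit 1
        mkdir -p /tmp-ck
        mount -o ro /dev/mapper/cryptkey /tmp-ck
        cat /tmp-ck/root.key
        umount /tmp-ck
        cryptsetup luksClose cryptkey

    Remember to run update-initramfs -u after changing crypttab so the script actually lands in the initramfs.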

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?
    Edit 1 (2/11/2013): Example of what the CSV output looks like now:
        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found
    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:
        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are still only 2 unique URLs (like before) but all of the linked-from pages are shown in one report.
    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
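
    Until such a report exists natively, one stopgap is to join the two data sets outside the spreadsheet. A hedged sketch, assuming a second CSV (links.csv, with URL,Linked From rows - however you manage to assemble it) next to the downloaded errors.csv, and no commas inside any field:

        # strip the header rows and sort both files on the URL column
        tail -n +2 errors.csv | sort -t, -k1,1 > errors.sorted
        tail -n +2 links.csv  | sort -t, -k1,1 > links.sorted
        # one output row per (error URL, linking page) pair:
        # URL,Response Code,News Error,Detected,Category,Linked From
        join -t, -1 1 -2 1 errors.sorted links.sorted > combined.csv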

    Read the article

  • "ODM" - One of the Support team's most valued acronyms

    - by graham.mckendry(at)oracle.com
    If you submit technical service requests (SRs) through the My Oracle Support portal, you may often see the term "ODM" used in updates from our Support team. ODM is an acronym for "Oracle Diagnostic Methodology", which defines a standard problem-solving approach that all of Oracle Support uses for every technical SR. ODM provides a number of benefits to the SRs - both for the Support organization and for the customer - including a consistent approach, higher quality, justified solutions, and ultimately faster resolution.
    Screenshot: Example of an ODM "Issue Clarification" activity in a service request
    The Oracle Diagnostic Methodology applies to both categories of technical SRs: Consultative (question-answer topics) and Problem-Solution. There are a few KM Notes that describe the steps of ODM; however, to keep things simple (and since those KM Notes appear to be a bit outdated), I'll summarize the ODM stages here as follows:
    Consultative ODM - Three mandatory stages:
    1. ODM Question: Clarification of the customer's exact question.
    2. ODM Answer: Thorough answer to the customer's question.
    3. ODM Knowledge Content: Reference to new or existing knowledge base content, or explanation of why the particular SR does not necessarily require knowledge content.
    Problem-Solution ODM - Eight mandatory stages:
    1. ODM Issue Clarification: Clarification of the reported issue, including the symptoms, the steps to reproduce, and an outline of the business impact.
    2. ODM Issue Verification: Confirmation of the issue being verified based on proof provided by the customer, such as screenshots, log files, or reproducing the issue during an Oracle Web Conference.
    3. ODM Cause Determination: Succinct outline of the root cause of the issue.
    4. ODM Cause Justification: Explanation as to why the root cause applies to this particular situation.
    5. ODM Proposed Solution(s): Succinct outline of the potential solution(s) to resolve the issue.
    6. ODM Proposed Solution(s) Justification: Explanation of why the proposed solution(s) will in fact resolve the issue.
    7. ODM Solution Action Plan: Detailed numbered instructions on how to execute the proposed solutions.
    8. ODM Knowledge Content: Reference to new or existing knowledge base content, or explanation of why the particular SR does not necessarily require knowledge content.
    During these stages, you may see other optional ODM-related activities such as "ODM Data Collection", "ODM Action Plan", "ODM Research", and "ODM Test Case". Again, these structured tags help ensure a uniform methodology across your SRs. With this knowledge you should be able to develop better predictability of what's coming next in your SRs, as well as what you can do to help expedite the resolution process.

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, noticed an abnormal spike (~50MB) in disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (as a regex, core\.\d+). I downloaded one and checked the contents. There was a lot of balderdash (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says: "This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values." Pretty self-explanatory. A few blocks above that text are some more readable messages that look like logs, sandwiched between non-printable characters. I've extracted some below:
        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found
    Now, a few concerns: Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much - only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha, but every now and then some get through. My vanity email has a spam filter, but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike. Some final details: my only server-side scripts are in PHP. The directory which accumulated the most of these core files is the one containing the Wordpress-managed subdomain of my site. I manage my server through CPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss on my websites or in my mail. They were indeed responsible for the ~50MB spike, as my disk space usage is back to expected levels.
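
    Files named core.<pid> of that shape are usually core dumps left behind by a crashing process - here, judging by the embedded strings, the mail server's mail-folder handling code rather than an attack. A hedged way to confirm and clean up from a shell, if your host gives you one (paths illustrative):

        # show which program dumped the core (prints "... from 'imapd'" or similar)
        file public_html/core.12345
        # list and delete all of them under public_html
        find public_html -type f -name 'core.[0-9]*' -print -delete
        # prevent future dumps for processes started from your shell
        ulimit -c 0

    On shared CPanel hosting the real fix (stopping whatever keeps crashing, or disabling core dumps server-wide) is the host's to make, so it is worth opening a ticket with these details.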

    Read the article

  • Simulate 'shock absorption' with tire rubber in PhysX (2.8.x)

    - by Mungoid
    This is a kinda tricky question and I fear there is no easy enough solution, but I figured I'd hit SE up before giving up on it and just doing what I can. A machine I am working on has no suspension, shocks or springs of any sort in the real machine, so you would think that when it drives over bumps it would shake like crazy, but because its tires (6 of them) are quite large, they seem to absorb a lot of shock from the bumps. Part of this is because the machine is around 30k lbs and just smashes/compresses any bumps in the ground (this is another issue I'm still working on), and the other part is that the tires seem to have a lot of flex to them, with a lot of air as well. So my current task is to simulate shock absorption in PhysX without visibly separating the tires from the spindle/axle. I have been messing with all kinds of NxMaterial, NxSpring, joints, etc. and have had no luck getting this to work. The main problem is that the spindle attached to the tire sits directly in the center and the axle is basically solidly attached to the chassis, so if I give it any spring or suspension travel, the spindle moves up or down, looking very odd because it is no longer in the center of the tire. I tried giving it a higher restitution, but that just makes it bouncy without any shock absorption. Another avenue I am exploring is to actively smooth the terrain in front of the tires, so that before the machine hits a bumpy patch, that patch is smoothed and it doesn't bounce. The only issue is that this is pretty expensive to do with 6 tires, high tessellation of the terrain and other complex things going on at the same time in this simulation. I am still working on this, but I am hoping to mix and match a few different aspects to get the best possible outcome. This is a bit of a complex issue, so I'm not expecting anyone to have a definitive answer, just hoping someone may think of something I haven't =-) Side note: Yes, I know PhysX 2.8.x is quite outdated, but we have to stick with it for this implementation. We are in the process of moving to another physics engine, but it is out of scope to apply that engine to this project.

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format). My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiarities (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP feels like very old technology to me - is performance really an issue anymore? The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file/database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query (see the sketch below)? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes? In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure whether the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling the requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months min, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
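
    For the example above (Location x Weekly x Product Sales $), the SQL such an API would need to emit is a routine star-join and group-by. A hedged sketch - every table and column name is hypothetical, just following the FACT_/DIM_ naming scheme described:

        SELECT d_store.store_name,
               d_time.week_of_year,
               SUM(f.sales_amount) AS product_sales
        FROM   FACT_SALES f
               JOIN DIM_STORE d_store ON f.store_key = d_store.store_key
               JOIN DIM_TIME  d_time  ON f.time_key  = d_time.time_key
        GROUP  BY d_store.store_name, d_time.week_of_year;

    The metadata lookup (which table is the fact, which columns are keys and which are measures) is the part the config table would drive; the query shape itself barely changes between requests.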

    Read the article

  • Remote Desktop - remote computer that was reached is not the one you specified

    - by Jim McKeeth
    We just set up some new Windows 2008 R2 servers and we are unable to Remote Desktop into them from our Windows 7 desktops. Remote Desktop connects, but after we provide credentials we get: "The connection cannot be completed because the remote computer that was reached is not the one you specified. This could be caused by an outdated entry in the DNS cache. Try using the IP address of the computer instead of the name." If we connect from Windows 7 to a machine not running Windows 2008 R2, or from a machine not running Windows 7 to the Windows 2008 R2 server, it works fine. Likewise, if we connect to the Windows 2008 R2 server from Windows 7 via the IP address, it works fine (although that causes other problems later). I've only found one other mention of someone having this problem, so I don't think it is just our network. Any suggestions on how to connect from Windows 7 to Windows 2008 R2 via DNS? Both are 64-bit. Update: it turns out the server does not need to be R2 to get the error - we have another server running Windows 2008 R1 64-bit that also fails.
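
    A few quick checks from a Windows 7 client can at least confirm whether stale or mismatched DNS records are involved (the server name below is illustrative):

        :: flush the local resolver cache, then see what the name resolves to
        ipconfig /flushdns
        nslookup myserver.example.local
        :: compare against connecting straight to the resolved address
        mstsc /v:192.168.1.50

    If the name resolves to the right address and the error persists, the mismatch is more likely between the name used to connect and the identity the server presents (e.g. a CNAME pointing at a differently-named host) than a stale cache entry.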

    Read the article

  • Legacy non-dpi-aware application resolution scaling?

    - by Miles Erickson
    Our environment prominently features an outdated but absolutely mission-critical Win32 application that is not dpi-aware. It is optimized for an 800x600 display. Most of our users now have 17"-20" displays with native resolutions ranging from 1280x1024 to 1680x1050. However, they still operate these displays at 800x600 because the text in this legacy application is otherwise too small. Of course, this also means that nothing quite fits on the screen in Office 2007. Most of our workstations still run Windows XP, but some are on Windows 7 and there are more to come. About one-third of our users run the app remotely via MS Terminal Services, and the remainder run it locally. Is anyone aware of any method that could be used to scale this specific application to about 170%, so that it would fill a 1280x1024 screen, without affecting other applications that work best at the display's native resolution? I know how to do this in Mac OS X, but I have never found a way to do it in Windows. Ideally this would be something that we could push out via Group Policy. I suppose we could even create a custom MSI package to re-deploy the legacy application with some sort of display virtualization layer, if such a thing exists.

    Read the article

  • Advice on migrating from a Samba PDC

    - by pgb
    When we started our software development company, we decided to use Samba as a PDC for the few Windows workstations we had. We use Samba with OpenLDAP, and it has been a good replacement for AD for almost 6 years now (using Windows XP workstations). Now I'm facing a few problems with our setup:
    - The Linux server where the PDC runs is very outdated (and is a Gentoo install, don't ask why!)
    - We started using Windows 7 on some of the workstations, and these can't join the Samba domain (there's a workaround, I know)
    - Our company has grown a bit, and we now have about 20 workstations (and plan to have more in the near future)
    I have to reinstall our PDC and was thinking of updating to another Linux distro and the latest Samba 3.4. However, I started having second thoughts, and now I think going to a Windows Server for the PDC is the way to go. The main drivers for opting for a Windows Server would be its easy administration and the ability to use Windows 7 out of the box, without any registry hacks. My question(s) then is (are): How should I do this migration? Can I keep the same domain name? What will happen to the users - will they be recreated, and will the workstations fail to recognize them as the same users even if the actual usernames are the same? What steps would you recommend for migrating from Samba to Windows Server? Bonus question: If you think staying on Samba is the way to go with my current setup, I'm also interested in your thoughts.
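
    Whichever way you go, the "same user" question mostly comes down to SIDs: Windows identifies accounts by SID, not by username, so accounts recreated on a new DC are different users to the workstations (and to their local profiles) unless the SIDs or profiles are migrated. Before touching anything, it is worth capturing what the current domain looks like - a small sketch:

        # the domain SID the workstations currently trust
        net getlocalsid
        # dump the existing accounts (names, SIDs, flags) for reference
        pdbedit -L -v > samba-users.txt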

    Read the article

  • Apache2 will not start on OpenSUSE after enabling mod_pagespeed

    - by alpha1
    I have a Linode VPS running openSUSE 12.1 (a little outdated, but stable). I have installed the RPMs for mod_pagespeed, and mod_pagespeed.conf has "ModPagespeed on". Restarting apache fails after enabling pagespeed, and the errors are not very helpful:
        li361-39:/usr/lib64/apache2 # a2enmod pagespeed
        li361-39:/usr/lib64/apache2 # service apache2 restart
        redirecting to systemctl
        Job failed. See system logs and 'systemctl status' for details.
        li361-39:/usr/lib64/apache2 # systemctl status apache2.service
        apache2.service - apache
            Loaded: loaded (/lib/systemd/system/apache2.service; enabled)
            Active: failed since Thu, 06 Jun 2013 20:49:00 +0000; 1s ago
            Process: 6701 ExecStop=/usr/sbin/httpd2 -D SYSTEMD -k stop (code=exited, status=0/SUCCESS)
            Process: 6704 ExecStart=/usr/sbin/start_apache2 -D SYSTEMD -k start (code=exited, status=1/FAILURE)
            Main PID: 6637 (code=exited, status=0/SUCCESS)
            CGroup: name=systemd:/system/apache2.service
        li361-39:/usr/lib64/apache2 # a2dismod pagespeed
        li361-39:/usr/lib64/apache2 # service apache2 restart
        redirecting to systemctl
        li361-39:/usr/lib64/apache2 #
    And the error log (/var/log/apache2/error_log) is useless as well:
        [Thu Jun 06 20:48:59 2013] [notice] caught SIGTERM, shutting down
        [Thu Jun 06 20:49:12 2013] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Thu Jun 06 20:49:13 2013] [notice] Apache/2.2.21 (Linux/SUSE) mod_ssl/2.2.21 OpenSSL/1.0.0k PHP/5.4.15 configured -- resuming normal operations
    EDIT: This is from /var/log/messages:
        Jun 12 14:24:14 li361-39 start_apache2[27951]: httpd2-prefork: Syntax error on line 116 of /etc/apache2/httpd.conf: Syntax error on line 34 of /etc/apache2/sysconfig.d/loadmodule.conf: Cannot load /usr/lib64/apache2/mod_pagespeed.so into server: /usr/lib64/apache2/mod_pagespeed.so: undefined symbol: ap_unixd_config
    The full log is here: http://pastebin.com/hjnbZZTr. I've tried looking for other logs and checking my mod_pagespeed.conf against posts claiming it works, and nothing strikes me as wrong. Any ideas?
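
    The undefined symbol is the giveaway: ap_unixd_config exists in Apache 2.4 but not in the 2.2.21 httpd this system runs, so the mod_pagespeed.so being loaded appears to be a build made against Apache 2.4 headers. A hedged way to confirm, and to see what the package actually installed (the package name below may differ on your system):

        # the running server version
        httpd2 -v
        # "U ap_unixd_config" in the output means the module expects Apache 2.4
        nm -D /usr/lib64/apache2/mod_pagespeed.so | grep ap_unixd_config
        # check whether the RPM shipped a 2.2-compatible build as well
        rpm -ql mod-pagespeed-stable | grep '\.so$'

    If only a 2.4 build is present, the options are a mod_pagespeed build matching Apache 2.2 or moving to an Apache 2.4 package.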

    Read the article

  • Automatically moving spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder and am not sure how. I have a Linux box providing email access. The MTA is Postfix, IMAP is Courier, and as the webmail client I use Squirrelmail. To filter spam I use Spamassassin, and it is working ok. Spamassassin is overwriting subjects with [--- SPAM 14.3 ---] Viagra... and is also adding headers:
        X-Spam-Flag: YES
        X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx
        X-Spam-Level: **************
        X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99,
            DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL,
            RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL
            autolearn=no version=3.2.5
        X-Spam-Report:
            * 0.0 URIBL_RED Contains an URL listed in the URIBL redlist [URIs: myimg.de]
            * 3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100% [score: 1.0000]
            * 0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL [113.170.131.234 listed in zen.spamhaus.org]
            * 3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
            * 0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server [113.170.131.234 listed in dnsbl.sorbs.net]
            * 3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date
            * 0.0 HTML_MESSAGE BODY: HTML included in message
            * 1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
            * 1.5 URIBL_SBL Contains an URL listed in the SBL blocklist [URIs: myimg.de]
            * 0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS
    I want to automatically move spam messages to a folder. Ideally (not sure if it's possible) I'd like only messages scoring 5.0 or more moved to the folder; spam scoring between 2.0 and 5.0 should stay in the Inbox. (I plan to switch autolearn on later.) After reading a lot on the procmail, postfix and spamassassin sites and googling a lot (lots of outdated howtos), I found two solutions, but I'm not sure which is best or whether there is another one: 1) put a rule in Squirrelmail (dirty solution?), or 2) use Procmail. Which is the best option? Do you have any up-to-date howto about it? Thanks
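
    If Postfix hands local deliveries to procmail (mailbox_command = /usr/bin/procmail in main.cf), the score threshold falls out of the X-Spam-Level header, since SpamAssassin writes one asterisk per point. A minimal sketch, assuming Courier-style Maildir++ folders (the leading dot and trailing slash matter; the folder name is illustrative):

        # ~/.procmailrc
        MAILDIR=$HOME/Maildir
        # five or more asterisks = score >= 5.0 -> file into the Spam folder
        :0
        * ^X-Spam-Level: \*\*\*\*\*
        .Spam/

    Anything scoring below 5.0 falls through and is delivered to the inbox as usual, which gives exactly the 2.0-5.0 band you want kept there.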

    Read the article

  • SPF record for Gmail?

    - by Chris
    I have DNS, with an SPF TXT record, configured for a domain name. The primary user of the domain name now needs to be able to send both from our SMTP servers and from her Gmail account. I've seen all the information about adding "include:_spf.google.com" to the SPF TXT record but, as I look into it, it appears that record is outdated. In particular, I had the user send me a test message, and note that it was:
        Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50])
    However, _spf.google.com doesn't seem to list that IP address:
        $ dig +short _spf.google.com txt
        "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"
    (Note that a 209.85.218.0 network is listed, but not 209.85.215.0.) Is there a better way to enable sending from Gmail? This user sends to at least one recipient with a strict SPF policy that bounces mail not from a designated host... Many thanks!
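
    Two notes. First, 209.85.215.50 does in fact fall inside the listed ip4:209.85.128.0/17 block - a /17 on 209.85.128.0 spans 209.85.128.0 through 209.85.255.255 - so the published record covers that relay even though no 209.85.215.0 line appears. Second, the usual pattern for mixed sending is one record combining your own servers with Google's include; a sketch, with the mx/ip4 terms standing in for your actual SMTP hosts:

        example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.25 include:_spf.google.com ~all"

    Using the include (rather than copying Google's current ranges into your own record) keeps the record valid when Google changes its netblocks.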

    Read the article

  • Trying to understand Wireless N vs Wireless AC

    - by EGHDK
    Whenever a new wireless standard gets approved, you expect faster speeds and longer range. From everything I've read, it seems that AC will only transfer over the 5GHz band, at up to 3Gbps. Studying the new AC routers on the market, it seems they transfer over both 5GHz and 2.4GHz, and that 5GHz only transfers at 1.3Gbps - which isn't what AC is supposed to be. I know there is a difference between what the standard actually says and what products actually do, but is there any reason for this? Are there any other main differences between AC and N? I've heard people discussing AC and saying that it finally "fixes" what N was supposed to fix... what do they mean by that? Any security benefits? I have seen an image online showing an AC router beamforming its signal directly toward each client device (image not reproduced here). Will AC really do that? Will that require an AC network card in my laptop for it to actually happen? Lastly, will the router only be able to communicate with AC devices if I have beamforming technology on? I know it's a ton of questions, but most articles online seem to be outdated and not very reliable.

    Read the article

  • Xen 4.1.2 unable to boot

    - by Devator
    I installed Xen 3.1.2 perfectly, and it's running fine. However, since that version is way outdated, I updated it to 4.1.2 by adding the gitco repository and then running yum update. It installed fine and modified my grub.conf to reflect the changes, but on reboot the machine simply doesn't come back online (I can't see what's going on, as it's a rented dedicated server). What are my options? Booting into rescue mode and using the older kernel works fine; it will come back up. But once I use the xen.gz-4.1.2 kernel, it won't come back up anymore and I need to use the rescue image. My /boot/grub/grub.conf is as follows:
        title CentOS (2.6.18-308.1.1.el5xen)
            root (hd0,1)
            #kernel /xen.gz-2.6.18-308.1.1.el5 dom0_mem=1024M
            kernel /xen.gz-4.1.2
            module /vmlinuz-2.6.18-308.1.1.el5xen ro root=/dev/md1
            module /initrd-2.6.18-308.1.1.el5xen.img
    When I uncomment the 3.1.2 kernel line, it works fine, but booting with the 4.1.2 kernel fails and I have no idea what's going on. Hence my question: what are my options?

    Read the article

  • Where is the bare cygwin package list located and how do I manipulate it?

    - by matnagel
    Where is the bare Cygwin package list located, and how do I manipulate it programmatically, from a shell, or with some method other than the GUI? I know the GUI (setup.exe), and I'd love to go one or more levels deeper. I can retrieve a list of selected/installed packages (http://serverfault.com/questions/83456/cygwin-package-management), but how do I write it back, or write it to a different machine? What I have in mind: when I install a new Windows, I would like to start with my package list in text form and apply or inject it somehow into the new system. Where is it? In the registry? In a binary file? In a local database? Or has anybody done this - is there a tool, a tutorial? The essence of what I want is to manipulate the selected package list with something other than the GUI. It is ok for me to use the GUI for the setup process itself, so I could imagine manipulating the package list and then running setup.exe and just clicking through it. Note: I do not want to manipulate the list of already installed packages, but of packages that "should be installed". But if this is not possible, maybe there is a workaround - e.g. add an outdated version as installed, and the installer will then install the new version.
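
    For what it's worth, setup.exe keeps its record of installed packages in the plain-text file /etc/setup/installed.db, and it also accepts the desired package set on the command line, which covers the "apply my list to a new machine" case without touching that file. A hedged sketch (flag spellings vary between setup.exe versions - check setup.exe --help):

        # on the old machine: capture the current package names, comma-separated
        cygcheck -c -d | tail -n +3 | awk '{print $1}' | paste -sd, - > pkgs.txt
        # on the new machine: hand the list to an unattended setup.exe run
        setup.exe -q -P "$(cat pkgs.txt)"

    The -P list is precisely the "should be installed" set you want to manipulate; -q then drives the installer through without prompting.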

    Read the article

  • Errors in Homebrew on OS X Lion

    - by Bo Tian
    I just ran the Homebrew install script as described on the installation page. I then ran brew doctor in Terminal, and it returned several errors. I'm not sure how to fix those errors; please help.
        $ brew doctor
        Error: Some directories in /usr/local/share/man aren't writable.
        This can happen if you "sudo make install" software that isn't managed
        by Homebrew. If a brew tries to add locale information to one of these
        directories, then the install will fail during the link step. You
        should probably `chown` them:
            /usr/local/share/man/de
            /usr/local/share/man/de/man1
        Error: You have Xcode 4.2, which is outdated. Please install Xcode 4.3.
        Error: Unbrewed dylibs were found in /usr/local/lib. If you didn't put
        them there on purpose they could cause problems when building Homebrew
        formulae, and may need to be deleted. Unexpected dylibs:
            /usr/local/lib/libcdt.5.dylib
            /usr/local/lib/libcgraph.6.dylib
            /usr/local/lib/libgraph.5.dylib
            /usr/local/lib/libgvc.6.dylib
            /usr/local/lib/libgvpr.2.dylib
            /usr/local/lib/libpathplan.4.dylib
            /usr/local/lib/libxdot.4.dylib
        Error: Unbrewed .pc files were found in /usr/local/lib/pkgconfig. If you
        didn't put them there on purpose they could cause problems when building
        Homebrew formulae, and may need to be deleted. Unexpected .pc files:
            /usr/local/lib/pkgconfig/libcdt.pc
            /usr/local/lib/pkgconfig/libcgraph.pc
            /usr/local/lib/pkgconfig/libgraph.pc
            /usr/local/lib/pkgconfig/libgvc.pc
            /usr/local/lib/pkgconfig/libgvpr.pc
            /usr/local/lib/pkgconfig/libpathplan.pc
            /usr/local/lib/pkgconfig/libxdot.pc
        Error: /usr/bin occurs before /usr/local/bin
        This means that system-provided programs will be used instead of those
        provided by Homebrew. The following tools exist at both paths: 2to3
        Consider amending your PATH so that /usr/local/bin is ahead of /usr/bin
        in your PATH.
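
    The man-page error can be cleared exactly as brew doctor suggests, and the stray dylib/.pc names (libcdt, libcgraph, libgvc and friends) all belong to graphviz, so this looks like a from-source graphviz install predating Homebrew. A hedged cleanup sketch - only remove that graphviz copy if nothing else depends on it:

        # take ownership of the man directories brew complains about
        sudo chown -R "$(whoami)" /usr/local/share/man/de
        # if the old graphviz is expendable, let Homebrew manage it instead
        brew install graphviz

    The PATH warning is fixed by putting /usr/local/bin ahead of /usr/bin in your shell profile, and the Xcode one by installing Xcode 4.3.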

    Read the article

  • Apache stops responding to http requests -- https continues to work

    - by Apropos
    Okay. Very strange problem that I'm having here. I just recently updated to Apache 2.4.2 from 2.2.17, mostly to try to get name-based SSL VirtualHosts working (although they should have been working on 2.2.17). The server is Win2008 R2 (so x64 by definition) running PHP 5.4.3 and MySQL 5.1.40 (outdated, I know). When I launch the server, it initially works fine: it responds to all requests, VirtualHosts all in order. However, after an uncertain amount of time (it appears to take only a few minutes for the most part, but sometimes hours), it stops responding to regular HTTP requests (on any VirtualHost). HTTPS continues to work. There are no errors in the log, and nothing in the access logs when I attempt to connect. I'm having a hard time finding the source of this error given its intermittent nature. Removing all SSL-based VirtualHosts seemingly increased stability (still responding to HTTP requests twelve hours later), though this could be mere coincidence. The entirety of the SSL VirtualHost is as follows, should there happen to be a problem with it:
        <VirtualHost *:443>
            DocumentRoot "C:\Server\www\virtualhosts\mysite.net"
            ErrorLog logs/ssl.mysite.net-error_log
            CustomLog logs/ssl.mysite.net-access_log common env=!dontlog
            SSLEngine on
            SSLProtocol all -SSLv2
            SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM
            SSLCertificateFile C:/Server/bin/apache/apache2.4.2/conf/ssl/server.crt
            SSLCertificateKeyFile C:/Server/bin/apache/apache2.4.2/conf/ssl/server.key
            SSLCertificateChainFile C:/Server/bin/apache/Apache2.4.2/conf/ssl/sub.class1.server.ca.pem
            SSLCACertificateFile C:/Server/bin/apache/Apache2.4.2/conf/ssl/ca.pem
        </VirtualHost>
    Any ideas what I'm missing?
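
    One hedged suggestion rather than a confirmed diagnosis: early Apache 2.4 builds on Windows had known hangs in the AcceptEx()-based socket handling that presented very much like this - one listener going deaf while others keep working. The workaround is to fall back to plain accept() in httpd.conf:

        # use accept() rather than AcceptEx() on Windows
        AcceptFilter http none
        AcceptFilter https none

    If the hangs stop, it was the accept filter; if not, nothing is lost, since the directive only changes how connections are accepted.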

    Read the article

  • Anti Virus Service does not run - Windows XP SP3 32bit Home

    - by Stefan Fassel
    I have a somewhat strange problem here. I am trying to run antivirus software on my Windows XP Home 32-bit system. After a serious crash I had to fall back to an outdated copy of my initial installation and had Windows install 5 years of updates. So far so good. After installing new antivirus software (Bitdefender 2012) everything seemed to be fine: the initial scan went fine and configuration was working. But after restarting the system, the antivirus was unable to start up again. Even the configuration console of the AV software did not start. I tried scanning the system for malware, but nothing was found. Then I tried a different AV product (MS Security Essentials), but in the end it failed to start too. I have tried to start the service manually, but I seem to be missing the privilege to do so. I am logged in as a non-"Administrator" user with admin privileges (not much choice there on an XP Home system). I cannot switch to the Administrator account outside of safe mode, and when running Windows in safe mode I am unable to start the AV software because it does not run in safe mode. I am a bit at a loss now...

    Read the article

  • Family server setup [closed]

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents, who all live in their own homes). I really like the way it is set up right now, with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world. The requirements are simple: 1) it should be easy to access, using standard web browsers or by mounting the drive (we shouldn't have to use any VPN setup or putty etc.); 2) it should be somewhat secure - we just want to share family pictures instead of putting them on Facebook, Picasa or another web service, nothing top secret. Here is what I've looked into: 1) WebDAV. It seems decent, but Windows 7 doesn't seem to like it very much, even with digest-mode authentication. User and permission controls are not as flexible as Samba's (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows - it should just work, shouldn't it? 2) I read somewhere to stay away from FTP, as it is outdated, and that there are newer and better internet file-server setups. Was that a reference to WebDAV? I am so confused, please help... Manny
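
    For the WebDAV route, the Apache side is fairly small once mod_dav, mod_dav_fs and mod_auth_digest are enabled; a minimal sketch, assuming an SSL-enabled vhost (paths, realm and share name are illustrative):

        DavLockDB /var/lock/apache2/DavLock
        Alias /family /srv/family-share
        <Location /family>
            Dav On
            AuthType Digest
            AuthName "family"
            AuthUserFile /etc/apache2/family.digest
            Require valid-user
        </Location>

    Accounts are added with "htdigest /etc/apache2/family.digest family alice" (add -c only when creating the file for the first user). Serving it over HTTPS also sidesteps most of the Windows 7 WebDAV client's grumbling about authentication.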

    Read the article

  • Give access to specific services on Windows 7 Professional machines?

    - by Chad Cook
    We have some machines running Windows 7 Professional at our office. The typical user needs to have access to stop and start a service for a local program they run. These machines have a local web server and database installed and we need to restrict access to certain folders and services related to the web server and database for these users. The setup I have tried so far is to add the typical user as a Power User. I have been able to successfully restrict them from accessing certain folders (as far as I can tell) but now they do not have access to the service needed for starting and stopping the local program. My thought was to give them access to the specific service but I have not had any luck yet. In searching the web for solutions the only results I have found relate to Windows Server 2000 and 2003 and involve creating security templates and databases through the Microsoft Management Console. I am hesitant to try an approach like this as these articles are typically older and I worry this method is outdated. Is there a better way to accomplish the end goal of giving the user permission to run the service and restrict their access to certain folders? If any clarification is needed on the setup or what we are trying to achieve, please let me know. Thanks in advance.
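
    On Windows 7 the per-service ACL can be edited directly with sc.exe, which avoids the old security-template route entirely. This is a sketch, not a drop-in command: the service name is illustrative, and because sdset replaces the descriptor wholesale, the string you set must contain everything sdshow printed plus the new entry:

        :: dump the service's current security descriptor (SDDL)
        sc sdshow MyLocalService
        :: re-apply it with one ACE added for built-in Users (BU) granting
        :: query (CC,LC,SW,LO), start (RP) and stop (WP)
        sc sdset MyLocalService "D:(A;;CCLCSWRPWPLO;;;BU)...rest of the descriptor exactly as sdshow printed it..."

    If you later need to push the same ACL to many machines, the equivalent setting also exists in Group Policy under Computer Configuration > Windows Settings > Security Settings > System Services.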

    Read the article

  • "What happens?" server performance monitor

    - by AlexAtNet
    Hello! After reviewing some threads about server monitoring software, I end up with a simple question: which of the server monitoring tools should I use for automatic detection of "abnormal" situations, with recommendations on how to fix them? I am looking for software that checks system performance after installation and calculates some average load values (memory, CPU, etc.), and that, when something happens (CPU load increases to 20%), tries to detect the reason for it. If it is apache, it should check the access logs. If mysql, it should check the mysql logs and tell me what happened. If it is because some user is decoding a lot of images, I'd like to know which command was executed, when, and under which user name. The same for disk usage, memory, number of processes, threads and so on. Ideally, this software should periodically check the system and report problems: errors in the PHP error log, outdated packages, security vulnerabilities. In other words, I'm looking for software that will keep my simple Debian/Apache/PHP/MySQL server healthy without forcing me to monitor the charts every day. I hope that such a program exists. Thanks, Alex
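
    No single tool I know of does the full root-cause narration described, but the thresholds-and-alerts part is well covered; monit is one concrete example. A minimal sketch of /etc/monit/monitrc fragments (thresholds, paths and names are illustrative):

        set daemon 120                    # check every two minutes
        check system myserver
            if loadavg (1min) > 4 then alert
            if memory usage > 80% then alert
        check process apache with pidfile /var/run/apache2.pid
            start program = "/etc/init.d/apache2 start"
            stop program  = "/etc/init.d/apache2 stop"
            if cpu > 20% for 5 cycles then alert

    Pairing something like this with logwatch for daily log digests and apticron for outdated-package mail gets surprisingly close to the wishlist on a Debian box.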

    Read the article
