Search Results

Search found 7992 results on 320 pages for 'mail delay'.

Page 265/320

  • Managing Joomla via Android

    Surprisingly, it was only today that I actually looked for possible solutions to write more content for my blog. For quite some time I've been using my Samsung Galaxy Tab 10.1 for all kinds of social media activities like Google+, FB, etc. but also for my casual mail during the evening hours. And yes, I feel a little bit guilty about missing the chance to use my tablet to write some content here... OK, only a little bit. ;-) These are not the droids you are looking for But those lazy times are over! While searching the Play Store for 'joomla' I got three interesting hits: - Joomla Admin Mobile! - Joooid - Joomla! Security Checklist After reading the reviews I installed the two latter apps. Joomla! Security Checklist The author clearly states that the app is primarily for his personal purpose of having a safety checklist at hand at any time. I guess that any reader of this article has an Android-based smartphone or tablet, so that simple app should be part of your toolbox when using Joomla! for your websites. Joooid plugin & app Although I was looking for an app that could work with the default XML-RPC interface of Joomla, I have to admit that this combination of an enhanced web service suits me better, mainly for performance reasons. The official website has not only the downloads for Joomla versions 1.5 - 2.5 but also very good and easy-to-follow step-by-step instructions to prepare your server for the Android app. It will take you less than 5 minutes to get it up and running. For safety reasons, I recommend configuring your web server with an additional authentication layer on the plugins folder; the smartphone app is able to work against HTTP authentication. Personally, I like the look and feel of the app. It is a little bit different compared to the web UI but still easy to use. In fact, this article is the first one written in the Joooid app. At the moment, I only miss the ability to have list tags. Quick and easy Writing full-fledged articles with images, a couple of hyperlinks and some styling here and there should be left to the desktop, at least for the moment. Let's see whether I'm going to change my mind on this during the upcoming months... I'll give it a try, and hope to publish content using Joooid at least once per month. Actually, it would be great to have some feedback about other Joomla! clients in the wild.

    Read the article

  • How to manage a Closed Source High-Risk Project?

    - by abel
    I am currently planning to develop a J2EE website and wish to bring in 1 developer and 1 web designer to assist me. The project is a financial app with a niche market. I plan to keep the source closed. However, I fear that my would-be employees could easily copy the codebase and use it or sell it to a third party, especially when they switch jobs. The app development will take 4-6 months, perhaps more, and I may have to bring in people after the app goes live. How do I keep the source to myself? Are there techniques companies use to guard their source? I foresee disabling pendrives and DVD writers on my development machines, but uploading data or attaching the code to an e-mail would still be possible. My question is incomplete. But programmers who have been in my situation, please advise. How should I go about this? Building a team, maintaining code secrecy, etc. I am also prepared to have the employees sign a secrecy contract if needed. (Please add relevant tags) Update Thank you for all the answers. I certainly won't be disabling all USB ports and DVD writers now. But I think I should be logging activity (how exactly should I do that?). I am wary of scalpers who would join and then run off with the existing code. I haven't met any, but I have been advised to be wary of them. I would include a secrecy clause, but given this is a startup with almost no funding in a highly competitive business niche with bigger players in the field, I doubt I would be able to detect or pursue any scalpers. How do I hire people I trust when I don't know them personally? Their resume will be helpful, but otherwise trust will develop only with time. But finally, even if they do run away with the code, it is service that matters after the sale is made. So I am not really worried for the long term.

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • Why does the login screen fail to appear?

    - by a different ben
    My system: Dell Precision T3500, nVidia Quadro NVS 295, Ubuntu 12.04 x86_64 (3.2.0-32). Essential problem: on boot my system won't get past the splash screen. I can switch to another virtual terminal and log in, and I can also ssh in from another system -- so it appears that the problem might be with the display manager. How can I diagnose and fix this problem? More info: from a VT I can issue sudo lightdm restart, which will bring up the login screen, and I can continue from there. So I do have access to my system. Update-manager recently updated a number of packages, including a bunch of x11 and xorg packages, some nVidia drivers, rpcbind, etc. My boot log (if that is any guidance) says the following:

    fsck from util-linux 2.20.1
    fsck from util-linux 2.20.1
    fsck from util-linux 2.20.1
    fsck from util-linux 2.20.1
    rpcbind: Cannot open '/run/rpcbind/rpcbind.xdr' file for reading, errno 2 (No such file or directory)
    rpcbind: Cannot open '/run/rpcbind/portmap.xdr' file for reading, errno 2 (No such file or directory)
    /dev/sda1: clean, 597650/1525920 files, 3963433/6103296 blocks
    /dev/sda7: clean, 11/6406144 files, 450097/25608703 blocks
    /dev/sda5: clean, 158323/1525920 files, 1886918/6103296 blocks
    /dev/sda8: clean, 250089/107929600 files, 111088810/431689728 blocks
    Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
    Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
    * Starting AppArmor profiles                               [ OK ]
    Loading the saved-state of the serial devices...
    /dev/ttyS0 at 0x03f8 (irq = 4) is a 16550A
    * Starting ClamAV virus database updater freshclam         [ OK ]
    * Starting Name Service Cache Daemon nscd                  [ OK ]
    * Starting modem connection manager                        [ OK ]
    * Starting K Display Manager                               [ OK ]
    * Starting mDNS/DNS-SD daemon                              [ OK ]
    * Stopping GNOME Display Manager                           [ OK ]
    * Stopping K Display Manager                               [ OK ]
    * Starting bluetooth daemon                                [ OK ]
    * Starting network connection manager                      [ OK ]
    * Starting Postfix Mail Transport Agent postfix            [ OK ]
    speech-dispatcher disabled; edit /etc/default/speech-dispatcher
    * Starting VirtualBox kernel modules                       [ OK ]
    * Starting the Winbind daemon winbind                      [ OK ]
    saned disabled; edit /etc/default/saned
    * Starting anac(h)ronistic cron                            [ OK ]
    * Stopping anac(h)ronistic cron                            [ OK ]
    * Checking battery state...                                [ OK ]
    nxsensor is disabled in '/usr/NX/etc/node.cfg'
    Trying to start NX server: NX 122 Service started. NX 999 Bye.
    Trying to start NX statistics: NX 723 Cannot start NX statistics: NX 709 NX statistics are disabled for this server. NX 999 Bye.
    * Stopping System V runlevel compatibility                 [ OK ]
    * Starting Mount network filesystems                       [ OK ]
    * Stopping Mount network filesystems                       [ OK ]
    * Stopping regular background program processing daemon    [ OK ]
    * Starting regular background program processing daemon    [ OK ]
    * Starting anac(h)ronistic cron                            [ OK ]
    * Stopping anac(h)ronistic cron                            [ OK ]

    Read the article

  • Arçelik A.S. Uses Advanced Analytics to Improve Product Development

    - by Sylvie MacKenzie, PMP
    "Oracle’s Primavera P6 Enterprise Project Portfolio Management’s advanced analytics gives us better insight into the product development process by helping us to identify potential roadblocks.” – Iffet Iyigun Meydanli, Innovation and System Development Manager, R&D Center, Arçelik A.S. Founded in 1955, Arçelik A.S. is now the leading household appliance manufacturer in Turkey, and the third-largest household appliance company in Europe. It operates 14 production facilities in five countries (Turkey, Romania, Russia, China, and South Africa), with international sales and marketing offices in 20 countries. Additionally, the company manages 10 brands (Arçelik, Beko, Grundig, Blomberg, Elektrabregenz, Arctic, Leisure, Flavel, Defy, and Altus). The company has a household presence in more than 100 countries, including China and the United States. Arçelik’s Beko brand is among the top-10 household appliance brands in world, as a market leader for refrigerators, freezers, and washing machines in the United Kingdom. Arçelik implemented Oracle’s Primavera P6 Enterprise Project Portfolio Management for improved management of its design and manufacturing projects. With the solution, Arelik has improved its research and development (R&D) with the ability to evaluate technology risks when planning its projects. Also, it is now more easy to make plans for several locations, monitor all resources, and plan for future projects.  Challenges Improve monitoring of R&D resources?including human resources and critical laboratory equipment?to optimize management of the company’s R&D project portfolio Establish a transparent project platform to enable better product and process planning, gain insight into product performance, and facilitate advanced analytics that support R&D and overall business decisions Identify potential roadblocks for better risk management Solutions Worked with Oracle Partner PRM to implement Oracle’s Primavera P6 Enterprise Project Portfolio Management to manage the entire household-appliance, R&D project portfolio lifecycle, enabling managers and project leaders to better track and monitor resources and deliverables in real time Improved risk analysis and evaluation abilities for R&D projects Supported long-term planning needs Used advanced reporting features to capture data needed for budgeting and other project details, including employee performance evaluations Improved monitoring abilities and insight into the overall performance of products postproduction Enabled flexible, fast, and customized reporting with the P6 dashboard on a centralized platform to meet custom reporting needs for project leaders and support on-time and on-budget deliverables Integrated with other corporate departments, such as accounts payable, to upload project invoice data into the Primavera solution and the company’s e-mail system, so that project leaders will be alerted about milestones and other project related information Partner“Oracle Partner PRM provided us with a quick, reliable, and solution-focused approach to its support,” said Iffet Iyigun Meydanli, innovation and system development manager, R&D Center, Arçelik A.S. “The company’s service covered the entire spectrum of our needs, including implementation, training, configuration, problem solving, and integration.”

    Read the article

  • jQuery - discrepancy between class names and selectors

    - by Ciel
    I have the following code that I wrote, which I personally found to be pretty nice. It takes a <ul> and it drops down the contents when clicked. But I am having a disconnect in comprehension here, one that I had to solve with what I feel is a 'dirty hack'. The problem is that I do not want the class 'sidebar-dropdown-open' to be so 'hardwired' in the plugin. However, I discovered that there is a very stark difference between $('.sidebar-dropdown-open'), 'sidebar-dropdown-open' and '.sidebar-dropdown-open'. I 'solved' this problem by including two different 'parameters' in my plugin, but I was wondering if someone might give me some insight into how I could do this better, and why it behaves this way.

    wiring (document load)

    $(document).ready(function () {
        $('[data-role="sidebar-dropdown"]').drawer({
            open: 'sidebar-dropdown-open',
            css: '.sidebar-dropdown-open'
        });
    });

    html

    <ul>
      <li class="dropdown" data-role="sidebar-dropdown">
        <a href="pages/.." class="remote">Link Text</a>
        <ul class="sub-menu light sidebar-dropdown-menu">
          <li><a class="remote" href="pages/...">Link Text</a></li>
          <li><a class="remote" href="pages/...">Link Text</a></li>
          <li><a class="remote" href="pages/...">Link Text</a></li>
        </ul>
      </li>
    </ul>

    javascript

    (function ($) {
        $.fn.drawer = function (options) {
            // Create some defaults, extending them with any options that were provided
            var settings = $.extend({
                open: 'open',
                css: '.open'
            }, options);

            return this.each(function () {
                $(this).on('click', function (e) {
                    // slide up all open dropdown menus
                    $(settings.css).not($(this)).each(function () {
                        $(this).removeClass(settings.open);
                        // retrieve the appropriate menu item
                        var $menu = $(this).children(".dropdown-menu, .sidebar-dropdown-menu");
                        // slide up the open ones
                        $menu.slideUp('fast');
                        $menu.removeClass('active');
                    });
                    // mark this menu as open
                    $(this).addClass(settings.open);
                    // retrieve the appropriate menu item
                    var $menu = $(this).children(".dropdown-menu, .sidebar-dropdown-menu");
                    // slide down the one clicked on.
                    $menu.slideDown(100);
                    $menu.addClass('active');
                    e.preventDefault();
                    e.stopPropagation();
                }).on("mouseleave", function () {
                    $(this).children(".dropdown-menu").hide().delay(300);
                });
            });
        };
    })(jQuery);

    I have tried using settings.open and demanding that it just be a class name (.open), etc. - but that does not seem to work. It seems to get ignored by the removeClass function.
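
    For what it's worth, the root of the discrepancy is that $() takes a selector string, which needs the leading dot, while addClass() and removeClass() take bare class names, so '.sidebar-dropdown-open' and 'sidebar-dropdown-open' are the same class spelled for two different APIs. One way to drop the duplicated option is to accept only the class name and let the plugin derive the selector itself. This is a minimal sketch of that idea, trimmed down rather than a drop-in replacement for the full plugin:

    (function ($) {
        $.fn.drawer = function (options) {
            var settings = $.extend({ open: 'open' }, options);
            // A selector needs the leading dot; class-name APIs such as
            // addClass()/removeClass() do not. Build the selector from the
            // class name once, so callers supply a single option.
            var openSelector = '.' + settings.open;

            return this.each(function () {
                $(this).on('click', function (e) {
                    $(openSelector).not(this).removeClass(settings.open);
                    $(this).addClass(settings.open);
                    e.preventDefault();
                });
            });
        };
    })(jQuery);

    // wiring now needs only the class name:
    // $('[data-role="sidebar-dropdown"]').drawer({ open: 'sidebar-dropdown-open' });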

    Read the article

  • Migrating to Natty (or any other future version of Ubuntu)

    - by Nik
    I am hoping that this question will help other Ubuntu users when migrating to a newer version of Ubuntu. It should gather all the info that they need, so please, when you answer, try to phrase your answers as points for easy understanding. I understand that some of the questions I ask might have been asked before by other users. In that case just provide the links to those questions. I am running Ubuntu 10.10 Maverick Meerkat, in case that is important. I can say for sure that a clean install is definitely better than an upgrade, since it gives you an opportunity to clean your system and get a fresh start. However some of us like to retain certain software configurations, files, etc. The questions are as follows: How do you save the configuration files of certain applications, for instance Thunderbird, Firefox, etc., so that you can basically paste them into the new version of Ubuntu? (Thunderbird for instance has all my mail, so I definitely would like to back up its configuration and then use it in the new installation.) I have some applications like MATLAB and Maple (Java-based) installed. When I migrate, can I just copy the entire installation folder to the new version of Ubuntu? Would it still work as it does now if I do that? When doing a backup, which folders should be backed up? Obviously your personal files would be backed up, but other than that, is it necessary to back up stuff in the home folder, /usr/bin, etc.? I have BURG installed. I am guessing that would be erased when I do a clean install, along with the program's configuration and everything. How can I make a backup of it? I am dual-booting my Ubuntu alongside Windows 7. When I perform the clean install of Ubuntu, would GRUB (the bootloader) be removed and in any way jeopardize my Windows installation? Over time I have added a lot of PPAs which are of course compatible with my current Ubuntu version. How do I make a backup of all my PPAs, and would they be compatible with the newer version of Ubuntu when I restore them? I hope this covers all the questions or doubts that a user might face when thinking about performing a clean install of his system. If I missed anything please mention it as a comment and I will add it to my answer.

    Read the article

  • How do I get 5.1 surround sound working on an Acer Aspire 5738ZG?

    - by kbargais_LV
    I got a problem with sound. I tried everything but no results. :( I got 3 sound ports. my daemon:

    # This file is part of PulseAudio.
    #
    # PulseAudio is free software; you can redistribute it and/or modify
    # it under the terms of the GNU Lesser General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # PulseAudio is distributed in the hope that it will be useful, but
    # WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
    # General Public License for more details.
    #
    # You should have received a copy of the GNU Lesser General Public License
    # along with PulseAudio; if not, write to the Free Software
    # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.

    ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for
    ## more information. Default values are commented out. Use either ; or # for
    ## commenting.

    ; daemonize = no
    ; fail = yes
    ; allow-module-loading = yes
    ; allow-exit = yes
    ; use-pid-file = yes
    ; system-instance = no
    ; local-server-type = user
    ; enable-shm = yes
    ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
    ; lock-memory = no
    ; cpu-limit = no
    ; high-priority = yes
    ; nice-level = -11
    ; realtime-scheduling = yes
    ; realtime-priority = 5
    ; exit-idle-time = 20
    ; scache-idle-time = 20
    ; dl-search-path = (depends on architecture)
    ; load-default-script-file = yes
    ; default-script-file = /etc/pulse/default.pa
    ; log-target = auto
    ; log-level = notice
    ; log-meta = no
    ; log-time = no
    ; log-backtrace = 0
    resample-method = speex-float-1
    ; enable-remixing = yes
    ; enable-lfe-remixing = no
    flat-volumes = no
    ; rlimit-fsize = -1
    ; rlimit-data = -1
    ; rlimit-stack = -1
    ; rlimit-core = -1
    ; rlimit-as = -1
    ; rlimit-rss = -1
    ; rlimit-nproc = -1
    ; rlimit-nofile = 256
    ; rlimit-memlock = -1
    ; rlimit-locks = -1
    ; rlimit-sigpending = -1
    ; rlimit-msgqueue = -1
    ; rlimit-nice = 31
    ; rlimit-rtprio = 9
    ; rlimit-rttime = 1000000
    ; default-sample-format = s16le
    ; default-sample-rate = 44100
    ; default-sample-channels = 6
    ; default-channel-map = front-left,front-right
    default-fragments = 8
    default-fragment-size-msec = 10
    ; enable-deferred-volume = yes
    ; deferred-volume-safety-margin-usec = 8000
    ; deferred-volume-extra-delay-usec = 0

    Read the article

  • What Is .recently-used.xbel and How Do I Delete It for Good?

    - by The Geek
    If you’re reading this article, you’ve probably noticed the .recently-used.xbel file in the root of your User folder, and you’re wondering why it keeps constantly coming back even though you repeatedly delete it. So What Is It? The quick answer is that it’s part of the GTK+ library used by a number of cross-platform applications, perhaps the most well-known of which is the Pidgin instant messenger client. As the name implies, the file is used to store a list of the most recently used files. In the case of Pidgin, this comes into play when you are transferring files over IM, and that’s when the file will appear again. Note: this is actually a known and reported bug in Pidgin, but sadly the developers aren’t terribly responsive when it comes to annoyances. Pidgin seems to go for long periods of time without any updates, but we still use it because it’s open-source, cross-platform, and works well. How Do I Get Rid of It? Unfortunately, there’s no way to easily get rid of it, apart from using a different application. If you need to transfer files over Pidgin, the file is going to re-appear… but there’s a quick workaround! The general idea is to set the file properties to Hidden and Read-only. You’d think you could just set it to Hidden and be done with it, but Pidgin will re-create the file every time, so instead we’re leaving the file there and preventing it from being accessed. You could also totally remove access through the Security tab if you wanted to, but this worked fine for me… as you can see, no more file in the folder. Of course, you can’t have the show hidden files and folders option turned on, or the file will continue to show up. Want to get really geeky? You can toggle hidden files with a shortcut key.
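
    If you prefer doing this from a command prompt, the same Hidden and Read-only combination can be set with a single attrib command (this assumes the file sits in the root of your user profile, as described above):

    attrib +H +R "%USERPROFILE%\.recently-used.xbel"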

    Read the article

  • What You Said: Staying Productive While Working From Home

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your telecommuting/work-from-home productivity tips. Now we’re back with a tips and tricks roundup; read on to see how your fellow readers stay focused at home. By far and away the most common technique deployed was carefully isolating work from home life. Carol writes: I love working from home and have done so for 6 years now. I have a routine just as if I was going to an office, except my commute is 12 steps. I get ready for work, grab my purse and smart phone and go up the steps to my office. I maintain a separate phone line and voice mail for work that I cannot answer from anywhere but my work desk. I use call forwarding when I travel so only my office phone number is published. I have VOIP phone service so I can forward calls from the internet if I forget, or need to change where the phone is forwarded. I do have a wired and wireless headset so I can go get a cold drink if on one of those long boring conference calls. I plan my ‘get off from work’ time and try to stick to it; as with any job some days I am late getting off, but it all works out. I make sure my office is for work only; any other computer play time is in a different part of the house on a different computer. My office has laptop, dock, couple of monitors, multipurpose printer, fax, scanner, file cabinets – just like the office at a company. I just also happen to have a couple of golden retrievers that come to work with me and usually lay quietly until 5, and yes, they know it is 5pm sometimes before I do. For me, one of the biggest concerns when working from home is not being unproductive, but the danger of never stopping work. You could keep going and going because, let’s face it, the company will let you do it, so I set myself up to prevent that and maintain a separateness.

    Read the article

  • Hopping/Tumbling Windows Could Introduce Latency.

    This is a pre-article to one I am going to be writing on adjusting an event’s time and duration to satisfy business process requirements, but it is one that I think is really useful when understanding the way that Hopping/Tumbling windows work within StreamInsight. A Tumbling window is just a special shortcut version of a Hopping window where the width of the window is equal to the size of the hop. Here is the simplest and most often used definition for a Hopping Window. You can find them all here

    public static CepWindowStream<CepWindow<TPayload>> HoppingWindow<TPayload>(
        this CepStream<TPayload> source,
        TimeSpan windowSize,
        TimeSpan hopSize,
        WindowInputPolicy inputPolicy,
        HoppingWindowOutputPolicy outputPolicy
    )

    And here is the definition for a Tumbling Window

    public static CepWindowStream<CepWindow<TPayload>> TumblingWindow<TPayload>(
        this CepStream<TPayload> source,
        TimeSpan windowSize,
        WindowInputPolicy inputPolicy,
        HoppingWindowOutputPolicy outputPolicy
    )

    These methods allow you to group events into windows of a temporal size. It is a really useful and simple feature in StreamInsight. One of the downsides though is that a window cannot be flushed until an event in a following window occurs. This means that you will potentially never see some events, or see them only with a delay. Let me explain. Remember that a stream is a potentially unbounded sequence of events. Events in StreamInsight are given a StartTime. It is this StartTime that is used to calculate into which temporal window an event falls. It is best practice to assign a timestamp from the source system and not one from the system clock on the processing server. StreamInsight cannot know when a window is over. It cannot tell whether you have received all events in the window or whether some events have been delayed, which means that StreamInsight cannot flush the stream for you. Imagine you have events with the following timestamps:

    12:10:10 PM
    12:10:20 PM
    12:10:35 PM
    12:10:45 PM
    11:59:59 PM

    And imagine that you have defined a 1 minute Tumbling Window over this stream using the following syntax:

    var HoppingStream = from shift in inputStream.TumblingWindow(TimeSpan.FromMinutes(1), HoppingWindowOutputPolicy.ClipToWindowEnd)
                        select new WindowCountPayload { CountInWindow = (Int32)shift.Count() };

    The events between 12:10:10 PM and 12:10:45 PM will not be seen until the event at 11:59:59 PM arrives. This could be a real problem if you need to react to windows promptly. This can always be worked around by using a different design pattern, but a lot of the examples I see assume there is a constant, very frequent stream of events, resulting in windows always being flushed. Further examples of using windowing in StreamInsight can be found here
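
    To make the window arithmetic concrete, here is a small self-contained C# sketch (no StreamInsight dependency; the floor-to-window-boundary logic is the point, not any StreamInsight API) that maps each of the timestamps above to the start of its 1 minute tumbling window:

    using System;
    using System.Globalization;

    class TumblingWindowDemo
    {
        // Floor an event's StartTime down to the boundary of the tumbling
        // window that contains it.
        static DateTime WindowStart(DateTime eventTime, TimeSpan windowSize)
        {
            return new DateTime(eventTime.Ticks - (eventTime.Ticks % windowSize.Ticks));
        }

        static void Main()
        {
            var windowSize = TimeSpan.FromMinutes(1);
            var timestamps = new[] { "12:10:10 PM", "12:10:20 PM", "12:10:35 PM", "12:10:45 PM", "11:59:59 PM" };

            foreach (var t in timestamps)
            {
                var eventTime = DateTime.ParseExact(t, "h:mm:ss tt", CultureInfo.InvariantCulture);
                Console.WriteLine("{0} -> window starting {1:h:mm:ss tt}", t, WindowStart(eventTime, windowSize));
            }
            // The first four events all land in the window starting at 12:10:00 PM.
            // Nothing about those events says the window is complete; only the
            // arrival of a later event (11:59:59 PM here) proves that it is.
        }
    }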

    Read the article

  • Why is everything crashing?

    - by Kopkins
    I've been using Ubuntu for a while now and I love it; I wouldn't think of using another OS unless I can't fix this issue. The install I'm on is only around a month and a half old. I'm running 12.04 64bit on an 8,1 MBP. Up until around 2 weeks ago everything was running smoothly. Around then, applications started crashing and weird things started happening. At first I thought it was just certain applications. The first thing to start giving me trouble was compiz. Occasionally compiz will stop decorating windows and lose many other functions. Running compiz --replace fixes this, but I don't feel like doing that once a day, which is usually what it takes. The other thing with this is that after running compiz --replace, my conky window gets lost somewhere, so I run killall conky && conky -c .conkyrc. But this isn't just a couple of applications; it seems like it is proliferating through my system. Last week fontforge started crashing on whatever task it was doing, so I was unable to finish what I was working on. Didn't find a fix. Today rhythmbox started crashing. Whenever I try to play anything, Rhythmbox becomes unresponsive and needs to be forced to close. When I try to do certain things with the disk utility, it crashes. I get the "Ubuntu has experienced an internal error" message much more often than I would like. Frequently applications stop appearing in the launcher; Wine almost never does anymore. After not being active for a little while, Thunderbird can only fetch my mail after I restart wireless with sudo rmmod b43 && sudo modprobe b43. Occasionally some of my startup apps don't start. What is my best option here? Could these be bugs? I don't want to submit a ton of vague bug reports. Reinstall? Switch OS? Thank you to anyone who responds. Kopkins

    Read the article

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don’t always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something an RDBMS was not originally intended to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us “lazy” in our design – we sometimes don’t take the time to learn another architecture because the one we’ve spent so much time with can handle what we want to do. But there’s a distinct danger here. In nature, when a population shares too many of the same traits, it can cause a complete collapse if a situation exploits a weakness shared by that population. The same is true of not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you’re in an architect role or not. So take some time today to learn something new. The way I do this is to select a given problem, and try to solve it with a technology I’m not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I’m not suggesting any of these architectures are the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the “little grey cells” and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • FREE EBus (ATG) webcast on Troubleshooting Invalid Objects

    - by cwarticki
    E-Business Suite Applications Technology Group (ATG) Advisor Webcast Program Invitation: Advisor Webcast December 2012. In December 2012 we have scheduled an Advisor Webcast where we want to give you a closer look into invalid objects in an E-Business Suite environment. E-Business Suite – Troubleshooting invalid objects. Agenda: Introduction · Activities that generate invalid objects · EBS Architecture · EBS Patching Concepts · Troubleshooting Invalid Objects · References. EMEA Session: Tuesday December 11th, 2012 at 09:00 AM UK / 10:00 AM CET / 13:30 India / 17:00 Japan / 18:00 Australia. Details & Registration: Note 1501696.1 (direct link to register in WebEx). US Session: Wednesday December 12th, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern. Details & Registration: Note 1501697.1 (direct link to register in WebEx). If you have any questions about the schedule, or if you have a suggestion for an Advisor Webcast to be planned in the future, please send an e-mail to Ruediger Ziegler.

    Read the article

  • Cloud Computing Business Benefits

    - by workflowman
    If you have been living under a rock for the past year, you might not have heard about cloud computing. Cloud computing is a loose term that describes anything that is hosted in data centers and accessed via the internet. It is normally associated with developers who draw clouds in diagrams indicating where services sit or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS) and, more recently, Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktop and on-premises servers to services and resources that are hosted in the cloud. Benefits of Cloud Computing There are clearly benefits in building applications using cloud computing, some of which are listed here: Zero up-front investment: Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services. The hardware team provisions servers and so forth under the requirements of the software team. Often the hardware team has a different budget that requires approval. Although hardware and software management are two separate disciplines, sometimes developers are given the task of estimating CPU cycles, disk space, and so forth, which ends up in underutilized servers. Usage-based costing: You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry. Potential for shrinking the processing time: If processes are split over multiple machines, parallel processing is performed, which decreases processing time. More office space: Walk into most offices, and it is guaranteed you will find a medium-sized room dedicated to servers. Efficient resource utilization: Resource utilization is handled by a centralized cloud administrator who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who would otherwise have to regularly monitor these servers. Just-in-time infrastructure: If your system is a success and needs to scale to meet demand, delays in provisioning can mean a slow-performing service. Cloud computing solves this because you can add more resources at any time. Lower environmental impact: If servers are centralized, an environmental initiative is potentially more likely to succeed. As an example, if servers are placed in sunny or windy parts of the world, then why not use these resources to power those servers? Lower costs: Unfortunately, this is one point that administrators will not like. If you have people administering your e-mail server and network along with support staff doing other cloud-based tasks, this workforce can be reduced. This saves costs, though it also reduces jobs.

    Read the article

  • Understanding the Value of SOA

    - by Mala Narasimharajan
    Written By: Debra Lilley, ACE Director, Fusion Applications. Again I want to talk from my area of expertise, Fusion Applications, and discuss their design fundamentals. If you look at the table below and start at the bottom: Oracle have defined all of the business objects, e.g. accounts, people, customers, invoices, etc., used by Fusion Applications; each of these objects contains all of the information required and can be expanded if necessary. Oracle have then created, for each of these business objects, every action that is needed for the applications, e.g. all the actions to create a new customer: checking to see if it exists, credit checking with D&B (Dun & Bradstreet < http://www.dnb.co.uk/> ), creating the record, notifying those required, etc. Each of these actions is a stand-alone web service. Again, you can create new actions or subscribe to an externally provided web service, e.g. the D&B check. The diagram also shows that all of the development of Fusion Applications is built on their Fusion Middleware offerings. Then the Intelligent Business Process is the order in which you run these actions; this is Service Orientated Architecture, SOA. Not only is SOA used to orchestrate actions within Fusion Applications, it is also used in the integration of Fusion Applications with the rest of the Oracle stable of applications such as EBS, PeopleSoft, JDE and Siebel. The other applications are written with proprietary development tools, so how do they work with SOA? It’s a very simple answer: with the introduction of the Oracle SOA platform, each process within these applications was made available to be called as a web service. I won’t go into technically how that is done, but what’s known as a wrapper was added to allow each of them to act in this way. Finally, at the top of the diagram are the questions that each Fusion Application process must answer, and this is the ‘special’ sauce that makes them so good, the User Experience, but that is a topic for another day, or you can read about it in my blog http://debrasoracle.blogspot.co.uk/2014/04/going-on-record-about-fusion-apps-cloud.html or Oracle’s own UX blog https://blogs.oracle.com/usableapps/ The concept behind AppAdvantage is not new; the idea that Oracle technology can add value to your Oracle applications investments is pretty fundamental. Nishit Rao, who is in the AppAdvantage team, provided myself and other ACE Directors with demo kits so that we could demonstrate SOA running with the applications. The example I learnt to build was that of the EBS inventory open interface. The simple concept is that request records can be added to a table and an import run that creates these as transactions in inventory. What SOA allows you to do is to add to the table from any source and then run this process automatically, whereas traditionally you had to run the process at regular intervals because you didn’t know whether the table was empty or not. This may just sound like a different way of doing the same thing, but if the process is critical for your business then the interval was very small and the process run potentially many times unnecessarily. Using SOA it only happens when necessary, without any delay. So in my post today I’ve talked about how SOA is used with Fusion Applications and in the linking with more traditional applications, but that is only the tip of the iceberg of potential: your applications are just part of your IT systems, and SOA can orchestrate your data across all of them; the beauty of open standards.
Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director.  Lilley has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts

    Read the article

  • IPv6, isn't it just a few extra bits?

    - by rclewis
    It's always an interesting task to try and explain what you do to family and friends. I have described IPv6 as the "Next Generation Internet" or "Second Internet", but the hollow expressions on my kids' faces scream for the instant relief of the latest video game. Never one to give up easily, I have formulated a new example - the Post Office... Similar to the Post Office, the Internet delivers mail and packages based on addresses. As the number of residences, businesses, and delivery locations increased, the 5 digit ZIP Code (Washington, DC 20005) was expanded to ZIP+4, allowing for more precise delivery points (Postmaster General, Washington, DC 20260-3100). Ah, if only computers were as simple. IPv6 isn't an add-on or expansion of the existing IPv4 addressing; it is a new addressing model which will allow the internet to grow from a single computer in the basement of a university or your parents' kitchen table to support the multitude of smart phones, smart TVs, tablets, DVRs, and disc players, all clamoring to connect for information. Unfortunately, there are only a finite number of IPv4 public addresses left, and those are being consumed at an ever increasing rate. Few people could have predicted the explosive growth of the internet or the shortage of IPv4 addresses we now face - but there is a "Plan B", and that is the vastly larger address space of IPv6. Many in the industry have labeled this a "business continuity" problem, when in fact most companies will be able to continue conducting business once they run out of existing IPv4 addresses. The problem is really a customer continuity problem: how will businesses communicate with existing customers and reach new customers online whose only option is to adopt IPv6 once IPv4 is depleted? Perhaps a first step is publishing a blog that is also accessible via IPv6; it's just a few extra bits. Join us for the Oracle OpenWorld 2012 session: Navigating IPv6 @ Oracle, Thursday, Oct. 4th, 2:15PM - 3:15PM, Palace Hotel - Concert. Learn more about IPv6 Technologies at Oracle
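
    Those "few extra bits" are the whole story: going from IPv4's 32-bit addresses to IPv6's 128-bit addresses multiplies the address space by 2^96. A throwaway C# snippet makes the scale easy to show (the arithmetic is the point here; the printed wording is my own):

    using System;
    using System.Numerics;

    class AddressSpace
    {
        static void Main()
        {
            BigInteger ipv4 = BigInteger.Pow(2, 32);   // 4,294,967,296 addresses
            BigInteger ipv6 = BigInteger.Pow(2, 128);  // roughly 3.4 x 10^38 addresses

            Console.WriteLine("IPv4 (32 bits):  {0:N0}", ipv4);
            Console.WriteLine("IPv6 (128 bits): {0:N0}", ipv6);
            Console.WriteLine("IPv6 is 2^96 (~{0:E1}) times larger", (double)BigInteger.Pow(2, 96));
        }
    }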

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all let me expose what I would like to do : I already dispose of a long-time running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text+pics). The app is running in production on a Windows Server 2003 and has a dedicated database server (SQL server 2008) also running Windows Server 2003. I myself am a quite experienced web developer, but never actually programmed anything non-web (or at least nothing serious). I plan on adding a functionality to the webapp for which I would need a jpg snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such, I already do it inside my webapp using ghostscript. I've only done it on 1 page documents for now though, and the new functionality will need to process bigger documents. In order for this process to be transparent aswell for the admins as the final users, I would like to implement some kind of queue to delay the processing of the thumbnails. There again, no problem to create the queue, it will consist of records in a table, with enough info to find the pdf document back. Then I will need to process this queue, and that's were my interrogations start. Obviously the best solution to process it isn't an ASP script or so, so I will have to get out of my known environment. No problem, but I have no idea which direction to go. Therefore, a few questions : What should I develop ? I presumably need something that is "standby" on the server, runs when needed, then returns to idle state until further notice.Should I be looking into Windows service ? Is there another more appropriate type of project ? Depending on the first answer, what will be the approach ? Should I have somehow SQL server "tell" the program/service/... to process the queue, or should I have that program/service/... periodically check the state of the queue and treat new items. In both case, which functionality can I use ? we're not talking about hundreds of PDF a day (max 50 maybe), I can totally afford to treat the queue 1 item at a time. Can you confirm I don't have to look much further on threads and so ? (I found a lot of answers talking about threads in queue treatment, but it looks quite overkill for my needs) Maybe linked to the previous question : what about concurrent call to the program, whatever it is ? Let's suppose it is currently running, and a new record comes in the queue, what should be the behaviour ? I don't need much detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have sqlserver on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so, but I can hardly afford to spend too much time learning stuff to finally realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of matters I need to investigate. Thanks for reading me, I hope I'll find some helping hands on here :-)

    Read the article

  • PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' undefined symbol: ZVAL_DELREF

    - by crmpicco
    I have an issue where I am unable to use JSON, which would appear to be because of the following error. There is another thread on this forum that touches on a similar issue, but it's not quite the same. I am using CentOS 5.6 and have the following pear packages installed:

    [crmpicco@eq-www-php53 ~]$ pear list
    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' - /usr/lib64/php/modules/json.so: undefined symbol: ZVAL_DELREF in Unknown on line 0
    Installed packages, channel pear.php.net:
    =========================================
    Package           Version State
    Archive_Tar       1.3.7   stable
    Auth_SASL         1.0.2   stable
    Console_Getopt    1.3.1   stable
    Image_Barcode     1.1.2   stable
    Mail              1.1.14  stable
    Net_SMTP          1.2.10  stable
    Net_Socket        1.0.8   stable
    PEAR              1.9.4   stable
    Structures_Graph  1.0.4   stable
    XML_RPC           1.5.4   stable
    XML_Util          1.2.1   stable
    json              1.2.1   stable

    and have the following PHP packages installed:

    [crmpicco@eq-www-php53 ~]$ yum list installed | grep php
    php.x86_64                  5.3.10-1.w5        installed
    php-cli.x86_64              5.3.10-1.w5        installed
    php-common.x86_64           5.3.10-1.w5        installed
    php-devel.x86_64            5.3.10-1.w5        installed
    php-gd.x86_64               5.3.10-1.w5        installed
    php-ldap.x86_64             5.3.10-1.w5        installed
    php-mcrypt.x86_64           5.3.10-1.w5        installed
    php-mysql.x86_64            5.3.10-1.w5        installed
    php-pdo.x86_64              5.3.10-1.w5        installed
    php-pear.noarch             1:1.9.4-1.w5       installed
    php-pear-Net-Socket.noarch  1.0.8-1.el5.centos installed
    php-soap.x86_64             5.3.10-1.w5        installed
    php-xml.x86_64              5.3.10-1.w5        installed

    The error:

    [crmpicco@eq-www-php53 ~]$ php -v
    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' - /usr/lib64/php/modules/json.so: undefined symbol: ZVAL_DELREF in Unknown on line 0
    PHP 5.3.10 (cli) (built: Feb 2 2012 23:23:12)
    Copyright (c) 1997-2012 The PHP Group
    Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies

    My repolist reads as:

    [crmpicco@eq-www-php53 ~]$ yum repolist
    Loaded plugins: changelog, fastestmirror
    Excluding Packages in global exclude list
    Finished
    repo id   repo name                                       status
    base      CentOS-5 - Base                                 3,548+43
    epel      Extra Packages for Enterprise Linux 5 - x86_64  6,815+156
    extras    CentOS-5 - Extras                               245+23
    rpmforge  Red Hat Enterprise 5 - RPMforge.net - dag       11,016+67
    updates   CentOS-5 - Updates                              233
    webtatic  Webtatic Repository 5 - x86_64                  211+183
    repolist: 22,068

    I am getting HTTP 500 errors everywhere that I use JSON, so my application is non-functional right now.

    Read the article

  • Plesk Postfix SMTP 550 5.7.1 "Command rejected" for one external sender

    - by Mnebuerquo
    My server is rejecting emails from one external sender. I suspect this might be a misconfiguration on the sending server, but I'm not sure from these error messages. The non-delivery report the sender gets contains this text: #5.7.1 smtp;550 5.7.1 Command rejected> #SMTP# I also see this message in /var/messages at about the same time as the rejection message was sent, though I'm not sure if it's actually related: Nov 29 12:29:28 localhost postfix/smtpd[31829]: sql_sqlite3 plugin: no result found I'm using Plesk 10.4.4 Update #47, CentOS 6.2, and Postfix 2.8.4-11100615 on my mail server. This is only happening for one sender so far, but I found a Google result on experts-exchange.com which seemed to identify the same problem, with the same sending domain. That was posted back in June and currently has no answers, so even if I were a paying customer it wouldn't be answered. (http://www.experts-exchange.com/Software/Server_Software/Email_Servers/Q_27760746.html) The generating server is bigfish.com. What I need to determine is whether this is a problem on my server or a problem with bigfish.com. Is there more information I can find in config files, logs, etc. to figure this out?

    Read the article

  • Zabbix Log Monitoring - Duplicate alerts

    - by ArunS
    I have configured Zabbix to monitor my JBoss server logs for errors, excluding some known errors. This setup is working, with one issue: Zabbix will send me alerts when there is a new "ERROR" entry in the log file, but sometimes I get multiple alerts for the same event. For example, I got 5 alerts with the same time stamp "2012-06-25 07:55:56,864 ERROR". The duplicate alert count is not constant; sometimes I get 2, sometimes 5 or 11. I checked Monitoring → Latest data in the GUI and found that there are no duplicate entries. I have given my log monitoring configuration below. I am using the latest version of Zabbix server (2.0).

    Item configuration:
    Description: Server Error Monitoring.
    Key: log["/SERVER/jboss/jboss-5/server/ps/log/server.log","ERROR",UTF-8,200,skip]
    Type: Zabbix Agent (Active)
    Type of information: Log
    Interval: 30

    Trigger configuration:
    Description: Found Error in Server Log.
    Expression: (({SERVER Error Monitoring - PS:log["/SERVER/jboss/jboss-5/server/ps/log/server.log","ERROR",UTF-8,200,skip].regexp("can not execute")})=0) & (({SERVER Error Monitoring - PS:log["/SERVER/jboss/jboss-5/server/ps/log/server.log","ERROR",UTF-8,200,skip].regexp("Unexpected redirect")})=0)
    Event generation: Normal + Multiple TRUE events

    Action configuration:
    Name: alert mail
    Event source: Trigger
    Enable escalations: Unchecked
    Default subject/message: Default
    Recovery message: Unchecked
    Action conditions: Trigger value = PROBLEM
    Action operations: Send message to User "Admin"

    Please help me fix this issue.

    Read the article

  • Fetching e-mails into Redmine via IMAP

    - by Danilo Bargen
    I'm trying to fetch e-mails into Redmine via IMAP. The e-mails I'm generating look like this:

    FooBar Ltd 123456
    http://example.com/Foobar-Ltd-123456.html
    Project: backend
    Tracker: Dataerror
    Beschreibung: This is the description
    ===========================
    CLIENT_IP: 192.168.1.215
    HTTP_USER_AGENT: mozilla/asdfjköl

    I try to fetch them into Redmine via this command:

    rake -f /var/www/projects/redmine/Rakefile redmine:email:receive_imap \
    RAILS_ENV="production" host=example.com port=993 ssl=true username=redmine \
    password=1234 project=myproject tracker=other \
    allow_override=project,tracker,category,priority \
    move_on_success=read move_on_failure=failed

    But the e-mails get moved into the failed folder. I had this setup running some time ago with a different e-mail generator but pretty much the same template, and I can't figure out why it's not working. The permissions seem to be OK too. In order to further debug this issue, I need some logfiles. Are there any logfiles written by this command? Or are there any other suggestions to solve this issue? My environment:

    danilo@jabba:/var/www/projects/redmine$ RAILS_ENV=production script/about
    About your application's environment
    Ruby version              1.8.7 (i486-linux)
    RubyGems version          1.3.5
    Rack version              1.0
    Rails version             2.3.5
    Active Record version     2.3.5
    Active Resource version   2.3.5
    Action Mailer version     2.3.5
    Active Support version    2.3.5
    Application root          /var/www/projects/redmine
    Environment               production
    Database adapter          mysql
    Database schema version   20100819172912

    Read the article

  • Office365 SPF record has too many lookups

    - by Sammitch
    For some utterly ridiculous administrative reasons we've got a split domain with one mailbox on Office365, which requires us to add include:outlook.com to our SPF record. The problem with this is that that rule alone requires nine DNS lookups out of the maximum of 10. Seriously, it's horrible. Just look at it:

    v=spf1 include:spf-a.outlook.com include:spf-b.outlook.com ip4:157.55.9.128/25 include:spfa.bigfish.com include:spfb.bigfish.com include:spfc.bigfish.com include:spf-a.hotmail.com include:_spf-ssg-b.microsoft.com include:_spf-ssg-c.microsoft.com ~all

    Given that we have our own large-ish mail system, we need to have rules for a, mx, include:_spf1.mydomain.com, and include:_spf2.mydomain.com, which puts us at 13 DNS lookups. That causes PERMERRORs with strict SPF validators, and completely unreliable/unpredictable validation with non-strict or badly implemented validators. Is it possible to somehow eliminate 3 of those include: rules from the bloated outlook.com record, but still cover the servers used by O365? Edit: Commenters have mentioned that we should simply use the shorter spf.protection.outlook.com record. While that is news to me, and it is shorter, it's only one record shorter:

    spf.protection.outlook.com include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com include:spf.messaging.microsoft.com include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com

    Edit² I suppose we can technically pare this down to:

    v=spf1 a mx include:_spf1.mydomain.com include:_spf2.mydomain.com include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com ~all

    but the potential issues I see with this are: We need to keep abreast of any changes to the parent spf.protection.outlook.com and spf.messaging.microsoft.com records. If anything is changed or [god forbid] added, we would have to manually update ours to reflect that. With our actual domain name the record's length is 260 chars, which would require 2 strings for the TXT record, and I honestly don't trust that all of the DNS clients and SPF resolvers out there will properly accept a TXT record longer than 255 bytes.

    Read the article

  • Nginx http_mp4_module seems installed but doesn't work

    - by Tahola
    I'm trying to use the http_mp4_module on my Ubuntu server, but it doesn't seem to work at all. When I check nginx -V I get:

    nginx version: nginx/1.1.19
    TLS SNI support enabled
    configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.1.19/debian/modules/chunkin-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/headers-more-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-development-kit --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-http-push --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-lua --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-progress --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-dav-ext-module

    --with-http_mp4_module and --with-http_flv_module are there. I also added to sites-available/domaine.conf:

    location ~ .mp4$ {
        mp4;
        mp4_buffer_size 4M;
        mp4_max_buffer_size 10M;
    }
    location ~ .flv$ {
        flv;
    }

    and nginx restarted without error. Everything seems OK, but when I check my URLs, myvideo.mp4?start=60 returns a 404 error (which I think is normal) and video.mp4?starttime=60 returns the video, but whatever the starttime number is I get the full video from the beginning. Did I miss something?

    Read the article

  • SBS 2008 BPA Warnings After Migration From SBS 2003

    - by Nicholas Piasecki
    We just finished a we-know-just-enough-to-be-dangerous migration from SBS 2003 to SBS 2008, and things seem to have gone relatively smoothly. After running the SBS 2008 Best Practices Analyzer on the destination server, we've got three warning messages, and I can't tell if they're important or not. First, the easy one: SMTP Port (TCP 25 Status): The Edgetransport.exe process should listen on SMTP port 25, but that port is owned by the process. I don't think that this one is a big deal--e-mail is flowing through the SMTP connector. Since there are two spaces between "the" and "process," I'm assuming that for some reason BPA just couldn't figure out the owning process name and this is just some sloppy programming when displaying the message. (Indeed, on subsequent runs of the BPA this message goes away, and other times it comes back.) Now, two more scary sounding ones: No DNS name server records: There are no DNS name server (NS) resource records in the _msdcs sub-domain in the forward lookup zone for Windows SBS 2008. and, similarly, No DNS name server records: There are no DNS name server (NS) resource records in the _msdcs zone for Windows SBS 2008. Now for these two, everything appears to be functioning correctly--but I'm assuming this is a weird state as a result of the SBS 2003 to 2008 migration. Can anyone provide any pointers on how to fix it, or whether or not it can be safely ignored? Thanks!

    Read the article
