Search Results

Search found 5628 results on 226 pages for 'extension'.


  • Implementation of a text editor in shell programming

    - by Arka Ghosh
    I have written a text editor in C. When I change the file's extension from .c to .sh and try to build it in the terminal, errors are shown: an "external" error is reported for the global variables, and errors are also shown for the functions I have declared. Please help me solve this. Here is my code:

    /* Simple menu-driven text editor: create, display, append to, and delete files. */
    #include <stdio.h>
    #include <stdlib.h>

    int i, j, ec, fg, ec2;
    char fn[20], e, c, d;
    FILE *fp1, *fp2, *fp;

    void Create();
    void Append();
    void Delete();
    void Display();

    int main()
    {
        do {
            printf("\n\t\t** TEXT EDITOR *");
            printf("\n\n\tMENU:\n\t..\n ");
            printf("\n\t1.CREATE\n\t2.DISPLAY\n\t3.APPEND\n\t4.DELETE\n\t5.EXIT\n");
            printf("\n\tEnter your choice: ");
            scanf("%d", &ec);
            switch (ec) {
                case 1: Create();  break;
                case 2: Display(); break;
                case 3: Append();  break;
                case 4: Delete();  break;
                case 5: exit(1);
            }
        } while (1);
    }

    /* Read text into temp.txt until '.' is typed, then copy it to a named file. */
    void Create()
    {
        fp1 = fopen("temp.txt", "w");
        printf("\n\tEnter the text and press '.' to save\n\n\t");
        while (1) {
            c = getchar();
            fputc(c, fp1);
            if (c == '.') {
                fclose(fp1);
                printf("\n\tEnter the new filename: ");
                scanf("%s", fn);
                fp1 = fopen("temp.txt", "r");
                fp2 = fopen(fn, "w");
                while (!feof(fp1)) {
                    c = getc(fp1);
                    putc(c, fp2);
                }
                fclose(fp2);
                break;
            }
        }
    }

    /* Print the contents of a named file. */
    void Display()
    {
        printf("\n\tEnter the file name: ");
        scanf("%s", fn);
        fp1 = fopen(fn, "r");
        if (fp1 == NULL) {
            printf("\n\tFile not found!");
            goto end1;
        }
        while (!feof(fp1)) {
            c = getc(fp1);
            printf("%c", c);
        }
    end1:
        fclose(fp1);
        printf("\n\n\tPress any key to continue..");
    }

    /* Remove a named file from disk. */
    void Delete()
    {
        printf("\n\tEnter the file name: ");
        scanf("%s", fn);
        fp1 = fopen(fn, "r");
        if (fp1 == NULL) {
            printf("\n\tFile not found!");
            goto end2;
        }
        fclose(fp1);
        if (remove(fn) == 0) {
            printf("\n\n\tFile has been deleted successfully!");
            goto end2;
        } else
            printf("\n\tError!\n");
    end2:
        printf("\n\n\tPress any key to continue..");
        getchar();
    }

    /* Show a file, then append typed text to it until Ctrl+S (ASCII 19) is pressed. */
    void Append()
    {
        printf("\n\tEnter the file name: ");
        scanf("%s", fn);
        fp1 = fopen(fn, "r");
        if (fp1 == NULL) {
            printf("\n\tFile not found!");
            goto end3;
        }
        while (!feof(fp1)) {
            c = getc(fp1);
            printf("%c", c);
        }
        fclose(fp1);
        printf("\n\tType the text and press 'Ctrl+s' to append.\n");
        fp1 = fopen(fn, "a");
        while (1) {
            c = getchar();
            if (c == 19)
                goto end3;
            if (c == 13) {
                d = '\n';
                fputc(d, fp1);
            } else {
                fputc(c, fp1);
            }
        }
    end3:
        fclose(fp1);
    }


  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post summarizes the four announcements Larry Ellison made today during the opening session of Oracle OpenWorld 2012... To learn what's behind X3H2M2 you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement. Oracle IaaS goes Public... and Private... Building on Fusion development started in 2004, Oracle Cloud was launched last year to provide not only SaaS applications, based on standards, but also the underlying PaaS required to build the specifics and the interconnections between applications, inside and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, on either SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (such as banks, or other sensitive industries) who would like to benefit from the "as a Service" value of the Cloud but don't want their data out in the Cloud, we propose to operate the same systems that provide our Public Cloud infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model. Oracle 12c Multitenant Database This is another major announcement made today about what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of memory needed on the server node, which is often THE limiting factor in consolidation. The principle is comparable to Solaris Zones: you have a Database Container, which "owns" the memory and the database background processes, and "Pluggable" Databases inside that Database Container. This feature is a compelling reason to evaluate Oracle Database 12c as soon as it becomes available, as it is a major step forward toward true database consolidation with multitenancy on a shared (optimized) infrastructure. X3H2M2, enabling the new Exadata X3 in-Memory Database Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. This is of course the major software enhancement of the new X3 Exadata machine, but because it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. And that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, with more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge), the memory, now 512GB per node, and the new Flash Fire cards, bringing up to 22 TB of Flash cache. All of this - 4TB of RAM plus 22TB of Flash - is used cleverly by the X3H2M2 algorithm, not only for reads but also for writes, making a very big difference compared to a traditional flash extension of storage. And what do those extra performances bring you on an already very efficient system? Double the performance of the fastest storage array on the market today (including flash) while dividing your storage price by 10 at the same time... Something to consider closely these days...
    Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, this was again a major opening for the year, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us, and Andrew Mendelsohn, Senior Vice President of Database Server Technologies, came on stage to explain that the next step after I/O optimization for the database with Exadata is to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... the big theme of the day, and of the Oracle User Groups conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data - one on finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to convey the size of Oracle OpenWorld... and you can understand why, with such rich content... and this was only the first day!


  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that, I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic crud operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though. 
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.


  • Clone a VirtualBox Machine

    I just installed VirtualBox, which I want to try out based on recommendations from peers for running a server from within my Windows 7 x64 OS.  I've never used VirtualBox, so I'm certainly no expert at it, but I did want to share my experience with it thus far.  Specifically, my intention is to create a couple of virtual machines.  One I intend to use as a build server, for which a virtual machine makes sense because I can easily move it around as needed if there are hardware issues (it's worth noting my need for setting up a build server at the moment is a result of a disk failure on the old build server).  The other VM I want to set up will act as a proxy server for the issue tracking system we're using at Code Project, Axosoft OnTime.  They have a Remote Server application for this purpose, and since the OnTime install is 300 miles away from my location, the Remote Server should speed up my use of the OnTime client by limiting the chattiness with the database (at least, that's the hope). So, I need two VMs, and I'm lazy.  I don't want to have to install the OS and such twice.  No problem, it should be simple to clone a VirtualBox machine, or clone a VirtualBox hard drive, right?  Well, unfortunately, if you look at the UI for VirtualBox, there's no such command.  You're left wondering "How do I clone a VirtualBox machine?" or the slightly related "How do I clone a VirtualBox hard drive?" If you've used VirtualPC, then you know that it's actually pretty easy to copy and move around those VMs.  Not quite so easy with VirtualBox.  Finding the files is easy: they're located in your user folder within the .VirtualBox folder (possibly within a HardDisks folder).  The disks have a .vdi extension and will be pretty large if you've installed anything.  The one shown here has just Windows Server 2008 R2 installed on it, nothing else. If you copy the .vdi file and rename it, you can use the Virtual Media Manager to view it, and you can create a new machine and choose the new drive to attach to.  Unfortunately, if you simply make a copy of the drive, this won't work and you'll get an error that says something to the effect of: Cannot register the hard disk PATH with UUID {id goes here} because a hard disk PATH2 with UUID {same id goes here} already exists in the media registry (PATH to XML file). There are command line tools you can use to do this in a way that avoids this error.  Specifically, the c:\Program Files\Sun\VirtualBox\VBoxManage.exe program is used for all command line access to VirtualBox, and to copy a virtual disk (.vdi file) you would call something like this: VBoxManage clonehd Disk1.vdi Disk1_Copy.vdi However, in my case this didn't work.  I got basically the same error I showed above, along with some debug information for line 628 of VBoxManageDisk.cpp.  As my main task was not to debug the C++ code used to write VirtualBox, I continued looking for a simple way to clone a virtual drive.  I found it in this blog post. The Secret setvdiuuid Command VBoxManage has a whole bunch of commands you can use with it; just pass it /? to see the list.  However, it also has a special command called internalcommands that opens up access to even more commands.  The one that's interesting for us here is the setvdiuuid command.  By calling this command and passing in the file path to your .vdi file, it will reset the UUID to a new (random, apparently) UUID.  This then allows the Virtual Media Manager to cope with the file, and lets you set up new machines that reference the newly UUID'd virtual drive.
    The full command line would be: VBoxManage internalcommands setvdiuuid MyCopy.vdi The following screenshot shows the error when trying clonehd, as well as the successful use of setvdiuuid. Summary Now that I can clone machines easily, it's a simple matter to set up base builds of any OS I might need and then fork from there as needed.  Hopefully the GUI for VirtualBox will be improved to include better support for copying machines/disks, as this is, I'm sure, a very common scenario.
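    If you need to repeat that copy-and-reset-UUID workflow often, it can be scripted. The following is only a rough sketch, not part of the original post: it assumes VBoxManage is on the PATH, that the source disk is not attached to a running VM, and the file names are placeholders.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class CloneVdi {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path source = Paths.get("Disk1.vdi");        // existing virtual disk
            Path copy   = Paths.get("Disk1_Copy.vdi");   // new disk for the cloned machine

            // Plain file copy; at this point both files still share the same UUID.
            Files.copy(source, copy, StandardCopyOption.REPLACE_EXISTING);

            // Give the copy a fresh UUID so the Virtual Media Manager will accept it.
            Process p = new ProcessBuilder("VBoxManage", "internalcommands", "setvdiuuid",
                    copy.toString())
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) {
                throw new IOException("VBoxManage setvdiuuid failed");
            }
        }
    }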


  • Application Composer: Exposing Your Customizations in BI Analytics and Reporting

    - by Richard Bingham
    Introduction This article explains in simple terms how to ensure the customizations and extensions you have made to your Fusion Applications are available for use in reporting and analytics. It also includes four embedded demo videos from our YouTube channel (if they don't appear, check the browser address bar for a blocking shield icon). If you are new to Business Intelligence, consider first reviewing our getting started article, and you can read more about the topic of custom subject areas in the documentation book Extending Sales. There are essentially four sections to this post. First we look at how custom fields added to standard objects are made available for reporting. Secondly we look at creating custom subject areas on the standard objects. Next we consider reporting on custom objects, starting with simple standalone objects, then child custom objects, and finally custom objects with relationships. Finally this article reviews how flexfields are exposed for reporting. Whilst this article applies to both Cloud/SaaS and on-premises deployments, if you are an on-premises developer then you can also use the BI Administration Tool to customize your BI metadata repository (the RPD) and create new subject areas. Whilst this is not covered here, you can read more in Chapter 8 of the Extensibility Guide for Developers. Custom Fields on Standard Objects If you add a custom field to your standard object then it's likely you'll want to include it in your reports. This is very simple, since all new fields are instantly available in the "[objectName] Extension" folder in existing subject areas. The following two-minute video demonstrates this. Custom Subject Areas for Standard Objects You can create your own subject areas for use in analytics and reporting via Application Composer. An example use case could be to simplify the seeded subject areas, since they sometimes contain complex data fields and internal values that could confuse business users. One thing to note is that you cannot create subject areas in a sandbox, as this is not supported by BI, so once your custom object is tested and complete you'll need to publish the sandbox before moving forwards. The subject area creation process is essentially two-fold. First, once the request is submitted, the ADF artifacts are generated; secondly, the related metadata is sent to the BI presentation server APIs to make the updates there. Note that this second step may take up to ten minutes to complete. Once finished, the status of the custom subject area request should show as 'OK' and it is then ready for use. Within the creation process's wizard-like steps there are three concepts worth highlighting: Date Flattening - this feature permits the roll-up of reports at various date levels, such as data by week, month, quarter, or year. You simply check the box to enable it for that date field. Measures - these are your own functions that you can build into the custom subject area. They are related to the field data type and include min-max for dates, and sum(), avg(), and count() for numeric fields. Implicit Facts - used to make the BI metadata join between your object fields and the calculated measure fields. The advice is to choose the most frequently used measure to ensure consistency. This video shows a simple example, where a simplified subject area is created for the customer 'Contact' standard object, picking just a few fields upon which users can then create reports.
    Custom Objects Custom subject areas support three types of custom objects. First is a simple standalone custom object, for which the same process mentioned above applies. The next is a custom child object created on a standard object parent, and finally a custom object that is related to a parent object - usually through a dynamic choice list. Whilst the steps in each of these last two are mostly the same, there are differences in the way you choose the objects and their fields. This is illustrated in the videos below. The first video shows the process for creating a custom subject area for a simple standalone custom object. The second video demonstrates how to create custom subject areas for custom objects that are of parent:child type, as well as those with dynamic-choice-list relationships. Flexfields Dynamic and Extensible Flexfields satisfy a similar requirement to custom fields (for Application Composer), with flexfields common across the Fusion Financials, Supply Chain and Procurement, and HCM applications. The basic principle is that when you enable and configure your flexfields, the edit page has a BI Enabled check box under each segment region (for both global and context segments). Once this is checked and you've completed your configuration, you run the Scheduled Process job named 'Import Oracle Fusion Data Extensions for Transactional Business Intelligence' to generate and migrate the related BI artifacts and data. This applies for dynamic, key, and extensible flexfields. Of course there is more to consider in terms of how you wish your flexfields to be implemented and exposed in your reports, and details are given in Chapter 4 of the Extending Applications guide.


  • Is IE9 really good?

    - by anirudha
    Microsoft started a campaign to kill IE6 at the core because they know IE6 is a big problem for promoting version 9 of IE. Next time they will kill IE7, 8, and 9, whenever they find that the old version is a big problem for promoting the next version of IE. Why not make an update system that automatically updates the browser and simply tells the user to restart once the update is installed on their system? IE should learn from all the others, which have well-designed auto-update systems that never leave the user stuck with an old browser: Chrome and Firefox both update themselves and just ask the user to restart to enjoy the next good version. A big problem with IE6 is updates themselves: no one is sure they can install a new version without hassles, because of the "you need this to install this, and this for that" dance, so they think, "why update IE when I am not sure the update will go through and I have no problem right now?" So they do nothing, because their work gets done without problems - the average person uses high-profile applications that work even in IE6. The IE6 countdown website provides a banner to warn or push users to upgrade to the next version of IE, but there is no good reason to put that banner on a website. Some of the reasons: Windows 7 comes with IE8 pre-installed and Vista ships with a newer version than IE6, so the banner really targets users on Windows XP (Luna) - and if they want to upgrade IE they can only get IE8, not version 9, because IE9 is designed for Windows 7 or Vista Service Pack 2. So what is the use of upgrading when the user still ends up with an outdated version? IE8 is old and has no HTML5 capability, so forcing users with the banner makes no sense. I do not know why all the sites listed there put the banner on their own pages. It would be better to offer users what they want instead of giving them an outdated version of IE again - I mean, give users a list of browsers they can try to enhance their browsing experience, instead of only IE. IE9 is built upon WPF, and they spent more time on using WPF in IE than on making the browser a good user experience. Many things are designed wrongly in IE, the first being tabs: the tabs in Chrome are bigger and easy to move, and the same goes for Firefox, even if its tabbing is not as smooth. IE has the same tabbing as Chrome but misses one point - the tabs are too small, and if you really want to move one, it sometimes ends up outside the current instance of IE. Chrome has big buttons, tabs and menus to enhance the browsing experience, and Firefox has a nice feature that lets you make them bigger or smaller; you can put add-on icons on the toolbar for easy access. IE has nothing to do with customization, so we cannot even think about that. While Chrome provides lots of extensions and a web store for browser applications, and the same can be seen in Firefox, there are no plugins for IE. Really - look at their IE add-ons website, where no plugins are listed for web development, not even in the category or tag. In response, many blogs point to the news for developers of the new IE9 developer tools; a blogger wrote that IE9 has three new tabs. When I tried them I found many things, but I am still unable to edit the CSS from the HTML tab, and I found no plugin I could get to enhance IE9 web development.
    There is more that other browsers provide and IE9 never gives me: personas, customization, browser extensions, and the other small customizations they like to talk about. IE9 also still has a problem with JavaScript: when I use Firefox and Chrome and log out in both, my cookie is deleted, but in IE it is not. It shows me that IE9 is still different from the others, and not in a good way - in some bad ways too. When I tried to read an article written in Hindi using a Unicode font, I found that many things were shown misspelled: there are three "sha" letters in Hindi, and they all come out wrong in IE. The misprint is not that the writing in the article is wrong; it is a problem of the browser rendering the font. Firefox and Chrome do not give me this problem, and even Opera, which renders the font in an italic style with a smaller size, gets it all perfect. At Pwn2Own, both Apple's Safari and IE9 were hacked. That is awesome news for those who think that open source loses on security and closed source is highly secure software; well, that is not a good parameter for judging software - it should depend on how much an application is tested and used, because more testing and more use make an application better. I appreciate IE making their new version 9, and good luck to them; it is just another matter that I personally found nothing in it for me.


  • Getting Started with ADF Mobile Sample Apps

    - by Denis T
    Getting Started with ADF Mobile Sample Apps

    Installation Steps
    1. Install JDeveloper 11.1.2.3.0 from Oracle Technology Network.
    2. After installing JDeveloper, go to the Help menu, select "Check For Updates", find the ADF Mobile extension, and install it. This will require you to restart JDeveloper.
    3. For iOS development, be on a Mac and have Xcode installed. (Currently only Xcode 4.4 is officially supported; Xcode 4.5 support is coming soon.)
    4. For Android development, have the Android SDK installed.
    5. In the JDeveloper Tools menu, select "Preferences". In the Preferences dialog, select ADF Mobile. You can expand it to configure your platform preferences, such as the location of Xcode and the Android SDK.
    6. In your /jdeveloper/jdev/extensions/oracle.adf.mobile/Samples folder you will find a PublicSamples.zip. Unzip this into the Samples folder so you have all the projects ready to go.
    7. Open each sample application's .jws file to open the corresponding workspace. Then, from the "Application" menu, select "Deploy" and select the deployment profile for the platform you wish to deploy to. Try deploying to the simulator/emulator on each platform first, because it won't require signing. Note: if you wish to deploy to the Android emulator, it must be running before you start the deployment.

    Sample Application Details (in recommended order of use)
    1. HelloWorld - The "hello world" application for ADF Mobile, which demonstrates the basic structure of the framework. This basic application has a single application feature that is implemented with a local HTML file. Use this application to ascertain that the development environment is set up correctly to compile and deploy an application. See also Section 4.2.2, "What Happens When You Create an ADF Mobile Application."
    2. CompGallery - This application is meant to be a runtime application and not necessarily to review the code, though that is available. It serves as an introduction to the ADF Mobile AMX UI components by demonstrating all of these components. Using this application, you can change the attributes of these components at runtime and see the effects of those changes in real time without recompiling and redeploying the application after each change. See generally Chapter 8, "Creating ADF Mobile AMX User Interface."
    3. LayoutDemo - This application demonstrates the user interface layout and shows how to create the various list and button styles that are commonly used in mobile applications. It also demonstrates how to create the action sheet style of a popup component and how to use various chart and gauge components. See Section 8.3, "Creating and Using UI Components" and Section 8.5, "Providing Data Visualization." Note: this application must be opened from the Samples directory, or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor and then selected again.
    4. JavaDemo - This application demonstrates how to bind the user interface to Java beans (a generic sketch of such a bean follows this list). It also demonstrates how to invoke EL bindings from the Java layer using the supplied utility classes. See also Section 8.10, "Using Event Listeners" and Section 9.2, "Understanding EL Support."
    5. Navigation - This application demonstrates the various navigation techniques in ADF Mobile, including bounded task flows and routers. It also demonstrates the various page transitions. See also Section 7.2, "Creating Task Flows." Note: this application must be opened from the Samples directory, or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor and then selected again.
    6. LifecycleEvents - This application implements lifecycle event handlers on the ADF Mobile application itself and its embedded application features. It shows you where to insert code to enable the applications to perform their own logic at certain points in the lifecycle. See also Section 5.6, "About Lifecycle Event Listeners." Note: on iOS, the LifecycleEvents sample application logs data to the Console application (Applications > Utilities > Console).
    7. DeviceDemo - This application shows you how to use the DeviceFeatures data control to expose such device features as geolocation, e-mail, SMS, and contacts, as well as how to query the device for its properties. See also Section 9.5, "Using the DeviceFeatures Data Control." Note: you must run this application on an actual device, because SMS and some of the device properties do not function on an iOS simulator or Android emulator.
    8. GestureDemo - This application demonstrates how gestures can be implemented and used in ADF Mobile applications. See also Section 8.4, "Enabling Gestures."
    9. StockTracker - This application demonstrates how data change events use Java to enable data changes to be reflected in the user interface. It also has a variety of layout use cases, gestures and basic mobile patterns. See also Section 9.7, "Data Change Events."
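    For readers new to the bean-binding approach used by JavaDemo, here is a generic sketch (not the shipped sample source) of the kind of bean such an application binds to: a plain JavaBean exposing a property through getter/setter methods and firing change notifications so that bound UI components refresh. ADF Mobile supplies its own change-support utility classes; the standard java.beans types and the class name below are used purely for illustration.

    package mobile.demo;   // hypothetical package

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    public class StockBean {
        private final PropertyChangeSupport changeSupport = new PropertyChangeSupport(this);
        private double price;

        public double getPrice() {
            return price;
        }

        public void setPrice(double newPrice) {
            double oldPrice = this.price;
            this.price = newPrice;
            // Notify bound UI components that the "price" property changed.
            changeSupport.firePropertyChange("price", oldPrice, newPrice);
        }

        public void addPropertyChangeListener(PropertyChangeListener listener) {
            changeSupport.addPropertyChangeListener(listener);
        }

        public void removePropertyChangeListener(PropertyChangeListener listener) {
            changeSupport.removePropertyChangeListener(listener);
        }
    }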


  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. This article has proved pretty popular, so as a follow-up I wanted to cover another "Adaptive" feature of your ADF applications: the ability to make multiple different connections from an Application Module, at runtime. Now, I'm sure you'll be aware that if you define your application to use a data source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data source after deployment to point to a different database. So that's great, but the reality of that is that this single connection is effectively fixed within the application, right?  Well no; this, it turns out, is a common misconception. To be clear, yes, a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections.  In fact this has been possible for a long time using a custom extension point with code that extends oracle.jbo.http.HttpSessionCookieFactory. This approach, however, involves writing code, and no-one likes to write any more code than they need to, so is there an easier way? Yes indeed.  It is in fact a little-publicized feature that's available in all versions of 11g: the ELEnvInfoProvider. What Does it Do?  The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your Application Module configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can set it directly in the bc4j.xcfg (see below for an example). Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a datasource, you can plug in an EL expression for the connection to use.  So what's the benefit of that? Well, it allows you to defer the selection of a connection until the point in time that you instantiate the AM. To define the expression itself you'll need to do a couple of things: First of all you'll need a managed bean of some sort – e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection (a minimal sketch of such a bean appears at the end of this post). Now this connection itself needs to be defined in your Application Server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] here on how to configure connections at runtime if you're not familiar with this.)   The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression.
    So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier):

    <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">
      <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">
        <AppModuleConfig DeployPlatform="LOCAL"
                         JDBCName="${connectionManager.connection}"
                         jbo.project="oracle.demo.model.Model"
                         name="TargetAppModuleLocal"
                         ApplicationName="oracle.demo.model.TargetAppModule">
          <AM-Pooling jbo.doconnectionpooling="true"/>
          <Database jbo.locking.mode="optimistic"/>
          <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>
          <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/>
        </AppModuleConfig>
      </AppModuleConfigBag>
    </BC4JConfig>

    Still Don't Quite Get It? So far you might be thinking: well, that's fine, but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? Well, a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account. However, think about it for a second.  Under what circumstances does a new AM get instantiated? At the first reference of the AM within the application, yes, but also whenever a Task Flow is entered - if the data control scope for the Task Flow is ISOLATED.  So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently. Hopefully you'll find this feature useful, let me know...
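    To make the managed-bean step concrete, here is a minimal sketch of the bean behind the ${connectionManager.connection} expression; it would be registered as a session-scoped managed bean in the ViewController project. The class name, package, decision logic, and connection names below are all placeholders, not part of the original article.

    package oracle.demo.view;   // hypothetical ViewController package

    import javax.faces.context.FacesContext;

    public class ConnectionManager {

        // Returns the name of the connection the Application Module instance should
        // use; the value is read at the moment the AM is first referenced.
        public String getConnection() {
            // Illustrative decision logic only: pick a connection per authenticated user.
            String user = FacesContext.getCurrentInstance()
                    .getExternalContext()
                    .getRemoteUser();
            if (user != null && user.startsWith("emea")) {
                return "EmeaConnection";      // hypothetical connection defined on the server
            }
            return "DefaultConnection";       // hypothetical connection defined on the server
        }
    }

    The only contract that matters is that the getter returns the name of a connection that has already been defined on the application server.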


  • Web optimization

    - by hmloo
    1. CSS Optimization Organize your CSS code Good CSS organization helps with the future maintainability of the site; it helps you and your team members understand the CSS more quickly and jump to specific styles. Structure CSS code For a small project, you can break your CSS code into separate blocks according to the structure of the page or the page content. For example, you can break your CSS document up according to the content of your web page (e.g. Header, Main Content, Footer). Structure CSS files For a large project, you may find you have too much CSS code in one place, so it's best to structure your CSS into several CSS files and use a master style sheet to import these style sheets. This solution not only organizes the style structure but also reduces server requests. /*--------------Master style sheet--------------*/ @import "Reset.css"; @import "Structure.css"; @import "Typography.css"; @import "Forms.css"; Create an index for your CSS Another important thing is to create an index at the beginning of your CSS file; an index helps you quickly understand the whole CSS structure. /*---------------------------------------- 1. Header 2. Navigation 3. Main Content 4. Sidebar 5. Footer ------------------------------------------*/ Writing efficient CSS selectors Keep in mind that browsers match CSS selectors from right to left, and the order of efficiency for selectors is: 1. id (#myid) 2. class (.myclass) 3. tag (div, h1, p) 4. adjacent sibling (h1 + p) 5. child (ul > li) 6. descendant (li a) 7. universal (*) 8. attribute (a[rel="external"]) 9. pseudo-class and pseudo-element (a:hover, li:first). The rightmost selector is called the "key selector", so when you write your CSS code you should choose a more efficient key selector. Here are some best practices: Don't tag-qualify Never do this: div#myid div.myclass .myclass#myid IDs are unique, and classes are more specific than a tag, so they don't need a tag qualifier; adding one makes the selector less efficient. Avoid overqualifying selectors For example, #nav a is more efficient than ul#nav li a. Don't repeat declarations Example: body {font-size:12px;} h1 {font-size:12px;font-weight:bold;} Since h1 already inherits the font-size from body, you don't need to repeat the attribute. Use 0 instead of 0px Always use #selector { margin: 0; } There's no need to include the px after 0, and removing all those superfluous px suffixes can reduce the size of your CSS file. Group declarations Example: h1 { font-size: 16pt; } h1 { color: #fff; } h1 { font-family: Arial, sans-serif; } It's much better to combine them: h1 { font-size: 16pt; color: #fff; font-family: Arial, sans-serif; } Group selectors Example: h1 { color: #fff; font-family: Arial, sans-serif; } h2 { color: #fff; font-family: Arial, sans-serif; } It would be much better set up as: h1, h2 { color: #fff; font-family: Arial, sans-serif; } Group attributes Example: h1 { color: #fff; font-family: Arial, sans-serif; } h2 { color: #fff; font-family: Arial, sans-serif; font-size: 16pt; } You can set different rules for specific elements after setting a rule for a group: h1, h2 { color: #fff; font-family: Arial, sans-serif; } h2 { font-size: 16pt; } Use shorthand properties Example: #selector { margin-top: 8px; margin-right: 4px; margin-bottom: 8px; margin-left: 4px; } Better: #selector { margin: 8px 4px 8px 4px; } Best: #selector { margin: 8px 4px; } A good diagram illustrates how shorthand declarations are interpreted depending on how many values are specified for the margin and padding properties.
    Instead of using: #selector { background-image: url("logo.png"); background-position: top left; background-repeat: no-repeat; } use: #selector { background: url(logo.png) no-repeat top left; } 2. Image Optimization Image Optimizer Image Optimizer is a free Visual Studio 2010 extension that optimizes PNG, GIF and JPG file sizes without quality loss. It uses SmushIt and PunyPNG for the optimization. Just right-click any folder or image in Solution Explorer and choose Optimize Images; it will then automatically optimize all PNG, GIF and JPEG files in that folder. CSS Image Sprites CSS image sprites are a way to combine a collection of images into a single image and then use the CSS background-position property to shift the visible area to show the required image. Many individual images can take a long time to load and generate multiple server requests, so an image sprite can reduce the number of server requests and improve site performance. You can use many online tools to generate your image sprite and its CSS, and you can also try the Sprite and Image Optimization framework released by the ASP.NET team.


  • Asterisk SIP digest authentication username mismatch

    - by Matt
    I have an asterisk system that I'm attempting to get to work as a backup for our 3com system. We already use it for a conference bridge. Our phones are the 3com 3C10402B, so I don't have the issue of older 3com phones that come without a SIP image. The 3com phones are communicating SIP with the Asterisk, but are unable to register because they present a digest username value that doesn't match what Asterisk thinks it should. As an example, here are the relevant lines from a successful registration from a soft phone: Server sends: WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="1cac3853" Phone responds: Authorization: Digest username="2321", realm="asterisk", nonce="1cac3853", uri="sip:192.168.254.12", algorithm=md5, response="d32df9ec719817282460e7c2625b6120" For the 3com phone, those same lines look like this (and fails): Server sends: WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33" Phone responds: Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81" Basically, Asterisk wants to see a username in the Digest username field of 2321, but the 3com phone is sending sip:[email protected]. Anyone know how to tell asterisk to accept this format of username in the digest authentication? Here is the sip.conf info for that extension: [2321] deny=0.0.0.0/0.0.0.0 disallow=all type=friend secret=1234 qualify=yes port=5060 permit=0.0.0.0/0.0.0.0 nat=yes mailbox=2321@device host=dynamic dtmfmode=rfc2833 dial=SIP/2321 context=from-internal canreinvite=no callerid=device <2321 allow=ulaw, alaw call-limit=50 ... and for those interested in the grit, here is the debug output of the registration attempt: REGISTER sip:192.168.254.12 SIP/2.0 v: SIP/2.0/UDP 192.168.254.157:5060 t: f: i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18580 REGISTER Max-Forwards: 70 m: ;dt=544 Expires: 3600 User-Agent: 3Com-SIP-Phone/V8.0.1.3 X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0; --- (11 headers 0 lines) --- Using latest REGISTER request as basis request Sending to 192.168.254.157 : 5060 (no NAT) SIP/2.0 100 Trying Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157 From: To: Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18580 REGISTER User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces Contact: Content-Length: 0 SIP/2.0 401 Unauthorized Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157 From: To: ;tag=as3fb867e2 Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18580 REGISTER User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33" Content-Length: 0 Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER) confbridge*CLI REGISTER sip:192.168.254.12 SIP/2.0 v: SIP/2.0/UDP 192.168.254.157:5060 t: f: i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18581 REGISTER Max-Forwards: 70 m: ;dt=544 Expires: 3600 User-Agent: 3Com-SIP-Phone/V8.0.1.3 Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81" X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0; --- (12 headers 0 lines) 
--- Using latest REGISTER request as basis request Sending to 192.168.254.157 : 5060 (NAT) SIP/2.0 100 Trying Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157 From: To: Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18581 REGISTER User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces Contact: Content-Length: 0 SIP/2.0 403 Authentication user name does not match account name Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157 From: To: ;tag=as3fb867e2 Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18581 REGISTER User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces Content-Length: 0 Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER) Thanks for your input!
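    As a side note on how those digest values are produced: with no qop parameter in the exchange, the RFC 2617 response is MD5(MD5(username:realm:password) + ":" + nonce + ":" + MD5(method:uri)), so whichever username string the phone hashes is the one Asterisk must also hash when it checks the response. The sketch below reproduces the calculation; it assumes the masked username in the trace is the full SIP URI form of extension 2321, and it takes the password from the sip.conf secret above.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class DigestCheck {

        // Hex-encoded MD5 of a string, as used by SIP digest authentication.
        static String md5Hex(String s) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8))) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            // Values from the failing registration above; the username is assumed to be
            // the full SIP URI that was masked in the quoted trace.
            String username = "sip:2321@192.168.254.12";
            String realm    = "asterisk";
            String password = "1234";              // secret= from sip.conf
            String nonce    = "6c915c33";
            String method   = "REGISTER";
            String uri      = "sip:192.168.254.12";

            String ha1 = md5Hex(username + ":" + realm + ":" + password);
            String ha2 = md5Hex(method + ":" + uri);
            // With no qop, response = MD5(HA1:nonce:HA2).
            System.out.println(md5Hex(ha1 + ":" + nonce + ":" + ha2));
        }
    }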


  • postfix with mailman

    - by Thufir
    What should happen is that [email protected] should be delivered to that users inbox on localhost, user@localhost. Thunderbird works fine at reading user@localhost. I'm just using a small portion of postfix-dovecot with Ubuntu mailman. How can I get postfix to recognize the FQDN and deliver them to a localhost inbox? root@dur:~# root@dur:~# tail /var/log/mail.err;tail /var/log/mailman/subscribe;postconf -n Aug 27 18:59:16 dur dovecot: lda(root): Error: chdir(/root) failed: Permission denied Aug 27 18:59:16 dur dovecot: lda(root): Error: user root: Initialization failed: Initializing mail storage from mail_location setting failed: stat(/root/Maildir) failed: Permission denied (euid=65534(nobody) egid=65534(nogroup) missing +x perm: /root, dir owned by 0:0 mode=0700) Aug 27 18:59:16 dur dovecot: lda(root): Fatal: Invalid user settings. Refer to server log for more information. Aug 27 20:09:16 dur postfix/trivial-rewrite[15896]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 21:19:17 dur postfix/trivial-rewrite[16569]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 22:27:00 dur postfix[17042]: fatal: usage: postfix [-c config_dir] [-Dv] command Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 22:59:07 dur postfix/postfix-script[17459]: error: unknown command: 'restart' Aug 27 22:59:07 dur postfix/postfix-script[17460]: fatal: usage: postfix start (or stop, reload, abort, flush, check, status, set-permissions, upgrade-configuration) Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 21:39:03 2012 (16734) cola: pending "[email protected]" <[email protected]> 127.0.0.1 Aug 27 21:40:37 2012 (16749) cola: pending "[email protected]" <[email protected]> 127.0.0.1 Aug 27 22:45:31 2012 (17288) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1 Aug 27 22:45:46 2012 (17293) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1 Aug 27 23:02:01 2012 (17588) test3: pending [email protected] 127.0.0.1 Aug 27 23:05:41 2012 (17652) test4: pending [email protected] 127.0.0.1 Aug 27 23:56:20 2012 (17985) test5: pending [email protected] 127.0.0.1 alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.dur.bounceme.net relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth 
smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~# there's definitely a transport problem: root@dur:~# root@dur:~# root@dur:~# grep transport /var/log/mail.log | tail Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: transport_maps lookup failure Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: transport_maps lookup failure root@dur:~# trying to add the transport file: EDIT root@dur:~# root@dur:~# touch /etc/postfix/transport root@dur:~# ll /etc/postfix/transport -rw-r--r-- 1 root root 0 Aug 28 00:16 /etc/postfix/transport root@dur:~# root@dur:~# cd /etc/postfix/ root@dur:/etc/postfix# root@dur:/etc/postfix# postmap transport root@dur:/etc/postfix# root@dur:/etc/postfix# cat transport


  • Download a .asp / .asx video file (Ubuntu)

    - by Adam Matan
    Hi, My local TV station offers streaming video of recorded documentaries, using a XML-like file with a.asx extension. Is there a way (preferably Ubuntu CLI) to download the file? Thanks, Adam PS - the file contents: <asx version="3.0"> <!-- GMX --> <param name="encoding" value="utf-8" /> <title>CastUP: V0109-msheni-Hayim_Hefer-120510 </title> <MOREINFO HREF = "" /> <PARAM NAME="Prebuffer" VALUE="true" /> <entry> <ref href="http://s3awm.castup.net/server12/31/176/17607833-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <ref href="http://s0dwm.castup.net/server12/31/176/17607833-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <ref href="http://s0ewm.castup.net/server12/31/176/17607833-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <ref href="http://s0fwm.castup.net/server12/31/176/17607833-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <ref href="http://s0gwm.castup.net/server12/31/176/17607833-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <PARAM NAME="CanSkipBack" VALUE="No"/> <PARAM NAME="CanSkipForward" VALUE="No"/> <PARAM NAME="CanSeek" VALUE="No"/> <title>mondial_2010 </title> <PARAM NAME="Prebuffer" VALUE="true" /> <PARAM NAME="CastUP_Content_Config" VALUE="" /> </entry> <entry> <PARAM NAME="EntryType" VALUE="Content" /> <param name="encoding" value="utf-8" /> <PARAM NAME="CastUP_AssociatedURL" VALUE="" /> <PARAM NAME="CastUP_Content_Config" VALUE="" /> <PARAM NAME="CastUP_Content_ClipMediaID" VALUE="5382858" /> <author>iba</author> <title>CastUP: V0109-msheni-Hayim_Hefer-120510 </title> <PARAM NAME="Prebuffer" VALUE="true" /> <ref href="mms://s3awm.castup.net/server12/31/174/17482045-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <ref href="mms://s0dwm.castup.net/server12/31/174/17482045-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <ref href="mms://s0ewm.castup.net/server12/31/174/17482045-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <ref href="mms://s0fwm.castup.net/server12/31/174/17482045-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> <ref href="mms://s0gwm.castup.net/server12/31/174/17482045-61.wmv?ct=IL&rg=BZ&aid=31&ts=0&cu=91A297E2-5359-416A-912B-2D9BC106E491" /> </entry> </asx>
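    One hedged approach, not from the original question: the stream locations are just the ref href values inside that .asx, and once extracted they can usually be saved with a streaming-capable tool (for example, mplayer's -dumpstream/-dumpfile options can save mms:// streams). Because the raw .asx above is not well-formed XML (unescaped ampersands in the URLs), the sketch below uses a simple pattern match rather than an XML parser; the playlist.asx file name is a placeholder.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class AsxRefs {
        public static void main(String[] args) throws IOException {
            // Read the downloaded .asx playlist.
            String asx = new String(Files.readAllBytes(Paths.get("playlist.asx")),
                    StandardCharsets.UTF_8);
            // Match <ref href="..."> entries, case-insensitively.
            Matcher m = Pattern.compile("<ref\\s+href\\s*=\\s*\"([^\"]+)\"",
                    Pattern.CASE_INSENSITIVE).matcher(asx);
            while (m.find()) {
                System.out.println(m.group(1));   // the http:// and mms:// .wmv URLs
            }
        }
    }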


  • Installing Photoshop CS5 on Windows XP: error at 12% of the installation

    - by Croplio
    Error Log: ---------- Exit Code: 6 -------------------------------------- Summary -------------------------------------- - 0 fatal error(s), 43 error(s), 41 warning(s) WARNING: The payload: Adobe Photoshop CS5 Core {7DFEBBA4-81E1-425B-BBAA-06E9E5BBD97E} requires a UI parent with following specification: Family: Photoshop ProductName: Adobe Photoshop CS5 Core_x64 This parent relationship is not satisfied, because this payload is not present in this session. WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure WARNING: Payload cannot be installed due to dependent operation failure ERROR: The following payload errors were found during install: ERROR: - Adobe CSXS Infrastructure CS5: Install failed ERROR: - Microsoft_VC90_ATL_x86: Install failed ERROR: - Adobe Media Player: Install failed 
ERROR: - Microsoft_VC90_CRT_x86: Install failed ERROR: - Adobe Photoshop CS5 Support: Install failed ERROR: - Adobe Bridge CS5: Install failed ERROR: - Microsoft_VC80_ATL_x86: Install failed ERROR: - DeviceCentral_DeviceCentral3LP-zh_CN: Install failed ERROR: - Adobe XMP Panels CS5: Install failed ERROR: - Photoshop Camera Raw: Install failed ERROR: - AdobeColorCommonSetCMYK: Install failed ERROR: - Adobe Mini Bridge CS5: Install failed ERROR: - Adobe Photoshop CS5 Chinese Language Pack_AdobePhotoshop12-zh_CN: Install failed ERROR: - Adobe ReviewPanel CS5: Install failed ERROR: - Microsoft_VC90_MFC_x86: Install failed ERROR: - Suite Shared Configuration CS5: Install failed ERROR: - Adobe Linguistics CS5: Install failed ERROR: - DeviceCentral: Failed due to Language Pack installation failure ERROR: - AdobeColorEU CS5: Install failed ERROR: - AdobeTypeSupport CS5: Install failed ERROR: - AdobeColorVideoProfilesCS CS5: Install failed ERROR: - AdobeColorCommonSetRGB: Install failed ERROR: - Adobe Photoshop CS5 Core: Failed due to Language Pack installation failure ERROR: - Adobe CSXS Extensions CS5: Install failed ERROR: - AdobeOutputModule: Install failed ERROR: - Microsoft_VC80_CRT_x86: Install failed ERROR: - Adobe WinSoft Linguistics Plugin CS5: Install failed ERROR: - AdobePDFL CS5: Install failed ERROR: - AdobeCMaps CS5: Install failed ERROR: - AdobeColorNA CS5: Install failed ERROR: - Required Common Fonts Installation: Install failed ERROR: - Adobe SwitchBoard 2.0: Install failed ERROR: - Microsoft_VC80_MFC_x86: Install failed ERROR: - AdobeColorPhotoshop CS5: Install failed ERROR: - Microsoft_VC80_MFCLOC_x86: Install failed ERROR: - PDF Settings CS5: Install failed ERROR: - Recommended Common Fonts Installation: Install failed ERROR: - Adobe Extension Manager CS5: Install failed ERROR: - AdobeColorJA CS5: Install failed ERROR: - AdobeJRE: Install failed ERROR: - Adobe ExtendScript Toolkit CS5: Install failed ERROR: - Adobe AIR: Install failed ------------------------------------------------------------------------------------- I have tried many times and the issue is still there; any help will be appreciated, thanks!

    Read the article

  • Can't send mail from Windows Phone (Postfix server)

    - by Dominic Williams
    Some background: I have a Dovecot/Postfix setup to handle email for a few domains. We have imap and smtp setup on various devices (Macs, iPhones, PCs, etc) and it works no problem. I've recently bought a Windows Phone and I'm trying to setup the mail account on there. I've got the imap part working great but for some reason it won't send mail. mail.log with debug_peer_list I've put this on pastebin because its quite long: http://pastebin.com/KdvMDxTL dovecot.log with verbose_ssl Apr 14 22:43:50 imap-login: Warning: SSL: where=0x10, ret=1: before/accept initialization [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: before/accept initialization [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read client hello A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write server hello A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write certificate A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write server done A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 flush data [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=-1: SSLv3 read client certificate A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=-1: SSLv3 read client certificate A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read client key exchange A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read finished A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write change cipher spec A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write finished A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 flush data [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x20, ret=1: SSL negotiation finished successfully [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=1: SSL negotiation finished successfully [109.151.23.129] Apr 14 22:43:51 imap-login: Info: Login: user=<pixelfolio>, method=PLAIN, rip=109.151.23.129, lip=94.23.254.175, mpid=24390, TLS Apr 14 22:43:53 imap(pixelfolio): Info: Disconnected: Logged out bytes=9/331 Apr 14 22:43:53 imap-login: Warning: SSL alert: where=0x4008, ret=256: warning close notify [109.151.23.129] postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix debug_peer_list = 109.151.23.129 inet_interfaces = all mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 message_size_limit = 50240000 milter_default_action = accept milter_protocol = 2 mydestination = ks383809.kimsufi.com, localhost.kimsufi.com, localhost myhostname = ks383809.kimsufi.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname non_smtpd_milters = inet:127.0.0.1:8891,inet:localhost:8892 readme_directory = no recipient_delimiter = + smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_milters = inet:127.0.0.1:8891,inet:localhost:8892 smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_path = private/auth 
smtpd_sasl_type = dovecot smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes virtual_alias_domains = domz.co.uk ruck.in vjgary.co.uk scriptees.co.uk pixelfolio.co.uk filmtees.co.uk nbsbar.co.uk virtual_alias_maps = hash:/etc/postfix/alias_maps doveconf -n # 2.0.13: /etc/dovecot/dovecot.conf # OS: Linux 2.6.38.2-grsec-xxxx-grs-ipv6-64 x86_64 Ubuntu 11.10 auth_mechanisms = plain login log_path = /var/log/dovecot.log mail_location = mbox:~/mail/:INBOX=/var/mail/%u passdb { driver = pam } protocols = imap service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } } ssl_cert = </etc/ssl/certs/dovecot.pem ssl_key = </etc/ssl/private/dovecot.pem userdb { driver = passwd } verbose_ssl = yes Any suggestions or help greatly appreciated. I've been pulling my hair out with this for hours! EDIT This seems to be my exact problem, but I already have broken_sasl set to yes and the 'login' auth mechanism added? http://forums.gentoo.org/viewtopic-t-898610-start-0.html
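One quick way to see exactly where the phone's submission attempt breaks down is to drive the same SMTP conversation from a script and compare it with what the Windows Phone does. The sketch below is only a diagnostic aid: the host name, port and credentials are placeholders (use whatever the phone is configured with), and it assumes the server expects STARTTLS before AUTH.

    # Minimal SMTP AUTH probe over STARTTLS (host, port and credentials are placeholders).
    import smtplib

    HOST = "mail.example.com"    # assumption: put the real server name here
    PORT = 587                   # assumption: submission port; use 25 if that is what the phone uses
    USER = "pixelfolio"          # one of the accounts seen in the dovecot log
    PASSWORD = "secret"          # assumption: the real password

    server = smtplib.SMTP(HOST, PORT, timeout=30)
    server.set_debuglevel(1)      # prints the whole SMTP dialogue, including EHLO capabilities
    server.ehlo()
    server.starttls()             # many Postfix setups only advertise AUTH once TLS is up
    server.ehlo()                 # re-EHLO to see which AUTH mechanisms are offered now
    server.login(USER, PASSWORD)  # raises SMTPAuthenticationError if SASL rejects the login
    server.quit()

If the login succeeds here but still fails from the phone, the interesting part is the capability list in the debug output: which AUTH mechanisms are advertised before and after STARTTLS, and which one the phone actually picks (PLAIN vs LOGIN, per the mail.log warnings).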

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using: Zend Framework 1.11.5 (complete MVC) PHP 5.3.6 Apache 2.2.19 CentOS 5.6 i686 virtuozzo on vps cPanel WHM 11.30.1 (build 4) Mysql 5.1.56-log Mysqli API 5.1.56 The issue started here http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, PHP is giving me random segmentation faults. [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11 [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php About extensions: when I compile PHP with the "--enable-debug" flag, I have to disable this line: zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so" Otherwise the server doesn't accept requests and I get "The connection with the server was reset". It is possible that I have to disable eaccelerator too for the same reason; I still don't get why Apache sometimes runs with it and sometimes doesn't: extension="eaccelerator.so" Anyway, after I get httpd running, seg-faults can occur randomly. If I don't compile PHP with the "--enable-debug" flag, I can deterministically get a PHP crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $row = $db->fetchRow("SHOW CREATE TABLE 222AFI"); } } ?> BUT if I compile PHP with the "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash: I have to run many parallel requests for a few seconds to get a crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $tableList = $db->listTables(); foreach ($tableList as $tableName){ $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName)); file_put_contents( DB_DEFINITIONS_PATH . '/' . $tableName . '.sql', $row['Create Table'] . ';' ); } } } ?> Please notice this is the same script, but creating DDL for all tables in the database rather than for one. It seems that PHP crashes when it is heavily loaded (with extensions enabled and many parallel requests in flight). About starting httpd with "-X": I've tried. The thing is, it is already hard to make PHP crash with --enable-debug, and the "-X" option (which only enables one child process) means I can't do parallel requests, so I haven't been able to create a proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php My concrete question is: what do I do to get a core dump? root@GWT4 [~]# httpd -V Server version: Apache/2.2.19 (Unix) Server built: Jul 20 2011 19:18:58 Cpanel::Easy::Apache v3.4.2 rev9999 Server's Module Magic Number: 20051115:28 Server loaded: APR 1.4.5, APR-Util 1.3.12 Compiled using: APR 1.4.5, APR-Util 1.3.12 Architecture: 32-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... 
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr/local/apache" -D SUEXEC_BIN="/usr/local/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Read the article

  • Apple Airport Express, Extreme and Time Capsules, BT Home Hub, Wireless Extenders confusion

    - by Jamie Hartnoll
    I post quite frequently on Stack Overflow, but use Super User less frequently, mainly as I don't change hardware often and rarely have software issues! I live in a small stone cottage, and have an office in a separate building across a yard. I have a BT Homehub which is located in the cottage and a series of Ethernet cables running across the yard to the office. This is fine for my wired stuff. My main office computers are PCs running Windows 7 Ultimate, and one on Win7 Home, all working fine. I also have an old laptop on Win XP which works fine wirelessly in the house for those evenings in front of the TV catching up on a bit of work. I also have an iPhone and an iPad. Recently, I have been trying to get WiFi in the office so I can use Adobe Shadow (or whatever it now is!) to improve mobile web development efficiency using my iPhone and iPad, so I bought this: http://www.ebuyer.com/393462-zyxel-wre2205-500mbps-powerline-wireless-n300-range-extender-wre2205-gb0101f Thinking that would be lovely just plugged into the socket by the door in the office, extending the perimeter of the WiFi from my Homehub. I can't get it to work properly! If I plug a laptop into its Ethernet port I can get it to connect to the Homehub, giving me a kind of wired wireless extender. If, however, I plug the Ethernet port into my Homehub, it then seems to extend the network, but only my iOS devices work, and all my wired stuff stops working; it seems to create an infinite loop where Windows connects to my Homehub and then, rather than connecting to the internet, connects back to the extender. Anyway... in the meantime, I took a fatal trip to the Apple Store, where I purchased an Airport Express... solely for the purpose of hooking my iOS devices up as wireless music players in the house. I knew it had WiFi, but didn't want to use that part as an extender; I didn't think it would work on a Homehub anyway. It doesn't work on a Homehub! I now have a new wireless network in the house which, when anything connects to it, cannot connect to the Internet, so it works ONLY as a wireless music player. I then borrowed some Powerline adaptors from someone and realised that this whole thing was getting totally out of control! It seems all the technology is out there but it's so complicated to get the right series of devices. To further add to the confusion, I wouldn't mind a network hard drive. I bought one that broke and lost everything, so now we're on to looking at the Apple Time Capsules. So my question is... IF... I buy an Apple Time Capsule, can I: 1) hook that up to my Homehub, leaving the Homehub connected to the Internet so my Hub phones still work, then disable wireless on the Homehub; 2) link up my Airport Express to the Time Capsule PROPERLY so it will connect to the Internet; 3) do the above with an Apple TV box should I buy one in future; 4) use the Time Capsule as a network hard drive to store video and music that can be viewed/listened to via my iOS devices/Apple TV/Airport Express anywhere, even with my main PC off (this currently stores all this data); 5) hope that the iOS devices like the WiFi from the Time Capsule better than the Homehub and work without extension, or buy another Airport Express to get WiFi in the office. Or... should I buy an Airport Extreme and use a USB hard drive for the network drive?

    Read the article

  • Running two wsgi applications on the same server gdal org exception with apache2/modwsgi

    - by monkut
    I'm trying to run two wsgi applications, one django and the other tilestache using the same server. The tilestache server accesses the db via django to query the db. In the process of serving tiles it performs a transform on the incoming bbox, and in this process hit's the following error. The transform works without error for the specific bbox polygon when run manually from the python shell: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 325, in __call__ mimetype, content = requestHandler(self.config, environ['PATH_INFO'], environ['QUERY_STRING']) File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 231, in requestHandler mimetype, content = getTile(layer, coord, extension) File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 84, in getTile tile = layer.render(coord, format) File "/usr/lib/python2.7/dist-packages/TileStache/Core.py", line 295, in render tile = provider.renderArea(width, height, srs, xmin, ymin, xmax, ymax, coord.zoom) File "/var/www/tileserver/providers.py", line 59, in renderArea bbox.transform(METERS_SRID) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/geos/geometry.py", line 520, in transform g = gdal.OGRGeometry(self.wkb, srid) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 131, in __init__ self.__class__ = GEO_CLASSES[self.geom_type.num] File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 245, in geom_type return OGRGeomType(capi.get_geom_type(self.ptr)) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geomtype.py", line 43, in __init__ raise OGRException('Invalid OGR Integer Type: %d' % type_input) OGRException: Invalid OGR Integer Type: 1987180391 I think I've hit the non thread safe issue with GDAL, metioned on the django site. Is there a way I could configure this so that it would work? Apache Version: Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured Apache apache2/sites-available/default: <VirtualHost *:80> ServerAdmin ironman@localhost DocumentRoot /var/www/bin LogLevel warn WSGIDaemonProcess lbs processes=2 maximum-requests=500 threads=1 WSGIProcessGroup lbs WSGIScriptAlias / /var/www/bin/apache/django.wsgi Alias /static /var/www/lbs/static/ </VirtualHost> <VirtualHost *:8080> ServerAdmin ironman@localhost DocumentRoot /var/www/bin LogLevel warn WSGIDaemonProcess tilestache processes=1 maximum-requests=500 threads=1 WSGIProcessGroup tilestache WSGIScriptAlias / /var/www/bin/tileserver/tilestache.wsgi </VirtualHost> Django Version: 1.4 httpd.conf: Listen 8080 NameVirtualHost *:8080 UPDATE I've added the a test.wsgi script to determine if the GLOBAL interpreter setting is correct, as mentioned by graham and described here: http://code.google.com/p/modwsgi/wiki/CheckingYourInstallation#Sub_Interpreter_Being_Used It seems to show the expected result: [Tue Aug 14 10:32:01 2012] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured -- resuming normal operations [Tue Aug 14 10:32:01 2012] [info] mod_wsgi (pid=29891): Attach interpreter ''. I've worked around the issue for now by changing the srs used in the db so that the transform is unnecessary in tilestache app. I don't understand why the transform() method, when called in the django app works, but then in the tilestache app fails. 
tilestache.wsgi #!/usr/bin/python import os import time import sys import TileStache current_dir = os.path.abspath(os.path.dirname(__file__)) project_dir = os.path.realpath(os.path.join(current_dir, "..", "..")) sys.path.append(project_dir) sys.path.append(current_dir) os.environ['DJANGO_SETTINGS_MODULE'] = 'bin.settings' sys.stdout = sys.stderr # wait for the apache django lbs server to start up, # --> in order to retrieve the tilestache cfg time.sleep(2) tilestache_config_url = "http://127.0.0.1/tilestache/config/" application = TileStache.WSGITileServer(tilestache_config_url) UPDATE 2 So it turned out I did need to use a projection other than the google (900913) one in the db. So my previous workaround failed. While I'd like to fix this issue, I decided to work around the issue this type by making a django view that performs the transform needed. So now tilestache requests the data through the django app and not internally.
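A mitigation that often comes up for GDAL/OGR thread-safety problems under mod_wsgi is to make sure only one thread is ever inside GDAL at a time by wrapping the coordinate transform in a process-wide lock. It may well not be the root cause here (the Apache config above already uses threads=1), so treat the following only as a sketch of that idea; METERS_SRID mirrors the name used in providers.py and its value here is an assumption:

    # Sketch: serialize GDAL/OGR access behind one lock.
    import threading
    from django.contrib.gis.geos import Polygon

    METERS_SRID = 900913           # assumption: whatever providers.py actually defines
    _gdal_lock = threading.Lock()

    def transform_bbox(xmin, ymin, xmax, ymax, source_srid=4326):
        """Build the bbox polygon and reproject it while holding the lock."""
        bbox = Polygon.from_bbox((xmin, ymin, xmax, ymax))
        bbox.srid = source_srid
        with _gdal_lock:                 # only one thread may enter GDAL at a time
            bbox.transform(METERS_SRID)  # the call that raised OGRException in the traceback
        return bbox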

    Read the article

  • Asterisk SIP digest authentication username mismatch

    - by Matt
    I have an asterisk system that I'm attempting to get to work as a backup for our 3com system. We already use it for a conference bridge. Our phones are the 3com 3C10402B, so I don't have the issue of older 3com phones that come without a SIP image. The 3com phones are communicating SIP with the Asterisk, but are unable to register because they present a digest username value that doesn't match what Asterisk thinks it should. As an example, here are the relevant lines from a successful registration from a soft phone: Server sends: WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="1cac3853" Phone responds: Authorization: Digest username="2321", realm="asterisk", nonce="1cac3853", uri="sip:192.168.254.12", algorithm=md5, response="d32df9ec719817282460e7c2625b6120" For the 3com phone, those same lines look like this (and fails): Server sends: WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33" Phone responds: Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81" Basically, Asterisk wants to see a username in the Digest username field of 2321, but the 3com phone is sending sip:[email protected]. Anyone know how to tell asterisk to accept this format of username in the digest authentication? Here is the sip.conf info for that extension: [2321] deny=0.0.0.0/0.0.0.0 disallow=all type=friend secret=1234 qualify=yes port=5060 permit=0.0.0.0/0.0.0.0 nat=yes mailbox=2321@device host=dynamic dtmfmode=rfc2833 dial=SIP/2321 context=from-internal canreinvite=no callerid=device <2321 allow=ulaw, alaw call-limit=50 ... and for those interested in the grit, here is the debug output of the registration attempt: REGISTER sip:192.168.254.12 SIP/2.0 v: SIP/2.0/UDP 192.168.254.157:5060 t: f: i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18580 REGISTER Max-Forwards: 70 m: ;dt=544 Expires: 3600 User-Agent: 3Com-SIP-Phone/V8.0.1.3 X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0; --- (11 headers 0 lines) --- Using latest REGISTER request as basis request Sending to 192.168.254.157 : 5060 (no NAT) SIP/2.0 100 Trying Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157 From: To: Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18580 REGISTER User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces Contact: Content-Length: 0 SIP/2.0 401 Unauthorized Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157 From: To: ;tag=as3fb867e2 Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18580 REGISTER User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33" Content-Length: 0 Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER) confbridge*CLI REGISTER sip:192.168.254.12 SIP/2.0 v: SIP/2.0/UDP 192.168.254.157:5060 t: f: i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18581 REGISTER Max-Forwards: 70 m: ;dt=544 Expires: 3600 User-Agent: 3Com-SIP-Phone/V8.0.1.3 Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81" X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0; --- (12 headers 0 lines) 
--- Using latest REGISTER request as basis request Sending to 192.168.254.157 : 5060 (NAT) SIP/2.0 100 Trying Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157 From: To: Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18581 REGISTER User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces Contact: Content-Length: 0 SIP/2.0 403 Authentication user name does not match account name Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157 From: To: ;tag=as3fb867e2 Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9 CSeq: 18581 REGISTER User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces Content-Length: 0 Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER) Thanks for your input!
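For reference, the digest both ends compute is the plain RFC 2617 calculation (there is no qop in these REGISTER requests), so the only disagreement is the username string that goes into HA1 and into the Authorization header: Asterisk expects "2321", while the 3com phone sends "sip:[email protected]" and is rejected with the 403 before the hash even matters. A small sketch of the calculation, using the secret 1234 from the sip.conf entry above:

    # RFC 2617 digest response as used for these SIP REGISTERs (no qop).
    from hashlib import md5

    def sip_digest_response(username, realm, password, method, uri, nonce):
        ha1 = md5(f"{username}:{realm}:{password}".encode()).hexdigest()
        ha2 = md5(f"{method}:{uri}".encode()).hexdigest()
        return md5(f"{ha1}:{nonce}:{ha2}".encode()).hexdigest()

    # The response Asterisk expects (digest username equal to the account name):
    print(sip_digest_response("2321", "asterisk", "1234",
                              "REGISTER", "sip:192.168.254.12", "6c915c33"))

    # The response the phone computes with its own idea of the username:
    print(sip_digest_response("sip:[email protected]", "asterisk", "1234",
                              "REGISTER", "sip:192.168.254.12", "6c915c33"))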

    Read the article

  • Why can't I build Deluge?

    - by hugemeow
    Deluge is a BitTorrent Client. I am trying to build it from source, since I don't have privilege to install it as root. I am using python setup.py build. But, it failed following message, why? copying deluge/ui/web/themes/images/gray/slider/slider-v-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/slider copying deluge/ui/web/themes/images/gray/slider/slider-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/slider copying deluge/ui/web/themes/images/gray/panel/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/panel copying deluge/ui/web/themes/images/gray/tabs/tab-strip-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/tabs copying deluge/ui/web/themes/images/yourtheme/window/right-corners.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window copying deluge/ui/web/themes/images/yourtheme/window/left-corners.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window copying deluge/ui/web/themes/images/yourtheme/window/left-right.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window copying deluge/ui/web/themes/images/yourtheme/window/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window creating build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/slider/slider-v-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/slider/slider-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/slider/slider-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/slider/slider-v-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/panel/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/panel copying deluge/ui/web/themes/images/yourtheme/grid/hmenu-lock.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/grid copying deluge/ui/web/themes/images/yourtheme/grid/hmenu-unlock.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/grid copying deluge/ui/web/themes/images/yourtheme/tabs/tab-strip-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/tabs running build_ext building 'libtorrent' extension gcc -pthread -shared -L/usr/lib64 -L/opt/local/lib -lboost_filesystem -lboost_date_time -lboost_iostreams -lboost_python -lboost_thread -lpthread -lssl -lz -o build/lib.linux-x86_64-2.4/deluge/libtorrent.so /usr/bin/ld: cannot find -lboost_filesystem collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 [mirror@innov deluge-1.3.5]$ echo $? 1 Edit 1: gcc version and os information $(which gcc) --version gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52) Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
cat /etc/issue CentOS release 5.7 (Final) Kernel \r on an \m Edit 2: boost is referenced by setup.py in deluge 114 if OS == "linux": 115 if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 116 'libboost_filesystem-mt.so')): 117 boost_filesystem = "boost_filesystem-mt" 118 elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 119 'libboost_filesystem.so')): 120 boost_filesystem = "boost_filesystem" 121 if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 122 'libboost_date_time-mt.so')): 123 boost_date_time = "boost_date_time-mt" 124 elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 125 'libboost_date_time.so')): 126 boost_date_time = "boost_date_time" 127 if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 128 'libboost_thread-mt.so')): 129 boost_thread = "boost_thread-mt" 130 elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 131 'libboost_thread.so')): 132 boost_thread = "boost_thread" 133 134 if 'boost_filesystem' not in vars(): 135 boost_filesystem = "boost_filesystem-mt" 136 if 'boost_date_time' not in vars(): 137 boost_date_time = "boost_date_time-mt" 138 if 'boost_thread' not in vars(): 139 boost_thread = "boost_thread-mt" 140 141 elif OS == "freebsd": 142 boost_filesystem = "boost_filesystem" 143 boost_date_time = "boost_date_time" 144 boost_thread = "boost_thread" 145 else: 146 boost_filesystem = "boost_filesystem-mt" 147 boost_date_time = "boost_date_time-mt" 148 boost_thread = "boost_thread-mt" 149 150 librariestype = [boost_filesystem, boost_date_time, 151 boost_thread, 'z', 'pthread', 'ssl', 'crypto']
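The link failure means ld cannot resolve -lboost_filesystem, and the setup.py logic quoted above only falls back to the plain (non "-mt") names when the unversioned .so files exist in Python's LIBDIR. On CentOS the Boost runtime package often installs only versioned files (libboost_filesystem.so.2 and similar), with the unversioned development symlinks coming from a separate devel package. A small sketch, with the search directories as an assumption, to see what the linker could actually find:

    # Report which libboost_* shared objects (and unversioned dev symlinks) are present.
    import glob
    import os

    search_dirs = ["/usr/lib", "/usr/lib64", "/usr/local/lib", "/opt/local/lib"]  # assumption
    names = ["boost_filesystem", "boost_date_time", "boost_thread",
             "boost_python", "boost_iostreams"]

    for d in search_dirs:
        for name in names:
            for hit in sorted(glob.glob(os.path.join(d, "lib%s*.so*" % name))):
                # "-lboost_filesystem" needs an unversioned libboost_filesystem.so (or -mt variant)
                print(hit)

If only versioned files show up, installing the matching boost-devel package (or editing setup.py to use the library names and paths that do exist, e.g. the -mt variants) should get past the gcc/ld error.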

    Read the article

  • Users suddenly missing write permissions to the root drive c within an active directory domain

    - by Kevin
    I'm managing an Active Directory single-domain environment on some Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 machines. For a few weeks now I've had a strange issue. Some users (not all!) report that they can no longer save, copy or write files to the root drive C:, neither on their clients (Vista, Win 7) nor via a remote desktop connection to a Windows Server 2008 machine. Since then, even programs that need direct write permissions to the root drive fail when run without administrator permissions. The affected users have local administrator permissions. The question I'm facing now is: what caused this change of system behavior? Why did this happen? I haven't found out yet. What was the last thing I did before it happened? The last action before it happened was the rollout of a GPO containing network drive mappings for the users, depending on their security group membership. All network drives are located on a Linux server with Samba enabled. We did not change any UAC settings, and they have always been activated. However, I can't imagine that rolling out this GPO caused the problem. Has anybody faced an issue like that? Just in case: I know there is a specific reason why a user without administrative privileges has been prevented from writing to the root drive since Windows Vista and the introduction of UAC. I don't think those users should be able to write to drive C:, but I'm trying to figure out why this is happening when it was still working a few weeks ago. I also know that a user who is a member of the local administrators group does not execute anything with administrator permissions by default, unless he or she explicitly runs a program with those permissions. What have I done so far? I checked the permissions of the affected programs and of the affected clients/servers; I didn't find anything special. I checked ALL of our GPOs for any restrictions that could prevent the affected users from writing to the root drive; I did not find any such settings. I checked the UAC settings of the affected users and compared them to users who can still write to the root drive; everything looks the same. I searched the internet for someone who had a similar problem and did not find one. Does anybody have an idea? Thank you very much. Edit: The GPO that was rolled out does the following (please excuse it if the settings are not named exactly like this; I translated them into English): **Windows Settings -- Network Drive Mappings -- Drive N: -- General:** Action: Replace **Properties:** Letter: N Location: \\path-to-drive\drivename Re-Establish connection: deactivated Label as: Name_of_the_Share Use first available Option: deactivated **Windows Settings -- Network Drive Mappings -- Drive N: -- Public: Options:** On error don't process any further elements for this extension: no Run as the logged in user: no remove element if it is not applied anymore: no Only apply once: no **Securitygroup:** Attribute -- Value bool -- AND not -- 0 name -- domain\groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 **Securitygroup:** Attribute -- Value bool -- OR not -- 0 name -- domain\another-groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 Edit: The error message an affected user gets says the following: Due to an unexpected error you can't copy the file. Error code 0x80070522: The client is missing a required permission. 
The command icacls C: shows the following: NT-AUTORITY\SYSTEM:(OI)(CI)(F) PRE-DEFINED\Administrators:(OI)(CI)(F) computername\username:(OI)(CI)(F) A colleague just told me that the primary domain controller (PDC) was also changed from Windows Server 2008 to Windows Server 2012. That may also be a reason. Any suggestions?

    Read the article

  • Courier Maildrop error user unknown. Command output: Invalid user specified

    - by cad
    Hello I have a problem with maildrop. I have read dozens of webs/howto/emails but couldnt solve it. My objective is moving automatically spam messages to a spam folder. My email server is working perfectly. It marks spam in subject and headers using spamassasin. My box has: Ubuntu 9.04 Web: Apache2 + Php5 + MySQL MTA: Postfix 2.5.5 + SpamAssasin + virtual users using mysql IMAP: Courier 0.61.2 + Courier AuthLib WebMail: SquirrelMail I have read that I could use Squirrelmail directly (not a good idea), procmail or maildrop. As I already have maildrop in the box (from courier) I have configured the server to use maildrop (added an entry in transport table for a virtual domain). I found this error in email: This is the mail system at host foo.net I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]>: user unknown. Command output: Invalid user specified. Final-Recipient: rfc822; [email protected] Action: failed Status: 5.1.1 Diagnostic-Code: x-unix; Invalid user specified. ---------- Forwarded message ---------- From: test <[email protected]> To: [email protected] Date: Sat, 1 May 2010 19:49:57 +0100 Subject: fail fail An this in the logs May 1 18:50:18 foo.net postfix/smtpd[14638]: connect from mail-bw0-f212.google.com[209.85.218.212] May 1 18:50:19 foo.net postfix/smtpd[14638]: 8A9E9DC23F: client=mail-bw0-f212.google.com[209.85.218.212] May 1 18:50:19 foo.net postfix/cleanup[14643]: 8A9E9DC23F: message-id=<[email protected]> May 1 18:50:19 foo.net postfix/qmgr[14628]: 8A9E9DC23F: from=<[email protected]>, size=1858, nrcpt=1 (queue active) May 1 18:50:23 foo.net postfix/pickup[14627]: 1D4B4DC2AA: uid=5002 from=<[email protected]> May 1 18:50:23 foo.net postfix/cleanup[14643]: 1D4B4DC2AA: message-id=<[email protected]> May 1 18:50:23 foo.net postfix/pipe[14644]: 8A9E9DC23F: to=<[email protected]>, relay=spamassassin, delay=3.8, delays=0.55/0.02/0/3.2, dsn=2.0.0, status=sent (delivered via spamassassin service) May 1 18:50:23 foo.net postfix/qmgr[14628]: 8A9E9DC23F: removed May 1 18:50:23 foo.net postfix/qmgr[14628]: 1D4B4DC2AA: from=<[email protected]>, size=2173, nrcpt=1 (queue active) **May 1 18:50:23 foo.netpostfix/pipe[14648]: 1D4B4DC2AA: to=<[email protected]>, relay=maildrop, delay=0.22, delays=0.06/0.01/0/0.15, dsn=5.1.1, status=bounced (user unknown. Command output: Invalid user specified. 
)** May 1 18:50:23 foo.net postfix/cleanup[14643]: 4C2BFDC240: message-id=<[email protected]> May 1 18:50:23 foo.net postfix/qmgr[14628]: 4C2BFDC240: from=<>, size=3822, nrcpt=1 (queue active) May 1 18:50:23 foo.net postfix/bounce[14651]: 1D4B4DC2AA: sender non-delivery notification: 4C2BFDC240 May 1 18:50:23 foo.net postfix/qmgr[14628]: 1D4B4DC2AA: removed May 1 18:50:24 foo.net postfix/smtp[14653]: 4C2BFDC240: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.211.97]:25, delay=0.91, delays=0.02/0.03/0.12/0.74, dsn=2.0.0, status=sent (250 2.0.0 OK 1272739824 37si5422420ywh.59) May 1 18:50:24 foo.net postfix/qmgr[14628]: 4C2BFDC240: removed My config files: http://lar3d.net/main.cf (/etc/postfix) http://lar3d.net/master.c (/etc/postfix) http://lar3d.net/local.cf (/etc/spamassasin) http://lar3d.net/maildroprc (maildroprc) If I change master.cf line (as suggested here) maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d ${recipient} with maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d vmail ${recipient} I get the email in /home/vmail/MailDir instead of the correct dir (/home/vmail/foo.net/info/.SPAM ) After reading a lot I have some guess but not sure. - Maybe I have to install userdb? - Maybe is something related with mysql, but everything is working ok - If I try with procmail I will face same problem... - What are flags DRhu for? Couldnt find doc about them - In some places I found maildrop line with more parameters flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d $ ${recipient} ${extension} ${recipient} ${user} ${nexthop} ${sender} I am really lost. Dont know how to continue. If you have any idea or need another config file please let me know. Thanks!!!

    Read the article

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one. First, what I'm doing. I'm building a web-app interface for some particularly slow tcp devices. Opening a socket to them takes 200ms and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent tcp socket which reduces the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I havent been able to resolve after 2 days of interneting, reading logs and modifying settings. I have somewhat narrowed it down though. Setup: Ubuntu 13.04 x64 Server (fully updated) on Linode PHP 5.5.0-6~raring+1 (fpm-fcgi) nginx/1.5.2 Relevent config: nginx worker_processes 4; php-fpm/pool.d pm = dynamic pm.max_children = 2 pm.start_servers = 2 pm.min_spare_servers = 2 Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple seconds to the script. The first take a while to open the socket connection and returns with the data in about 500ms, the second returns data in 300ms (yay it's re-using the socket), the third also succeeds in about 300ms, the fourth request = 502 Bad Gateway, same with the 5th. Sixth request once again returns data, except now it took 500ms again. The process repeats for several cycles after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms Data responses. If I double all the fpm pool values and have 4x php-fpm processes running, the cycles settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away but then every request is 500ms. What I suspect is happening is the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left and as they error out, maybe they are restarted and become available on the next round-robin loop ut the socket dies with the process. I haven't yet checked the 'slowlog', but the nginx error log shows lots of this: *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:... All the suggestions on the internet regarding fixing nginx/php-fpm/502 bad gateway relate to high load or fcgi_pass misconfiguration. This is not the case here. Increasing buffers/sizes, changing timeouts, switching from unix socket to tcp socket for fcgi_pass, upping connection limits on the system....none of this stuff applies here. I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm-process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns. For the php script, I'm using stream_socket_client() with a STREAM_CLIENT_PERSISTENT flag. A while/stream_select() loop to detect socket data and fread($sock, 4096) to grab the data. I don't call fclose() obviously. If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it. 
some useful links: Nginx + php-fpm - recv() error Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server) Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/ http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3 http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive http://php.net/manual/en/install.fpm.configuration.php https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1
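Given that each pooled worker owns its own persistent stream (which is exactly what ties the workers up), one alternative architecture is to move the slow TCP connection out of PHP entirely: a tiny local broker keeps the single persistent connection to the device open and answers short-lived requests from any php-fpm worker, which then connects and disconnects per request as usual. The sketch below only illustrates the idea; the device address, the line-based framing and the single-read reply are assumptions, not details from the post:

    # Sketch of a local broker: holds one persistent connection to the slow device
    # and serves each short-lived client request over it, one at a time.
    import socket
    import socketserver

    DEVICE_ADDR = ("192.0.2.10", 4000)    # assumption: placeholder device address

    device = socket.create_connection(DEVICE_ADDR)   # opened once, reused for every request

    class BrokerHandler(socketserver.StreamRequestHandler):
        def handle(self):
            command = self.rfile.readline()    # one request line from the PHP side
            if not command:
                return
            device.sendall(command)            # forward over the kept-alive device socket
            reply = device.recv(4096)          # assumption: the device answers in one read
            self.wfile.write(reply)

    if __name__ == "__main__":
        # php-fpm workers connect to 127.0.0.1:9100 per request and disconnect immediately,
        # so no worker is ever parked on the device connection between requests.
        socketserver.TCPServer(("127.0.0.1", 9100), BrokerHandler).serve_forever()

The PHP side then drops STREAM_CLIENT_PERSISTENT and talks to the broker with an ordinary short-lived socket, so pm = dynamic or ondemand behaves normally again.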

    Read the article

  • Postfix MySql Dovecot - SMTP Authentication Failure

    - by borncamp
    Hello I have a Postfix setup with Dovecot and MySql. The server is running Debian Squeeze. The MySql server is a slave that has data pushed to it from a primary (postfix) mail server(running a different os). The emails are stored on a replicated GlusterFS volume. I am able to check email using thunderbird over IMAP. However, SMTP requests fail. After turning on query logs for the MySql server I have noticed that no query statement is executed to retrieve the user information when an SMTP client tries to authenticate. I'd like to know what I'm doing wrong or what the next troubleshooting steps are. I'm about to pull my hair out. Below is some log and configuration data that I thought would be relevant. You're help is much obliged. The file /var/log/mail.log shows Oct 11 14:54:16 mailbox2 postfix/smtpd[25017]: connect from unknown[192.168.0.44] Oct 11 14:54:19 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed: Oct 11 14:54:25 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6 Oct 11 14:55:48 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed: VXNlcm5hbWU6 Oct 11 14:55:54 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6 Oct 11 14:55:57 mailbox2 postfix/smtpd[25017]: disconnect from unknown[192.168.0.44] This is my dovecot.conf file log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/var/mail/virtual/%d/%n/ auth_mechanisms = plain login disable_plaintext_auth = no namespace { inbox = yes location = prefix = INBOX. separator = . type = private } passdb { args = /etc/dovecot/dovecot-mysql.conf driver = sql } protocols = imap pop3 service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-master { mode = 0600 user = postfix } user = root } ssl_cert = </etc/ssl/certs/dovecot.pem ssl_key = </etc/ssl/private/dovecot.pem userdb { args = /etc/dovecot/dovecot-mysql.conf driver = sql } protocol lda { auth_socket_path = /var/run/dovecot/auth-master mail_plugins = sieve postmaster_address = [email protected] } protocol pop3 { pop3_uidl_format = %08Xu%08Xv } Here is my dovecot-mysql.conf file: connect = host=127.0.0.1 dbname=postfix user=postfix password=ffjM2MYAqQtAzRHX driver = mysql default_pass_scheme = MD5-CRYPT password_query = SELECT username AS user,password FROM mailbox WHERE username = '%u' AND active='1' user_query = SELECT CONCAT('/var/mail/virtual/', maildir) AS home, 1001 AS uid, 109 AS gid, CONCAT('*:messages=10000:bytes=',quota) as quota_rule, 'Trash:ignore' AS quota_rule2 FROM mailbox WHERE username = '%u' AND active='1' Here is my output from 'postconf -n': append_dot_mydomain = no biff = no bounce_template_file = /etc/postfix/bounce.cf broken_sasl_auth_clients = yes config_directory = /etc/postfix delay_warning_time = 0h dovecot_destination_recipient_limit = 1 inet_interfaces = all local_recipient_maps = $virtual_mailbox_maps local_transport = virtual mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 maximal_queue_lifetime = 1d message_size_limit = 25600000 mydestination = mailbox2.cws.net, debian.local.cws.net, localhost.local.cws.net, localhost myhostname = mailbox2.cws.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.119 63.164.138.3 myorigin = /etc/mailname proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains 
$virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps readme_directory = no recipient_delimiter = + relay_domains = relayhost = smtp_connect_timeout = 10 smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) smtpd_client_message_rate_limit = 50 smtpd_client_recipient_rate_limit = 500 smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks smtpd_delay_reject = yes smtpd_discard_ehlo_keyword_address_maps = hash:/etc/postfix/discard_ehlo smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, permit smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_path = private/auth smtpd_sasl_security_options = noanonymous smtpd_sasl_tls_security_options = $smtpd_sasl_security_options smtpd_sasl_type = dovecot smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes transport_maps = hash:/etc/postfix/transport virtual_alias_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_catchall_maps.cf virtual_gid_maps = static:1001 virtual_mailbox_base = /var/mail/virtual/ virtual_mailbox_domains = proxy:mysql:/etc/postfix/sql/mysql_virtual_domains_maps.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_mailbox_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_mailbox_maps.cf virtual_transport = dovecot virtual_uid_maps = static:1001
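Since the query log shows no lookup at all during SMTP authentication, it can help to rule the SQL side in or out by running the same password_query dovecot-mysql.conf uses, by hand, for a user that fails. A minimal sketch, assuming the MySQLdb module is available and that "[email protected]" stands in for a real mailbox username:

    # Run dovecot's password_query manually for one username.
    import MySQLdb

    username = "[email protected]"   # assumption: replace with a failing mailbox username

    conn = MySQLdb.connect(host="127.0.0.1", db="postfix",
                           user="postfix", passwd="ffjM2MYAqQtAzRHX")
    cur = conn.cursor()
    cur.execute(
        "SELECT username AS user, password FROM mailbox "
        "WHERE username = %s AND active = '1'",
        (username,))
    print(cur.fetchall())   # an empty result means Dovecot would never see a password either
    conn.close()

If the row does come back, the problem is more likely that smtpd never reaches Dovecot's auth socket (or that the client never even attempts AUTH on that session) than the SQL mapping itself.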

    Read the article

  • postfix: Temporary lookup failure for FQDN

    - by Thufir
    I'm using the FQDN of dur.bounceme.net which I want to resolve(?) to localhost. That is, I want mail to [email protected] to get delivered to user@localhost. I've tried following the Ubuntu guide on this and seem to be going in circles a bit. root@dur:~# root@dur:~# postfix stop postfix/postfix-script: stopping the Postfix mail system root@dur:~# postfix start postfix/postfix-script: starting the Postfix mail system root@dur:~# telnet dur.bounceme.net 25 Trying 127.0.1.1... telnet: Unable to connect to remote host: Connection refused root@dur:~# root@dur:~# telnet localhost 25 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 dur.bounceme.net ESMTP Postfix (Ubuntu) ehlo dur 250-dur.bounceme.net 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN mail from:[email protected] 250 2.1.0 Ok rcpt to:[email protected] 451 4.3.0 <[email protected]>: Temporary lookup failure rcpt to:thufir@localhost 451 4.3.0 <thufir@localhost>: Temporary lookup failure quit 221 2.0.0 Bye Connection closed by foreign host. root@dur:~# root@dur:~# grep telnet /var/log/mail.log Aug 28 00:24:45 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur> Aug 28 00:24:58 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur> Aug 28 00:54:55 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur> Aug 28 00:55:08 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur> root@dur:~# root@dur:~# postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.dur.bounceme.net relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 
smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~#

    Read the article

  • Installing chrome in Centos 6.2 (Final)

    - by usjes
    I need to install chrome in a dedicated centos server where I only access via ssh, it doesn't have X or any windows graphical stuff. I need it to be able to pack extensions using google-chrome --pack-extension. I tried adding this to /etc/yum.repos.d/google.repo [google-chrome] name=google-chrome - 32-bit baseurl=http://dl.google.com/linux/chrome/rpm/stable/i386 enabled=1 gpgcheck=1 gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub And then yum install google-chrome-stable, but there's a huge list of dependencies problems: How can I install chrome without breaking anything else? UPDATE: Ok, I installed perl-CGI from .rpm because yum couldn't find it, now dependencies resolve and it show me this list of packages to install: Dependencies Resolved ============================================================================================================================================================================================================================================= Package Arch Version Repository Size ============================================================================================================================================================================================================================================= Installing: google-chrome-stable x86_64 19.0.1084.52-138391 google-chrome 35 M Installing for dependencies: ConsoleKit x86_64 0.4.1-3.el6 base 82 k ConsoleKit-libs x86_64 0.4.1-3.el6 base 17 k GConf2 x86_64 2.28.0-6.el6 base 964 k ORBit2 x86_64 2.14.17-3.1.el6 base 168 k bc x86_64 1.06.95-1.el6 base 110 k cdparanoia-libs x86_64 10.2-5.1.el6 base 47 k cups x86_64 1:1.4.2-44.el6_2.3 updates 2.3 M dbus x86_64 1:1.2.24-5.el6_1 base 207 k desktop-file-utils x86_64 0.15-9.el6 base 47 k ed x86_64 1.1-3.3.el6 base 72 k eggdbus x86_64 0.6-3.el6 base 91 k foomatic x86_64 4.0.4-1.el6_1.1 base 251 k foomatic-db noarch 4.0-7.20091126.el6 base 980 k foomatic-db-filesystem noarch 4.0-7.20091126.el6 base 4.4 k foomatic-db-ppds noarch 4.0-7.20091126.el6 base 19 M ghostscript x86_64 8.70-11.el6_2.6 updates 4.4 M ghostscript-fonts noarch 5.50-23.1.el6 base 751 k gstreamer x86_64 0.10.29-1.el6 base 764 k gstreamer-plugins-base x86_64 0.10.29-1.el6 base 942 k gstreamer-tools x86_64 0.10.29-1.el6 base 23 k iso-codes noarch 3.16-2.el6 base 2.4 M lcms-libs x86_64 1.19-1.el6 base 100 k libIDL x86_64 0.8.13-2.1.el6 base 83 k libXScrnSaver x86_64 1.2.0-1.el6 base 19 k libXfont x86_64 1.4.1-2.el6_1 base 128 k libXv x86_64 1.0.5-1.el6 base 21 k libfontenc x86_64 1.0.5-2.el6 base 24 k libgudev1 x86_64 147-2.40.el6 base 59 k libmng x86_64 1.0.10-4.1.el6 base 165 k libogg x86_64 2:1.1.4-2.1.el6 base 21 k liboil x86_64 0.3.16-4.1.el6 base 121 k libtheora x86_64 1:1.1.0-2.el6 base 129 k libvisual x86_64 0.4.0-9.1.el6 base 135 k libvorbis x86_64 1:1.2.3-4.el6_2.1 updates 168 k mailx x86_64 12.4-6.el6 base 234 k man x86_64 1.6f-29.el6 base 263 k mesa-libGLU x86_64 7.11-3.el6 base 201 k nvidia-graphics195.30-libs x86_64 195.30-120.el6 atrpms 13 M openjpeg-libs x86_64 1.3-7.el6 base 59 k pax x86_64 3.4-10.1.el6 base 69 k phonon-backend-gstreamer x86_64 1:4.6.2-20.el6 base 125 k polkit x86_64 0.96-2.el6_0.1 base 158 k poppler x86_64 0.12.4-3.el6_0.1 base 557 k poppler-data noarch 0.4.0-1.el6 base 2.2 M poppler-utils x86_64 0.12.4-3.el6_0.1 base 73 k portreserve x86_64 0.0.4-4.el6_1.1 base 22 k qt x86_64 1:4.6.2-20.el6 base 4.0 M qt-sqlite x86_64 1:4.6.2-20.el6 base 50 k qt-x11 x86_64 1:4.6.2-20.el6 base 12 M qt3 x86_64 3.3.8b-30.el6 base 3.5 M redhat-lsb x86_64 
4.0-3.el6.centos base 24 k redhat-lsb-graphics x86_64 4.0-3.el6.centos base 12 k redhat-lsb-printing x86_64 4.0-3.el6.centos base 11 k sgml-common noarch 0.6.3-32.el6 base 43 k time x86_64 1.7-37.1.el6 base 26 k tmpwatch x86_64 2.9.16-4.el6 base 31 k xdg-utils noarch 1.0.2-17.20091016cvs.el6 base 58 k xml-common noarch 0.6.3-32.el6 base 9.5 k xorg-x11-font-utils x86_64 1:7.2-11.el6 base 75 k xz x86_64 4.999.9-0.3.beta.20091007git.el6 base 137 k xz-lzma-compat x86_64 4.999.9-0.3.beta.20091007git.el6 base 16 k Transaction Summary ============================================================================================================================================================================================================================================= Install 62 Package(s) Is it safe to continue and install all that or could I break something already installed?
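If the only reason for putting Chrome on this headless box is google-chrome --pack-extension, it may be easier to build the .crx without Chrome at all: a CRX2 package (the format in use for this generation of Chrome) is just a zip of the extension directory with an RSA signature and a small "Cr24" header in front. The sketch below assumes the Python cryptography package is available and is an illustration of the format rather than a drop-in replacement for Chrome's own packer:

    # Pack an extension directory into a CRX2 file without Chrome (sketch).
    import io, os, struct, sys, zipfile
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def pack_crx(ext_dir, crx_path):
        # 1. Zip the extension directory in memory.
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _, files in os.walk(ext_dir):
                for name in files:
                    path = os.path.join(root, name)
                    zf.write(path, os.path.relpath(path, ext_dir))
        zip_bytes = buf.getvalue()

        # 2. Sign the zip: CRX2 uses an RSA PKCS#1 v1.5 signature over SHA-1.
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                       backend=default_backend())
        signature = key.sign(zip_bytes, padding.PKCS1v15(), hashes.SHA1())
        pubkey = key.public_key().public_bytes(
            serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)

        # 3. Header: magic "Cr24", version 2, key length, signature length (little-endian).
        header = b"Cr24" + struct.pack("<III", 2, len(pubkey), len(signature))
        with open(crx_path, "wb") as out:
            out.write(header + pubkey + signature + zip_bytes)

    if __name__ == "__main__":
        pack_crx(sys.argv[1], sys.argv[2])

Note that this generates a fresh key on every run (and therefore a new extension ID each time); persisting and reusing the key gives the same stability as the .pem file that --pack-extension would otherwise write.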

    Read the article

< Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >