Search Results

Search found 18841 results on 754 pages for 'path finding'.

  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    Oracle Fusion Procurement Design Goals

    In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools that people need at their fingertips. In Oracle Fusion Procurement, the user experience is designed to provide the user with information that will drive navigation rather than requiring the user to find information.

    One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks that a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount, all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page. The result is that users can complete their tasks in less time. The focus is on completing the work, not finding the work.

    Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience.

    Will Oracle Fusion Procurement Increase Productivity?

    To answer this question, Oracle sought to model how two experts working head to head, one in an existing enterprise application and another in Oracle Fusion Procurement, would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method. This method is based on years of research at universities such as Carnegie Mellon and research labs like Xerox Palo Alto Research Center. The KLM method breaks a task into a sequence of operators and uses standardized models to evaluate all of the physical and cognitive actions that a person must take to complete it: what a user would have to click, how long each click would take (not only the physical action of the click or the typing of a letter, but also how long someone would have to think about the page when taking the action), and the user interface changes that result from the click. By applying standard time estimates to all of the operators in the task, an estimate of the overall task time is calculated. Task times from the model enable researchers to predict end-user productivity.

    For the study, we focused on modeling procurement business process task flows that were considered business or mission critical: high-frequency tasks and high-value tasks. The designs evaluated encompassed tasks that are currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for each task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete each task, and applied standard time estimates to the operators in each task to estimate overall task completion times.
    The Results

    The KLM method predicted that the Oracle Fusion Procurement designs would result in productivity gains on each task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These performance gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens to complete the task.

    Modeling user productivity has resulted not only in advances in Oracle Fusion applications, but also in advances in other areas. We leveraged lessons learned from the KLM studies to improve existing products like Oracle E-Business Suite (EBS). New user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-style search using auto-suggest, embedded analytics, and an in-context list-of-values tool, help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.
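    As a rough illustration of how a KLM estimate is computed (this sketch is not from the Oracle study; the operator times below are the commonly cited Card, Moran, and Newell values, and the task breakdown is hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class KlmEstimate
        {
            // Commonly cited KLM operator times in seconds; illustrative defaults only.
            static readonly Dictionary<char, double> OperatorTimes = new Dictionary<char, double>
            {
                { 'K', 0.28 }, // keystroke or button press (average skilled typist)
                { 'P', 1.10 }, // point the mouse at a target
                { 'H', 0.40 }, // home hands between keyboard and mouse
                { 'M', 1.35 }, // mental preparation before an action
            };

            // Sum the standard time estimates for each operator in the task sequence.
            static double EstimateSeconds(string operators)
            {
                return operators.Sum(op => OperatorTimes[op]);
            }

            static void Main()
            {
                // Hypothetical flow: think, point, click, home to keyboard, type five characters.
                string task = "MPKH" + new string('K', 5);
                Console.WriteLine("Estimated task time: {0:F2} s", EstimateSeconds(task));
            }
        }

    Comparing such per-task estimates for two designs is what yields percentage gains like those reported above.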

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with object databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criteria for choosing one today were simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDB. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a blogging application and front end as a project with which I can test and learn with these object databases. Another requirement worth mentioning is that I also want to be able to use the database in a shared hosting environment, where I cannot install, run and maintain a server instance of an object database. I can do exactly this with db4o; I have not tried to do this with MongoDB at the time of writing. There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used by some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010

    In the example which I am building I am using StructureMap as my IoC container. To inject the object for db4o I went with a singleton instance scope, as I am using a single file and I need this to be available to any thread in the process, as opposed to using the server implementation, where I could open and close client connections with the server handling each one respectively. Again, I want to point out that I have chosen to stick with the non-server implementation of db4o because I want to use this in a shared hosting environment where I cannot have such servers installed and run.

        public static class Bootstrapper
        {
            public static void ConfigureStructureMap()
            {
                ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
            }
        }

        public class MyApplicationRegistry : Registry
        {
            public const string DB4O_FILENAME = "blog123";

            public string DbPath
            {
                get
                {
                    return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);
                }
            }

            public MyApplicationRegistry()
            {
                For<IObjectContainer>().Singleton().Use(
                    () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

                Scan(assemblyScanner =>
                {
                    assemblyScanner.TheCallingAssembly();
                    assemblyScanner.WithDefaultConventions();
                });
            }
        }

    So the code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am segregating logical layers with folder structure as opposed to different assemblies. It would be easy to split any segment out later, but for the purposes of this example I have literally whacked everything into the one assembly.
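    For context, resolving the repository through this registry might look like the following (a sketch using the post's own type names and StructureMap's static ObjectFactory API; the Post initializer values are hypothetical):

        // Hypothetical startup code: wire up StructureMap, then resolve and use the repository.
        Bootstrapper.ConfigureStructureMap();

        var repository = ObjectFactory.GetInstance<IBlogRepository>();
        repository.Put(new Post { Title = "Hello db4o", Author = "andrew", Content = "First post" });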
    I am planning on testing out a few implementations of the object databases out there, so I can program to an IBlogRepository interface. One of the things I was unsure about was how db4o performs in a multi-threaded environment, which is how it will undoubtedly be used 9 times out of 10. Because I am using the db context as a singleton, I assumed the library was thread safe, but I had not read anywhere in the documentation whether it is (again, this is probably me not reading things correctly). In short, I threw together a simple test where I iterate to a limit, each time kicking off a common task with a thread from the thread pool. The task creates a random Post and adds it to storage. I put the execution of the threads inside the Setup of the test, and then simply ensure the number of posts committed to the database equals the number of iterations I made. Here is the code I used to do the multi-threaded job:

        [TestInitialize]
        public void Setup()
        {
            var sw = new System.Diagnostics.Stopwatch();
            sw.Start();

            var resetEvent = new ManualResetEvent(false);
            ThreadPool.SetMaxThreads(20, 20);

            for (var i = 0; i < MAX_ITERATIONS; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate(object state)
                {
                    var eventToReset = (ManualResetEvent)state;
                    var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
                    Repository.Put(post);

                    // Signal the main thread once the last work item has finished.
                    var counter = Interlocked.Decrement(ref _threadCounter);
                    if (counter == 0)
                        eventToReset.Set();
                }, resetEvent);
            }

            WaitHandle.WaitAll(new[] { resetEvent });
            sw.Stop();
            Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
        }

    I was not doing this to test the speed of db4o, but while I was at it I could not help putting in a Stopwatch and seeing, out of sheer interest, how fast the inserts would be. I tested it with 10,000 inserts of a small, simple POCO and it averaged 899.36 object inserts per second. Again, this is just a simple, crude test which came out of my curiosity about how the non-server implementation of db4o performs under many threads.

    With regards to the actual repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in my exposure so far is that Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth; it may already be there, and if not, db4o provides everything one needs to make their own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below:

        public class BlogRepository : IBlogRepository
        {
            private readonly IObjectContainer _db;

            public BlogRepository(IObjectContainer db)
            {
                _db = db;
            }

            public void Put(DomainObject obj)
            {
                _db.Store(obj);
            }

            public void Delete(DomainObject obj)
            {
                _db.Delete(obj);
            }

            public Post GetByKey(object key)
            {
                return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
            }
            …

    Anyway, I hope to get a few more implementations of the object databases going and just get familiar with them and the concept of NoSQL databases. Cheers for now, Andrew

  • How can I do Ubuntu Lucid Desktop installation with HTTP preseed file and desktop installation CD?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso. I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it booted directly into the LiveCD. An example of the boot parameters I've tried:

        auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash --

    If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What am I missing? From the Apache log of the server hosting preseed.cfg, I can see that the installer has no problem fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is valid. Any ideas?

  • Retrieving Chrome History and History Archives for Mac

    - by Justin
    I've read some articles here suggesting that we can retrieve Chrome's history as SQLite databases: "Export history from Chrome browser" and "How can I view "archived" Google Chrome history — i.e. history older than three months?" I can't seem to find the folder where these are supposed to be located. In the first article, the provided path was ~/Library/Application\ Support/Google/Chrome/Default/History, but there is no such directory in my filesystem. I am using Chrome 26.0.1410.65 on Mac OS X Lion, and I am signed in with my Google account. The article http://unlockforus.blogspot.it/2008/09/how-opening-google-chrome-files-history.html linked from the second question is very useful, but it only covers Windows. Is there something similar for Mac? Is there a way to find the folder where Chrome stores my history?
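    If the profile folder does turn up (with multiple profiles it is sometimes ~/Library/Application Support/Google/Chrome/Profile 1 rather than .../Default), the History file is plain SQLite and can be queried directly. A sketch, assuming the standard urls table and Chrome's WebKit-epoch timestamps:

        # Copy the database first, since Chrome locks it while running.
        cp ~/Library/Application\ Support/Google/Chrome/Default/History /tmp/History
        sqlite3 /tmp/History \
          "SELECT datetime(last_visit_time/1000000 - 11644473600, 'unixepoch'), url
           FROM urls ORDER BY last_visit_time DESC LIMIT 20;"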

  • Installing MySQL without root access

    - by vinay
    I am trying to install MySQL without root permissions. I ran through the following steps:

    1. Download MySQL Community Server 5.5.8, "Linux - Generic Compressed TAR Archive".
    2. Unpack it, for example to: /home/martin/mysql
    3. Create a my.cnf file in your home directory. The file contents should be:

        [server]
        user=martin
        basedir=/home/martin/mysql
        datadir=/home/martin/sql_data
        socket=/home/martin/socket
        port=3666

    4. Go to the /home/martin/mysql directory and execute:

        ./scripts/mysql_install_db --defaults-file=~/my.cnf --user=martin --basedir=/home/martin/mysql --datadir=/home/martin/sql_data --socket=/home/martin/socket

    5. Your MySQL server is ready. Start it with this command:

        ./bin/mysqld_safe --defaults-file=~/my.cnf &

    When I try to change the password of MySQL, it gives the error:

        Cannot connect to mysql server through socket '/tmp/mysql.sock'

    How can I change this path and see whether the mysql.sock is created or not?
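    A hedged sketch of pointing the client tools at the socket configured above (the client defaults to /tmp/mysql.sock unless told otherwise):

        # Check that the socket file was actually created by mysqld_safe:
        ls -l /home/martin/socket

        # Point the client tools at the socket and port from my.cnf:
        mysql --socket=/home/martin/socket --port=3666 -u martin
        mysqladmin --socket=/home/martin/socket -u martin password 'newpassword'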

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published in October 2012.

    One frequent question is: "What type of domain should I use to run applications?" There used to be a simple answer, "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with its own role:

    - Control domain: the management control point for the server. It runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones.

    - I/O domain: a domain that has been assigned physical I/O devices. The devices may be one or more PCIe root complexes (in which case the domain is also called a root complex domain, and has native access to all the devices on the assigned PCIe buses), an SR-IOV (Single Root I/O Virtualization) function, or direct I/O ownership of one or more PCI devices residing in a PCIe bus slot. With SR-IOV, a physical device (also called a physical function, or PF) is subdivided into multiple virtual functions (VFs), which can be individually assigned directly to domains; SR-IOV devices currently can be Ethernet or InfiniBand devices. An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.

    - Service domain: a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It is usually an I/O domain as well, in order for it to have devices to virtualize and serve out.
    - Guest domain: a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

    Device considerations

    Consider the following when choosing between virtual devices and physical devices:

    - Virtual devices provide the best flexibility: they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit.
    - Virtual devices are compatible with live migration: domains that have exclusively virtual devices can be live migrated between servers supporting domains.

    On the other hand:

    - Physical devices provide the best performance; in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially virtual network devices that can now saturate 10GbE links, but physical devices are still faster.
    - Physical I/O devices do not add load to service domains: all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided with sufficient CPU and memory capacity.
    - Physical I/O devices can be other than network and disk: network, disk, and serial console are what gets virtualized, but physical devices can be any of the wide range of attachable certified devices, including things like tape and CDROM/DVD devices.

    In some cases the lines are now blurred. Virtual devices perform better than before: starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance. There is also more flexibility with physical devices than before: SR-IOV devices can now be dynamically reconfigured on domains. Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O, and you can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices.

    Typical deployment

    A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is typically also a service domain, in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the single I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure. Guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or an I/O path or device does not result in an application outage. This also permits "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by a single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
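    As an illustration of that simple configuration, creating one guest domain served by the control domain might look roughly like this (a sketch with hypothetical names and sizes; the ldm subcommands are the standard Oracle VM Server for SPARC ones):

        # Define virtual disk and network services in the control (service) domain.
        ldm add-vds primary-vds0 primary
        ldm add-vsw net-dev=net0 primary-vsw0 primary

        # Create a guest domain with CPUs, memory, a virtual disk and a virtual NIC.
        ldm add-domain guest1
        ldm add-vcpu 8 guest1
        ldm add-memory 8G guest1
        ldm add-vdsdev /dev/zvol/dsk/rpool/guest1disk vol1@primary-vds0
        ldm add-vdisk vdisk1 vol1@primary-vds0 guest1
        ldm add-vnet vnet1 primary-vsw0 guest1

        # Bind resources and start the domain.
        ldm bind guest1
        ldm start guest1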
    Changing application deployment patterns

    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously; in those engineered systems, I/O domains are used for high-performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCIe root complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications.

    However, there are some limitations and considerations. At this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC.

    To recap:

    - Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain.
    - I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses (for example, 16 buses on the T5-8), making it possible to configure multiple I/O domains, each owning its own bus.
    - Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect the domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage.
    - The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment; it's an engineered system with specifically configured applications, optimized for performance.

    These are recommended "best practices" based on conversations with a number of Oracle architects.
    Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

    Summary

    Higher-capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.

  • I can't add PPA repository behind the proxy (with @ in the username)

    - by kenorb
    I'm trying to add a PPA repository (as root) with the following commands:

        export HTTP_PROXY="http://[email protected]:[email protected]:8080"
        add-apt-repository ppa:nilarimogard/webupd8

        Traceback (most recent call last):
          File "/usr/bin/add-apt-repository", line 125, in <module>
            ppa_info = get_ppa_info_from_lp(user, ppa_name)
          File "/usr/lib/python2.7/dist-packages/softwareproperties/ppa.py", line 84, in get_ppa_info_from_lp
            curl.perform()
        pycurl.error: (56, 'Received HTTP code 407 from proxy after CONNECT')

    Unfortunately it doesn't work. It looks like curl is connecting to the proxy, but the proxy says that authentication is required. I've tried with .curlrc and the http_proxy environment variable instead, but it doesn't work.

        strace -e network,write -s1000 add-apt-repository ppa:nilarimogard/webupd8
        socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 4
        socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
        connect(4, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("165.x.x.232")}, 16) = -1 EINPROGRESS (Operation now in progress)
        getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
        getpeername(4, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("165.x.x.232")}, [16]) = 0
        getsockname(4, {sa_family=AF_INET, sin_port=htons(46025), sin_addr=inet_addr("161.20.75.220")}, [16]) = 0
        sendto(4, "CONNECT launchpad.net:443 HTTP/1.1\r\nHost: launchpad.net:443\r\nUser-Agent: PycURL/7.22.0\r\nProxy-Connection: Keep-Alive\r\nAccept: application/json\r\n\r\n", 146, MSG_NOSIGNAL, NULL, 0) = 146
        recvfrom(4, "HTTP/1.1 407 Proxy Authentication Required\r\nProxy-Authenticate: BASIC realm=\"proxy\"\r\nCache-Control: no-cache\r\nPragma: no-cache\r\nContent-Type: text/html; charset=utf-8\r\nProxy-Connection: close\r\nSet-Cookie: BCSI-CS-91b9906520151dad=2; Path=/\r\nConnection: close\

    Maybe it's because there is an @ sign in the username? Wget works fine with the proxy. Related: How do I add a repository from behind a proxy?

    Environment: Ubuntu 12.04; curl 7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3; curl features: GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
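    For what it's worth, when the proxy username itself contains an @, the usual workaround is to percent-encode the reserved characters in the proxy URL so curl can split the credentials from the host correctly. A sketch with a hypothetical user:

        # '@' in the username must be encoded as %40.
        export HTTP_PROXY="http://first.last%40example.com:secret@proxy.example.com:8080"
        export HTTPS_PROXY="$HTTP_PROXY"
        add-apt-repository ppa:nilarimogard/webupd8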

  • Install Oracle Driver and TNS for Windows XP?

    - by David.Chu.ca
    I am building a box with Windows XP and some applications. One application requires a connection to a remote Oracle database. I have installed OracleXEClient.exe from the Oracle download site. The installation does install the "Oracle Provider for OLE DB" driver. My problem is that I still cannot make connections to the remote Oracle db. The test I have done is to create a UDL file with an Oracle Provider for OLE DB connection. The error message is:

        Microsoft Data Link Error
        Test connection failed because of an error in initializing provider.
        ORA-12154: TNS:could not resolve the connect identifier specified

    I think I may be missing TNSNAMES.ora on the box. I can find this file on another box where the Oracle connection works fine. I am not sure what package I should install (from Oracle) so that a default TNSNAMES.ora will be installed, with related files, and the path for accessing the TNS file set up.
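    For reference, a minimal TNSNAMES.ora entry has this shape (a sketch with hypothetical host and service names; the file usually lives under network\admin in the client's Oracle home, or wherever the TNS_ADMIN environment variable points):

        MYDB =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
            (CONNECT_DATA =
              (SERVICE_NAME = mydb.example.com)
            )
          )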

  • How do I get KLIPS or NETKEY on 11.10 server?

    - by Incognito
    I'm attempting to run Openswan on my Ubuntu 11.10 server. All I've done so far is install openswan from the package manager and attempt to set up the conf files. However, IPsec support seems to be broken, so Openswan can't do its thing.

    Attempt to start IPsec:

        $ sudo ipsec setup --start
        ipsec_setup: Starting Openswan IPsec 2.6.28...
        ipsec_setup: No KLIPS support found while requested, desperately falling back to netkey
        ipsec_setup: Even NETKEY support is not there, aborting

    Verify IPsec:

        $ sudo ipsec verify
        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                  [OK]
        Linux Openswan U2.6.28/K(no kernel code presently loaded)
        Checking for IPsec support in kernel             [FAILED]
        Checking that pluto is running                   [FAILED]
        whack: Pluto is not running (no "/var/run/pluto/pluto.ctl")
        Checking for 'ip' command                        [OK]
        Checking for 'iptables' command                  [OK]
        Opportunistic Encryption Support                 [DISABLED]

    IPsec version:

        $ sudo ipsec version
        Linux Openswan U2.6.28/K(no kernel code presently loaded)
        See `ipsec --copyright' for copyright information.

    Linux build:

        $ uname -a
        Linux metabox 2.6.18-028stab092.1 #1 SMP Wed Jul 20 19:47:12 MSD 2011 x86_64 x86_64 x86_64 GNU/Linux

    How can I go about correcting this problem with IPsec? This is a hosted VPS, and I'd like to avoid a kernel rebuild if I can find some other alternative.
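    As a first check (a hedged sketch): NETKEY normally ships as loadable kernel modules, but the 2.6.18-028stab version string above is an OpenVZ container kernel, where the host has to provide IPsec support. Trying to load the modules will tell you quickly which situation you are in:

        # Try loading the NETKEY/XFRM pieces; if these fail inside a VPS container,
        # the hosting provider has to enable IPsec support on the host node.
        sudo modprobe af_key
        sudo modprobe xfrm_user
        sudo ipsec verify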

  • Clickonce applications being allowed through firewall

    - by DanielF
    The company I work for uses an application that is deployed through ClickOnce. This means that every time they update said program, the program regenerates in a new location, e.g.:

        C:\Users\%username%\AppData\Local\Apps\2.0\89XCT43D.10T\WBZ8WE7L.927\trm8..tion_1ad39bb503bcb5df_0008.0005_14829906c9aa8611\%appname%.exe

    Recently the computers were updated to Windows 7 machines, and now every time an update is pushed out, the Windows firewall sees this as a new application and prompts the users to allow the program through the firewall. This means I have to walk around to every computer and type in admin credentials. Does anyone know of a way to allow the application regardless of the path it is in? Google has not been very helpful so far.
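    Windows Firewall rules do not accept wildcards in a program path, so one hedged workaround is a small script, pushed out after each deployment, that registers the current ClickOnce path (a sketch with a hypothetical executable name; netsh advfirewall is the standard Windows 7 CLI):

        :: Hypothetical batch sketch: re-register the current ClickOnce exe after each update.
        for /r "%LOCALAPPDATA%\Apps\2.0" %%F in (myapp.exe) do (
            netsh advfirewall firewall add rule name="MyApp ClickOnce" dir=in action=allow program="%%F" enable=yes
        )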

  • WSS 3.0/MOSS 2007 Active Directory Forms Based Authentication PeoplePicker no users found

    - by John Haigh
    After finding these steps online at http://dattard.blogspot.com/2008/11/active-directory-forms-based.html for setting up Active Directory Forms Based Authentication, I was all set to complete the task, except for one problem: these steps are missing one vital step for FBA to work with Active Directory. As a supplement to step 3, before granting access in step 5 through the People Picker, you need to specify the Active Directory provider name to the People Picker; otherwise you will not be able to specify users through the Policy for Web Application:

        <PeoplePickerWildcards>
            <clear />
            <add key="ADMembershipProvider" value="%" />
        </PeoplePickerWildcards>

    Recently we needed to use Forms Based Authentication with Active Directory from an extranet. This is how we got it to work.

    1. Extend the web application. Instead of tweaking the internal web app, extend the web application you want to expose to the extranet, giving it the required host headers etc.

    2. Configure SharePoint Central Admin to use FBA for the "new" web application. Log in to SharePoint Central Admin, go to Application Management / Application Security / Authentication Providers, and change the web application to the one which needs to be configured for Forms Based Authentication. Click zone / default, change the authentication type to forms, enter the ActiveDirectoryMembershipProvider name under membership provider name (for example, "ADMembershipProvider"), and save this change.

    3. Update the web.config of the SharePoint Central Admin site. Under the configuration node:

        <connectionStrings>
            <add name="ADConnectionString"
                 connectionString="LDAP://DynamicsAX.local/CN=Users,DC=DynamicsAX,DC=local" />
        </connectionStrings>

    Under the system.web node:

        <membership defaultProvider="ADMembershipProvider">
            <providers>
                <add name="ADMembershipProvider"
                     type="System.Web.Security.ActiveDirectoryMembershipProvider,System.Web,Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
                     connectionStringName="ADConnectionString"
                     connectionUsername="xxx"
                     connectionPassword="yyy"
                     enableSearchMethods="true"
                     attributeMapUsername="sAMAccountName" />
            </providers>
        </membership>

    4. Update the web.config of the SharePoint web application. Repeat step 3 for the web.config of the SharePoint web application to be configured for Forms Based Authentication, and change the authentication element to:

        <authentication mode="Forms">
            <forms loginUrl="/_layouts/login.aspx"></forms>
        </authentication>

    5. Grant access on the extended web application. Your extranet web application is now configured to use FBA. However, until the users who will be accessing the site via FBA are given permissions for the site, it will be inaccessible to them. To get started, open your browser and navigate to your farm's Central Administration site. Click on Application Management and then click on Policy for Web Application. Make sure that you are working on the extranet web application, then do the following:

    - Click on Add Users.
    - In the Zones drop-down, select the appropriate extranet zone. IMPORTANT: If you select the incorrect zone, you may not be able to resolve user names. The zone you select must match the zone of the web application that is configured to use FBA.
    - Click the Next button.
    - In the Users edit box, type the name of the FBA user to whom you wish to give full control for the site.
    - Click the Resolve link next to the Users edit box. If the web application's FBA information has been configured correctly, the name will resolve and become underlined.
    - Check the Full Control checkbox.
    - Click the Finish button.

  • What does NT_STATUS_BAD_NETWORK_NAME mean in Samba?

    - by Neil
    I set up a share like this:

        [global]
        security = user
        map to guest = Bad Password
        usershare allow guests = yes

        [vms]
        comment = VirtualBox Virtual Machines
        path = /home/neil/VirtualBox/HardDisks
        guest ok = yes
        read only = yes

    When I access the share as myself and type in my password, it works fine:

        $ smbclient //neil-ubuntu/vms -U neil
        Enter neil's password:
        Domain=[SHUTTERSTOCK] OS=[Unix] Server=[Samba 3.4.0]
        smb: \>

    But when I access it as guest, it doesn't work, regardless of what password I type in:

        $ smbclient //neil-ubuntu/vms -U guest
        Enter guest's password:
        Domain=[SHUTTERSTOCK] OS=[Unix] Server=[Samba 3.4.0]
        tree connect failed: NT_STATUS_BAD_NETWORK_NAME

    Does anyone know why? Also, why does smbclient print such useless error messages?
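    A couple of hedged things to check (not from the question): NT_STATUS_BAD_NETWORK_NAME generally means the tree connect was refused, which for guest access often comes down to the guest account being unable to reach the share path; also, map to guest = Bad Password only maps existing users who mistype their password, whereas Bad User also maps unknown names such as guest:

        # Verify the effective configuration and the share definition.
        testparm -s

        # Check that the guest account (usually 'nobody') can traverse the path.
        sudo -u nobody ls /home/neil/VirtualBox/HardDisks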

  • How to learn ASP.NET MVC without learning ASP.NET Web forms

    - by Naif
    First of all, I am not a web developer, but I can say that I understand, in general, the difference between PHP, ASP.NET, etc. I have played a little with ASP.NET and C# as well; however, I didn't continue down that learning path. Now I'd like to learn ASP.NET MVC, but there is no book for a beginner in ASP.NET MVC, so I had a look at the tutorials, and it seems that I need to learn C# first, plus SQL Server and HTML. Am I right? So please tell me how I can learn ASP.NET MVC directly (I mean without learning ASP.NET Web Forms). What do I need to learn? (You can assume that I am an absolute beginner.)

    Update: It is true that I can find ASP.NET MVC tutorials that explain ASP.NET MVC, but I used to find ASP.NET Web Forms books that explained SQL and C# at the same time and took you step by step. With ASP.NET MVC I don't know how to start! How can I learn SQL on its own and C# on its own and then combine them with ASP.NET MVC?

  • Ubuntu can't find the correct max resolution with Samsung SyncMaster SA300

    - by fatmatto
    I decided to install Ubuntu on my desktop PC as well (Windows has been exorcised from my life), but I am having some problems I didn't have with previous hardware configurations. My display is a Samsung SyncMaster SA300. On Windows Vista the maximum resolution (1920x1080) worked well, but now Ubuntu (after installing the fglrx drivers) tells me that the maximum resolution is 1600x1200.

    I googled a lot last night, and I found a lot of people solving this (on different displays, though) with xrandr. I was not able to do it, because xrandr keeps complaining "your goddamn maximum resolution is 1600x1600". What a clean xrandr command says is:

        mattia@fatdesktop:~$ xrandr
        Screen 0: minimum 320 x 200, current 1600 x 1200, maximum 1600 x 1600
        DFP1 disconnected (normal left inverted right x axis y axis)
        CRT1 disconnected (normal left inverted right x axis y axis)
        CRT2 connected 1600x1200+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1600x1200      60.0*+
           1400x1050      60.0
           1280x1024      60.0     47.0     43.0
           1440x900       59.9
           1280x960       60.0
           1280x800       60.0
           1152x864       60.0     47.0     43.0
           1280x768       59.9     56.0
           1280x720       60.0     50.0
           1024x768       60.0     43.5
           800x600        60.3     56.2     47.0
           720x576        50.0
           720x480        60.0
           640x480        60.0
        TV disconnected (normal left inverted right x axis y axis)
        CV disconnected (normal left inverted right x axis y axis)

    Then, according to other internet posts and forums:

        mattia@fatdesktop:~$ cvt 1920 1080 60
        # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
        Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

    So now I have to add that modeline:

        mattia@fatdesktop:~$ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        mattia@fatdesktop:~$ xrandr --addmode CRT2 1920x1080_60.00

    And here comes the pain:

        mattia@fatdesktop:~$ xrandr --output CRT2 --mode 1920x1080_60.00
        xrandr: screen cannot be larger than 1600x1600 (desired size 1920x1080)

    See? "screen cannot be larger than 1600x1600 (desired size 1920x1080)". At this point, the 1920x1080 option appears inside the resolution choice menu (the graphical one). But last night, when I tried to select it, my screen went black, and I had to power off the PC. Any clues? Am I on the wrong path?
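    One hedged avenue (not from the question): the 1600x1600 cap is the X screen's maximum framebuffer size, which with the fglrx stack of that era was fixed at server start, so a Virtual line in xorg.conf raises the ceiling before xrandr can apply the new mode. A sketch:

        # /etc/X11/xorg.conf fragment: raise the maximum framebuffer size, then restart X.
        Section "Screen"
            Identifier "Default Screen"
            SubSection "Display"
                Virtual 1920 1080
            EndSubSection
        EndSection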

  • The procedure entry php_mb_check_encoding_list could not be located

    - by BlackFire27
    I get this error:

        The procedure entry php_mb_check_encoding_list could not be located in the dynamic link library php_mbstring.dll

    The error is related to the php.ini settings, I think; I suspect PHP finds it difficult to locate the extension. I put an environment path to php.exe and its extensions folder, but when I run the command line and call php.exe, that error is thrown. I use Windows, and I have this set in my php.ini:

        ; On windows:
        extension_dir = "C:\xampp\php\ext"

    I installed XAMPP; the PHP version is the latest. Another weird thing: when I try to print phpinfo, nothing is printed out.

        <?php echo phpinfo(); ?>

    Nothing comes out?!?! The question is why, and how can I prevent the error from being thrown?
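    A hedged first check (not from the question): this symptom is commonly a version mismatch, i.e. php.exe picking up a php_mbstring.dll or php.ini from a different PHP install on the PATH, so it is worth confirming which binary and ini file are actually used:

        :: Which php.exe is actually on the PATH, and which php.ini does it load?
        where php
        php --ini
        php -m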

  • Crontab script on Mac OS X Lion does not work anymore

    - by Nopster
    I have a problem with cron tasks. Previously this script worked fine on Mac OS X 10.6 Server, but when I set it up on Lion (client), the script stopped working. Basically, this .bat file calls a jar file (which invokes a loop of mysqldump commands) to back up several databases on several servers, and it runs perfectly if launched from the shell:

        cd /Users/nameoftheuser/Desktop/backupper
        /usr/bin/java -cp .:Backupper.jar:lib/mail.jar backupper.Main "/Users/nameoftheuser/Desktop/backupper/listasiti.txt" "/Users/nameofthe/Desktop/backupper/config.properties

    But if cron launches the same .bat file, the generated database backups are 0 bytes. The cron entry is:

        0 0 sh /Users/path/to/file.bat

    I believe the problem is that cron doesn't run as root. Or what else?
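    Two things stand out before even considering permissions (a hedged sketch reusing the question's paths): a crontab time specification needs five fields, and cron runs with a minimal environment, so PATH lookups and relative classpaths that work in an interactive shell often fail under cron:

        # Five time fields (here: daily at midnight), absolute paths, and an explicit PATH.
        PATH=/usr/bin:/bin:/usr/sbin:/sbin
        0 0 * * * cd /Users/nameoftheuser/Desktop/backupper && /bin/sh /Users/path/to/file.bat >> /tmp/backupper.log 2>&1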

  • OSX Lion - sh: mysql command not found

    - by mkk
    I have a pretty simple issue, I believe. I have successfully installed MySQL, etc., and I do not have any problems running the mysql command from the terminal [bash]. Suddenly, when my application tried to run the mysql command, I got the error: sh: mysql: command not found. In the terminal, when I type sh and then mysql, I can log in to MySQL without any problems. I have added mysql to the PATH in .bash_profile, and I suspect this is why sh cannot see it. I have copied .bash_profile to .profile, but it did not do the trick. Any ideas how I can fix this issue?
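    One hedged approach: non-interactive sh does not read .bash_profile (or .profile), so either have the application use an absolute path or put a symlink somewhere sh already searches (the install location below assumes the stock MySQL package for OS X):

        # Option 1: symlink the client into a directory that sh already searches.
        sudo ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql

        # Option 2: have the application invoke mysql by absolute path instead.
        /usr/local/mysql/bin/mysql --version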

  • New Bundling and Minification Support (ASP.NET 4.5 Series)

    - by ScottGu
    This is the sixth in a series of blog posts I'm doing on ASP.NET 4.5. The next release of .NET and Visual Studio includes a ton of great new features and capabilities. With ASP.NET 4.5 you'll see a bunch of really nice improvements with both Web Forms and MVC, as well as in the core ASP.NET foundation that both are built upon.

    Today's post covers some of the work we are doing to add built-in support for bundling and minification into ASP.NET, which makes it easy to improve the performance of applications. This feature can be used by all ASP.NET applications, including both ASP.NET MVC and ASP.NET Web Forms solutions.

    Basics of Bundling and Minification

    As more and more people use mobile devices to surf the web, it is becoming increasingly important that the websites and apps we build perform well with them. We've all tried loading sites on our smartphones, only to eventually give up in frustration as they load slowly over a slow cellular network. If your site/app loads slowly like that, you are likely losing potential customers because of bad performance. Even with powerful desktop machines, the load time of your site and its perceived performance can have an enormous impact on customer perception.

    Most websites today are made up of multiple JavaScript and CSS files, to separate concerns and keep the code base tight. While this is a good practice from a coding point of view, it often has some unfortunate consequences for the overall performance of the website. Multiple JavaScript and CSS files require multiple HTTP requests from a browser, which in turn can slow down the page load time.

    Simple Example

    Below I've opened a local website in IE9 and recorded the network traffic using IE's built-in F12 developer tools. As shown below, the website consists of 5 CSS and 4 JavaScript files which the browser has to download. Each file is currently requested separately by the browser and returned by the server, and the process can take a significant amount of time proportional to the number of files in question.

    Bundling

    ASP.NET is adding a feature that makes it easy to "bundle" or "combine" multiple CSS and JavaScript files into fewer HTTP requests. This causes the browser to request a lot fewer files, which in turn reduces the time it takes to fetch them. Below is an updated version of the above sample that takes advantage of this new bundling functionality (making only one request for the JavaScript and one request for the CSS). The browser now has to send fewer requests to the server. The content of the individual files has been bundled/combined into the same response, but the content of the files remains the same, so the overall file size is exactly the same as before the bundling. But notice how, even on a local dev machine (where the network latency between the browser and server is minimal), the act of bundling the CSS and JavaScript files together still manages to reduce the overall page load time by almost 20%. Over a slow network the performance improvement would be even better.

    Minification

    The next release of ASP.NET is also adding a new feature that makes it easy to reduce or "minify" the download size of the content as well. This is a process that removes whitespace, comments and other unneeded characters from both CSS and JavaScript. The result is smaller files, which will download and load in a browser faster.
    The graph below shows the performance gain we are seeing when both bundling and minification are used together. Even on my local dev box (where the network latency is minimal), we now have a 40% performance improvement from where we originally started. On slow networks (and especially with international customers), the gains would be even more significant.

    Using Bundling and Minification inside ASP.NET

    The upcoming release of ASP.NET makes it really easy to take advantage of bundling and minification within projects and see performance gains like those in the scenario above. The way it does this allows you to avoid having to run custom tools as part of your build process; instead, ASP.NET has added runtime support to perform the bundling/minification for you dynamically (caching the results to make sure performance is great). This enables a really clean development experience and makes it super easy to start to take advantage of these new features. Let's assume that we have a simple project that has 4 JavaScript files and 6 CSS files.

    Bundling and Minifying the .css files

    Let's say you wanted to reference all of the stylesheets in the "Styles" folder above on a page. Today you'd have to add multiple CSS references to get all of them, which would translate into 6 separate HTTP requests. The new bundling/minification feature instead allows you to bundle and minify all of the .css files in the Styles folder, simply by sending a URL request to the folder (in this case "styles") with an appended "/css" path after it. This will cause ASP.NET to scan the directory, bundle and minify the .css files within it, and send back a single HTTP response with all of the CSS content to the browser. You don't need to run any tools or pre-processors to get this behavior. This enables you to cleanly separate your CSS into separate logical .css files and maintain a very clean development experience, while not taking a performance hit at runtime for doing so. The Visual Studio designer will also honor the new bundling/minification logic, so you'll still get a WYSIWYG designer experience inside VS as well.

    Bundling and Minifying the JavaScript files

    Like the CSS approach above, if we wanted to bundle and minify all of our JavaScript into a single response, we could send a URL request to the folder (in this case "scripts") with an appended "/js" path after it. This will cause ASP.NET to scan the directory, bundle and minify the .js files within it, and send back a single HTTP response with all of the JavaScript content to the browser. Again, no custom tools or build steps are required in order to get this behavior. And it works with all browsers.

    Ordering of Files within a Bundle

    By default, when files are bundled by ASP.NET they are first sorted alphabetically, just as they are shown in Solution Explorer. Then they are automatically shifted around so that known libraries and their custom extensions, such as jQuery, MooTools and Dojo, are loaded before anything else. So the default order for the merged bundling of the Scripts folder shown above will be:

        jquery-1.6.2.js
        jquery-ui.js
        jquery.tools.js
        a.js

    By default, CSS files are also sorted alphabetically and then shifted around so that reset.css and normalize.css (if they are there) go before any other file.
    So the default sorting of the bundling of the Styles folder shown above will be:

        reset.css
        content.css
        forms.css
        globals.css
        menu.css
        styles.css

    The sorting is fully customizable, though, and can easily be changed to accommodate most use cases and any common naming pattern you prefer. The goal with the out-of-the-box experience, though, is to have smart defaults that you can just use and be successful with.

    Any number of directories/sub-directories supported

    In the example above we just had a single "Scripts" and "Styles" folder for our application. This works for some application types (e.g. single-page applications). Often, though, you'll want to have multiple CSS/JS bundles within your application; for example: a "common" bundle that has core JS and CSS files that all pages use, and then page-specific or section-specific files that are not used globally. You can use the bundling/minification support across any number of directories or sub-directories in your project; this makes it easy to structure your code so as to maximize the bundling/minification benefits. Each directory by default can be accessed as a separate URL-addressable bundle.

    Bundling/Minification Extensibility

    ASP.NET's bundling and minification support is built with extensibility in mind, and every part of the process can be extended or replaced.

    Custom Rules

    In addition to enabling the out-of-the-box, directory-based bundling approach, ASP.NET also supports the ability to register custom bundles using a new programmatic API we are exposing, with code within an application's Global.asax class. The API allows you to add/remove/filter files that go into the bundle on a very granular level, and the custom bundle can then be referenced anywhere within the application using a <script> reference (a sketch of what this registration might look like appears at the end of this post).

    Custom Processing

    You can also override the default CSS and JavaScript bundles to support your own custom processing of the bundled files (for example: custom minification rules, support for Sass, LESS or CoffeeScript syntax, etc.). You can indicate that you want to replace the built-in minification transforms with custom classes, for example a MyJsTransform and a MyCssTransform, that subclass the JavaScript and CSS minifiers respectively and add extra functionality. The end result of this extensibility is that you can plug into the bundling/minification logic at a deep level and do some pretty cool things with it.

    2 Minute Video of Bundling and Minification in Action

    Mads Kristensen has a great 90 second video that shows off using the new Bundling and Minification feature. You can watch the 90 second video here.

    Summary

    The new bundling and minification support within the next release of ASP.NET will make it easier to build fast web applications. It is really easy to use, and doesn't require major changes to your existing dev workflow. It also supports a rich extensibility API that enables you to customize it however you want. You can easily take advantage of this new support within ASP.NET MVC, ASP.NET Web Forms and ASP.NET Web Pages based applications.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I use Twitter to do quick posts and share links. My Twitter handle is: @scottgu
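    As a rough sketch of the custom bundle registration described under "Custom Rules" above (the original post showed this code in screenshots; the listing below is an assumption based on the preview-era optimization API, not ScottGu's original code):

        using System.Web.Optimization; // early previews shipped this as Microsoft.Web.Optimization

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // Hypothetical registration of the "customscript" bundle with JS minification.
                var bundle = new Bundle("~/customscript", typeof(JsMinify));
                bundle.AddFile("~/scripts/jquery.tools.js");
                bundle.AddFile("~/scripts/a.js");
                BundleTable.Bundles.Add(bundle);
            }
        }

    A page would then presumably reference the bundle by its virtual path, e.g. <script src="customscript"></script>.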

  • yum issue - error msg

    - by Monkey
    I am using Oracle Linux Server 6.2, and yum does not work. A manual wget was already used, following https://blogs.oracle.com/OTNGarage/entry/how_to_subscribe_to_the . There is always something about Dropbox:

        yum update firefox
        Loaded plugins: refresh-packagekit, security
        http://linux.dropbox.com/fedora/6Server/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404"
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: Dropbox. Please verify its path and try again

    Does anybody know a workaround?
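    A hedged workaround, since the 404 comes from the third-party Dropbox repository rather than the Oracle ones: skip or disable that repo (the repo id Dropbox is taken from the error text; the exact file name under /etc/yum.repos.d is a guess):

        # One-off: run the update without the broken repo.
        yum --disablerepo=Dropbox update firefox

        # Permanent: disable it in its repo file (the filename may differ on your box).
        sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/dropbox.repo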

  • How to change nginx temp & log folders or disable logging completely

    - by Ehsan Khodarahmi
    I'm running nginx 1.3.5 under Windows 7. I need to execute nginx directly from read-only media (CD or DVD), but when I try to run it, it fails with this error:

        nginx: [alert] could not open error log file: CreateFile() "logs/error.log" failed (5: Access is denied)
        2012/08/28 13:52:46 [emerg] 5604#2864: CreateDirectory() "J:\nginx-1.3.5/temp/client_body_temp" failed (5: Access is denied)

    where J is my CD-ROM drive letter. I've changed nginx.conf to disable logging completely, but it seems that it still tries to create a file named 'error.log' in the '/logs' folder and some temporary content in the '/temp' folder at startup. So I want to change the 'logs' and 'temp' directory paths to the Windows temp folder (%temp%), but I don't have any idea how to do it. I also want to know why nginx still creates 'logs/error.log' after logging has been disabled.
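    A hedged sketch: nginx opens logs/error.log relative to its prefix before it has read the config (which is why disabling logging in nginx.conf does not stop it), so pointing the prefix at a writable directory with -p is the usual way around read-only media:

        rem Start nginx with a writable prefix; the config can stay on the read-only drive.
        nginx -p %TEMP%\nginx -c J:\nginx-1.3.5\conf\nginx.conf

    and in nginx.conf (paths here are relative to the new prefix):

        error_log  temp/error.log;
        access_log off;
        client_body_temp_path temp/client_body_temp;
        proxy_temp_path       temp/proxy_temp;
        fastcgi_temp_path     temp/fastcgi_temp;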

  • How can I connect JConsole to WebLogic using the WL SSL Listen Port

    - by Mircea Vutcovici
    I would like to be able to use JConsole on remote WebLogic servers via the multiplexer port over SSL. Is this possible without making any configuration changes to WebLogic, only by adding some jars (e.g. wljmxclient.jar) or parameters to JConsole? I've tried variations of the following command without success:

        $JAVA_HOME/bin/jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:\
        $JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar \
        -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -debug \
        service:jmx:rmi:///jndi/iiop://server_name:7441/jmxrmi

    I think part of the problem is that SSL is not enabled in JConsole.
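    One hedged variation to try (not verified against this setup): use a secure IIOP URL and give the JConsole JVM a trust store containing the server certificate, plus WebLogic's hostname-verification override:

        $JAVA_HOME/bin/jconsole \
          -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar \
          -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote \
          -J-Djavax.net.ssl.trustStore=/path/to/trust.jks \
          -J-Dweblogic.security.SSL.ignoreHostnameVerification=true \
          service:jmx:rmi:///jndi/iiops://server_name:7441/jmxrmi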

  • Failed to load viewstate. The control tree into which viewstate is being loaded...etc

    - by alaa9jo
    Two days ago, a colleague of mine tried to publish an ASP.NET website (built in VS2008 using framework 3.5) to our server. He configured everything in IIS (he made sure that the selected ASP.NET version was 2.0) and launched the website. At first it was working great, but when he clicked on a specific TreeView: BOOM:

        "Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request."

    On that page there are two controls: a TreeView and a PlaceHolder. When the user selects any node, its controls are created dynamically into that placeholder. The first time it works fine, but when the user selects another node, the issue appears.

    He called me to help with this issue. It was the first time I had seen such an issue, so I scratched my head and then decided to eliminate the possibilities one by one. On the development machine it worked perfectly. He published the website to the local IIS and, again, it worked perfectly. I took a copy of the website and published it on my laptop, with no issues at all, which means it was not an issue in the code.

    So there was something missing or wrong on our server [it runs Windows Server 2003]. We went to the server and checked the web.config and the configuration in IIS: nothing wrong so far. So I decided to check whether framework 3.5 was installed or not, and the answer: it wasn't installed.

    Of course he had assumed it was installed, and there was nothing in the "ASP.Net version" setting in IIS to say otherwise, because frameworks 3.0 and 3.5 are not listed there [2.0 is listed instead]. The only way to check whether it is installed is to look for the framework in this path: [WINDOWS Folder]\Microsoft.NET\Framework, or to check in Add or Remove Programs.

    The obvious solution for his case: we installed Framework 3.5 SP1 on the server, restarted the machine, and it worked! If anyone has faced the same issue and solved it with the same solution or a different one, please post it here to share the experience.
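    For reference, a quick hedged way to see which framework builds are present on a machine (the paths and registry key are the standard ones; output will vary):

        :: List the installed .NET Framework version folders.
        dir /b %WINDIR%\Microsoft.NET\Framework

        :: Or query the registry for installed framework setup keys.
        reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP"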

    Read the article

  • Questions, Knowledge Checks and Assessments

    - by ted.henson
    Questions should be used to reinforce concepts throughout the title. You have the option to include questions in the course, in assessments, in Knowledge Checks, or in any combination. Questions are required for creating Knowledge Checks and assessments. It is important to remember that questions that are not in assessments are not tracked, so be sure to structure your outline so that questions are added to the appropriate assignable unit. I usually recommend that questions appear directly below their related section. This serves two purposes: first, it helps ensure that the related content and question stay close to one another; second, it ensures that when the "link to subject" option is used, the question links back to the related content.

    Knowledge Checks are created from the questions that have been added to the related assignable unit. Use Knowledge Checks to give users an additional opportunity to review what they have learned: a Knowledge Check lets users check their own knowledge without being tracked or scored. Many users like having this self-check option, especially if they know they are going to be tested later. Each assignable unit can have its own Knowledge Check.

    Assessments provide a way to measure knowledge or understanding of the course material. The results of each assessment are scored and tracked. Assessments are created from the questions that have been added to the related assignable unit(s), and each assignable unit, including the Title AU, can have multiple assessments. Consider how your knowledge paths will be structured when planning your assessments; for instance, you can create a multiple-activity knowledge path with multiple assessments from the same title or assignable unit. Also remember that in Manager an assessment can be either a pre- or post-assessment. Pre-assessments let students discover what they already know about a specific topic or subject, and they are important if the personal course feature is being used. Post-assessments let you test students' knowledge or understanding after they complete the material.

    Read the article

  • CodePlex Daily Summary for Sunday, December 26, 2010

    Popular Releases:
    - NoSimplerAccounting 6.0: fixed a bug in the expense category report.
    - Temporary Data Storage Folder, TDS Folder source code: latest version 0.2 Beta code; to get the password, request it at neutron.request@gmail.com.
    - NHibernate Mapping Generator 2.0: added support for Postgres (thanks to Angelo).
    - NewLife XCode: XCode v6.5.2010.1223.
    - Windows Workflow Foundation on Codeplex, Microsoft.Activities.UnitTesting 1.71: new in this release: Episode as Tasks using the Task Parallel Library; a CHM help file is now included; StyleCop compliance is resulting in massive refactoring but should not cause breaking changes. Breaking change: DefaultTimeout is now a TimeSpan; default int timeout parameters were removed in favor of overloads.
    - MiniTwitter 1.64.
    - Ajax ASP.Net Forum, InSeCla Forum Software v0.1.9: features can now be customized per category level (under ADMIN/Categories); anonymous threads and anonymous posts can be allowed; virtual (friendly) URLs have finally been added, forums can mix virtual and normal URLs, and this improves SEO indexing; instant moderation (delete thread, move thread) is available to members of the moderator or administrator groups.
    - VivoSocial 7.4.0: see changes at http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48.
    - Umbraco CMS, Umbraco 4.6 Beta (codename JUNO): focuses on an improved installation experience and a number of robust developer features, with more than 89 bug fixes since the 4.5.2 release; includes updated starter kits (Simple, Blog, Personal, Business), beautiful free customizable skins, a skinning engine with skin customization (see the Skinning Documentation Kit), default dashboards on install with a hide option, and an updated login t...
    - SSH.NET Library 2010.12.23: fixes: large directory structures can be retrieved, the ssh-dss algorithm is fixed, and sftp file attributes are populated; new features: passphrase support for private keys, the diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha256, and diffie-hellman-group-exchange-sha1 key-exchange algorithms, multiple key files for authentication, and support for the "keyboard-interactive" authentication method...
    - ASP.NET MVC SiteMap provider, MvcSiteMapProvider 2.3.0: also listed in the NuGet feed; this will be the last release targeting ASP.NET MVC 2 and .NET 3.5, and MvcSiteMapProvider 3.0.0 will target ASP.NET MVC 3 and .NET 4; the Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan; ISiteMapNodeUrlResolver is now completely responsible for generating th...
    - SuperSocket, an extensible socket application framework, SuperSocket 1.3 beta 2: compared with 1.3 beta 1, adds support for .NET 3.5, replaces the EntLib Logging Application Block with Log4Net, improves the logging code, fixes a bug in the QuickStart sample project, and adds IPv6 support.
    - TibiaPinger v1.0.
    - Media Companion 3.400: extract the entire archive to a folder with user access rights (e.g. desktop, documents); a manual is included to get you started.
    - Multicore Task Framework, MTF 1.0.1: release 1.0.1 of the framework.
    - SQL Monitor (tracking SQL Server activities) 3.0 alpha 7: adds script save/load in the user query window; fixes the connection dialog still asking for a user name when Windows auth is chosen; auto-opens a table when its node is double-clicked; improves alert messages and adds a log-only method.
    - EnhSim 2.2.6 ALPHA: supports WoW patch 4.03a at level 85; requires the Microsoft Visual C++ 2010 Redistributable Package (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84); the GUI requires the .NET 4.0 Framework (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992); fixing up some r...
    - ASP.NET MVC Project Awesome (jQuery Ajax helpers) 1.4.3: helpers (controls) for building highly responsive, interactive Ajax-enabled web applications, including Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup, and Pager; new: improvements for confirm, popup, and popup form, a RenderView controller extension, a substantially improved CRUD user experience in the live demo, and search; all features are shown in the live demo.
    - GanttPlanner V1.0: includes GanttPlanner.dll and a demo application.
    - N2 CMS 2.1 release candidate 3: Web Platform Installer support is available; N2 is a lightweight CMS framework for ASP.NET that helps you build great web sites anyone can update; major changes: support for auto-implemented properties ({get;set;}, based on a contribution by And Poulsen), a bunch of bug fixes, file manager improvements (multiple file upload, resize images to fit), a new image gallery, infinite scroll paging on news, and content templates; first time with N2? Try the demo site or download one of the templ...

    New Projects:
    - A.I. Semantic Net Tests: a test using "semantic net" artificial intelligence, developed in C++.
    - Awesome.ClientID: an HttpModule that serializes the ClientIDs of server controls and page/usercontrol properties to JSON, making JavaScript work easier without outputting ClientIDs; a clean-ClientID solution for .NET 2.0/3.5.
    - BitTweet: a fast, simple, easy-to-use Twitter client with tweeting, reading, and writing functions, written in Visual Basic .NET 2008. Tweet in a bit!
    - CarRental001.
    - Crudo: CRUDO, the MCG (Model-Controller-Generator) CGF (Code Generation Framework); project home page: http://adityayadav.com/CRUDO_The_MCG_Model_Controller_Generator_CGF_Code_Generation_Framework.aspx; licenses: GPL v2 or commercial (contact us for information).
    - Effeks: an FX trading sample application using Prism.
    - Enhanced WebPart: a webpart with an enhanced properties editor; it can display properties of most common types, provides an easy way to add custom controls for displaying your own types, and fully supports localization.
    - Esteffin's graphic: graphics exercises, 2010.
    - farasun: source code repository for farasun.wordpress.com.
    - GCTF: a .NET challenge held at FACISA.
    - Imgshow Plugin for Windows Live Writer: publish multimedia content such as YouTube, Facebook Video, PDF, and other Imgshow-supported services.
    - Instant Messenger Using WPF and WCF: an instant messenger built on WCF and WPF.
    - jMenu: a DotNetNuke module covering all kinds of jQuery menu plugins.
    - Mcopy API: copies files and directories in the Windows shell; supports long paths (under 32,000 characters) and network paths (e.g. \\server\share or \\127.0.0.1\share).
    - MOSSWebServices: code for interacting with the 21 web services provided by MOSS, targeted at WSS developers; currently covers the "UserGroups" web service, with code for the other MOSS web services to come soon.
    - Nest Hub - Twitter Client: a free Twitter client for PC, developed in VB.NET.
    - Neural Networks Library: a neural networks library by Sefnaj.
    - Parallelism: a unified parallel and concurrency framework; project home page: http://adityayadav.com/Parallelism.aspx; licenses: GPL v2 or commercial (contact us for information).
    - PDF IRM Protector For SharePoint 2007 And 2010: controls the conversion of PDF documents to their encrypted, rights-managed format and their decryption back to the original format; targets both SharePoint 2007 and 2010.
    - PHP Form Validator - Isis: simplifies form validation for PHP, offering a robust flag-and-function system for encoding forms with the correct validation data.
    - Recycle Me: an open-source project to help educate people about reuse and recycling of everyday items; as a social project it needs funds for research and surveys, volunteers are most welcome, and certifications for participation are available.
    - Sencha ExtJS Import Library for Script#: a Script# import library for the Sencha Ext JS JavaScript framework.
    - Separating Vertices: given a graph as a collection of edges and vertices, identifies the separating vertices, biconnected components, and edges.
    - SIG - SISTEMA DE GESTÃO INTERNA: an internal management system built for MODEMTELECOM, a telecommunications consultancy.
    - Smart Card COM: a Smart Card COM library, useful for accessing smart cards from a web page in Internet Explorer.
    - Social Hub.
    - Stock Research: a test project for stock research.
    - taokeba.
    - Temporary Data Storage Folder: creates a folder for data you want to store for temporary tasks, such as a file to be deleted after some time; those files are automatically deleted on exit; learn more at www.neutronsoftwares.weebly.com.
    - TukUI Updater: an open-source Windows updater tool for the World of Warcraft(R) addon TukUI, developed in .NET 3.5; discussion thread at the official TukUI forums: http://www.tukui.org/v2/forums/topic.php?id=4982.
    - WebUtil: a project for using all web features.

    Read the article

  • Dovecot throws obsolete warnings, even though dovecot.conf updated on Ubuntu 11

    - by John Bowlinger
    In trying to set up SASL for dovecot on Ubuntu 11, I keep getting obsolete warnings in my log: Sep 10 15:33:53 server1 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:24: passdb {} has been replaced by passdb { driver= } Sep 10 15:33:53 server1 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:27: userdb {} has been replaced by userdb { driver= } Even though my dovecot.conf file looks like this: protocols = none auth default { mechanisms = plain login passdb { driver=pam } userdb { driver=passwd } socket listen { client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } } } Even when I try: driver=etc/pam.d/dovecot driver=etc/passwd I still get the same error. Looking at the example config file: cat /usr/share/doc/dovecot-common/dovecot/example-config/dovecot.conf was of no help. Dovecot is running: ps -A | grep 'dovecot' 9663 ? 00:00:00 dovecot But I can't seem to get that elusive "dovecot-auth" process. Anyone know what's going on?
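
    A hedged guess at the cause, based on Dovecot's 1.x-to-2.x transition: Ubuntu 11 ships Dovecot 2.0, which still parses the 1.x-style auth default { ... } block through a compatibility layer, and that layer emits the obsolete-setting warnings regardless of what the driver lines say; the separate dovecot-auth process also no longer exists as its own binary in 2.x. A sketch of the same configuration in native 2.0 syntax (untested on this exact setup):

        # Dovecot 2.0-style equivalent of the 1.x "auth default" block;
        # paths and ownership copied from the question.
        auth_mechanisms = plain login
        passdb {
          driver = pam
        }
        userdb {
          driver = passwd
        }
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            mode  = 0660
            user  = postfix
            group = postfix
          }
        }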

    Read the article
