Search Results

Search found 40159 results on 1607 pages for 'multiple users'.


  • Add multiple buttons to a view programmatically, call the same method, determine which button it was

    - by just_another_coder
    I want to programmatically add multiple UIButtons to a view - the number of buttons is unknown at compile time. I can make one or more UIButtons like so (in a loop, but shortened for simplicity): UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect]; [button addTarget:self action:@selector(buttonClicked:) forControlEvents:UIControlEventTouchDown]; [button setTitle:@"Button x" forState:UIControlStateNormal]; button.frame = CGRectMake(100.0, 100.0, 120.0, 50.0); [view addSubview:button]; Copied/Edited from this link: http://stackoverflow.com/questions/1378765/how-do-i-create-a-basic-uibutton-programmatically But how do I determine in buttonClicked: which button was clicked? I'd like to pass tag data if possible to identify the button.
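
    One approach - shown here as a minimal, untested sketch - is to give each button a distinct tag as you create it (UIButton inherits the integer tag property from UIView), then read the tag back from the sender argument in the action method; the loop bound numberOfButtons is a hypothetical name:

        // Creation loop: tag each button with its index.
        for (int i = 0; i < numberOfButtons; i++) {
            UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            button.tag = i; // any integer that identifies this button later
            [button addTarget:self action:@selector(buttonClicked:) forControlEvents:UIControlEventTouchDown];
            [button setTitle:[NSString stringWithFormat:@"Button %d", i] forState:UIControlStateNormal];
            button.frame = CGRectMake(100.0, 100.0 + 60.0 * i, 120.0, 50.0);
            [view addSubview:button];
        }

        // Action method: the sender parameter is the button that was tapped.
        - (void)buttonClicked:(UIButton *)sender {
            NSLog(@"Button with tag %ld was clicked", (long)sender.tag);
        }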


  • An error occurred synchronizing Windows with time.windows.com

    - by Killrawr
    Okay so I've tried stopping/registering the w32time service on this Windows Server 2008 Enterprise computer. C:\Users\Administrator>net stop w32time The Windows Time service is stopping. The Windows Time service was stopped successfully. C:\Users\Administrator>w32tm /unregister The following error occurred: Access is denied. (0x80070005) C:\Users\Administrator>w32tm /unregister W32Time successfully unregistered. C:\Users\Administrator>w32tm /register W32Time successfully registered. C:\Users\Administrator>net start w32time The Windows Time service is starting. The Windows Time service was started successfully. (Source: http://social.technet.microsoft.com/Forums/en-US/winserverDS/thread/9bdfc2cc-4775-4435-8868-57d214e1e3ba/) And I get this error from the Date and Time, Internet Time tab (after also following the steps here). I've even tried the Atomic Time Clock Worldtimeserver and I get the error The following error occurred: The specified module could not be found. (0x8007007E). I've also disabled the Windows Firewall, which might have been blocking the synchronization. I've done a file scan with sfc /scannow that came back with no errors. C:\Users\Administrator>sfc /scannow Beginning system scan. This process will take some time. Beginning verification phase of system scan. Verification 100% complete. Windows Resource Protection did not find any integrity violations. C:\Users\Administrator> But I'm not having much luck. Is there any way to solve this, or are the time.windows.com servers unsupported because the software is from 2008? (I really don't know :/) My ping result to time.windows.com: C:\Users\Administrator>ping time.windows.com Pinging time.microsoft.akadns.net [65.55.21.22] with 32 bytes of data: Request timed out. Request timed out. Request timed out. Request timed out. Ping statistics for 65.55.21.22: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), And tracert result: C:\Users\Administrator>tracert time.windows.com Tracing route to time.microsoft.akadns.net [65.55.21.24] over a maximum of 30 hops: 1 1 ms <1 ms <1 ms 192.168.1.1 2 32 ms 31 ms 32 ms be2-100.bras1wtc.wlg.vf.net.nz [203.109.129.113] 3 31 ms 32 ms 31 ms be5-100.ppnzwtc01.wlg.vf.net.nz.129.109.203.in-addr.arpa [203.109.129.114] 4 31 ms 31 ms 31 ms gi0-2-0-3.ppnzwtc01.wlg.vf.net.nz.180.109.203.in-addr.arpa [203.109.180.210] 5 31 ms 31 ms 30 ms gi0-2-0-3.ppnzwtc02.wlg.vf.net.nz [203.109.180.209] 6 167 ms 166 ms 166 ms ip-141.199.31.114.VOCUS.net.au [114.31.199.141] 7 175 ms 175 ms 175 ms microsoft.com.any2ix.coresite.com [206.223.143.143] 8 177 ms 180 ms 176 ms xe-7-0-2-0.by2-96c-1a.ntwk.msn.net [207.46.42.176] 9 205 ms 205 ms 204 ms xe-10-0-2-0.co1-96c-1b.ntwk.msn.net [207.46.45.31] 10 * * * Request timed out. 11 * * * Request timed out. 12 * * * Request timed out. 13 * * * Request timed out. 14 * * * Request timed out. 15 * * * Request timed out. 16 ^C And nslookup: C:\Users\Administrator>nslookup time.windows.com Server: UnKnown Address: 192.168.1.1 Non-authoritative answer: Name: time.microsoft.akadns.net Address: 65.55.21.22 Aliases: time.windows.com
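
    The failed pings are not conclusive on their own, since many time servers drop ICMP while still answering NTP on UDP 123. One hedged workaround, assuming outbound UDP 123 is open, is to point the Windows Time service at a different NTP pool and force a resync:

        C:\Users\Administrator>w32tm /config /manualpeerlist:"pool.ntp.org,0x8" /syncfromflags:manual /update
        C:\Users\Administrator>net stop w32time
        C:\Users\Administrator>net start w32time
        C:\Users\Administrator>w32tm /resync /rediscover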


  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However, the world is not ideal and databases are often a cause of major headaches when applications need to scale to accommodate more connections, transactions, or both. In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications. Getting transactional consistency across multiple SQL database shards can prove to be very difficult. Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself. What if database sharding was no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta and they have recently released their first release candidate, so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput. Each NuoDB database consists of at least three processes that enable a single database to run across multiple hosts. These processes include a Broker, a Transaction Engine and a Storage Manager. Brokers are responsible for connecting client applications to Transaction Engines and maintain a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Engines if they don’t exist in memory. The secret to NuoDB’s approach to solving the sharding problem is that it is a truly distributed, peer-to-peer, SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires. NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are also making general administration of the database simpler as well. Their distributed database appears to you, as a user, like a single SQL database. With their RC1 release they have also provided a web-based administrative console that they call NuoConsole.
This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB


  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation. Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand. Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page. Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools. Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables.  These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data. Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions. Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems. 
So if you have sixty seconds to spare, tell us on the survey!
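
    To make step 3 concrete, a typical query against a star schema joins the fact table to its dimension tables and aggregates the measures; the table and column names below are hypothetical:

        SELECT p.ProductName, d.CalendarYear, SUM(f.UnitSales) AS UnitSales, SUM(f.Revenue) AS Revenue
        FROM FactSales AS f
        JOIN DimProduct AS p ON f.ProductKey = p.ProductKey
        JOIN DimDate AS d ON f.DateKey = d.DateKey
        GROUP BY p.ProductName, d.CalendarYear;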


  • Can this PHP function be improved?

    - by jasondavis
    Below is some code I am working on for a navigation menu: if you are on a certain page, it will add a "current" CSS class to the proper tab. I am curious if there is a better way to do this in PHP, because it really seems like a lot of code for such a simple task. My pages will also have the jQuery library already loaded; would it be better to set the tab with jQuery instead of PHP? Any tips appreciated. <?php active_header('page identifier goes here'); //ie; 'home' or 'users.online' function active_header($page_name) { // arrays for header menu selector $header_home = array('home' => true); $header_users = array( 'users.online' => true, 'users.online.male' => true, 'users.online.female' => true, 'users.online.friends' => true, 'users.location' => true, 'users.featured' => true, 'users.new' => true, 'users.browse' => true, 'users.search' => true, 'users.staff' => true ); $header_forum = array('forum' => true); $header_more = array( 'widgets' => true, 'news' => true, 'promote' => true, 'development' => true, 'bookmarks' => true, 'about' => true ); $header_money = array( 'account.money' => true, 'account.store' => true, 'account.lottery' => true, 'users.top.money' => true ); $header_account = array('account' => true); $header_mail = array( 'mail.inbox' => true, 'mail.sentbox' => true, 'mail.trash' => true, 'bulletins.post' => true, 'bulletins.my' => true, 'bulletins' => true ); // set variables if the array value exists if (isset($header_home[$page_name])){ $current_home = 'current'; }else if (isset($header_users[$page_name])){ $current_users = 'current'; }else if (isset($header_forum[$page_name])){ $current_forum = 'current'; }else if (isset($header_more[$page_name])){ $current_more = 'current'; }else if (isset($header_money[$page_name])){ $current_money = 'current'; }else if (isset($header_account[$page_name])){ $current_account = 'current'; }else if (isset($header_mail[$page_name])){ $current_mail = 'current'; } // show the links echo '<li class="' . (isset($current_home) ? $current_home : '') . '"><a href=""><em>Home</em></a></li>'; echo '<li class="' . (isset($current_users) ? $current_users : '') . '"><a href=""><em>Users</em></a></li>'; echo '<li class="' . (isset($current_forum) ? $current_forum : '') . '"><a href=""><em>Forum</em></a></li>'; echo '<li class="' . (isset($current_more) ? $current_more : '') . '"><a href=""><em>More</em></a></li>'; echo '<li class="' . (isset($current_money) ? $current_money : '') . '"><a href=""><em>Money</em></a></li>'; echo '<li class="' . (isset($current_account) ? $current_account : '') . '"><a href=""><em>Account</em></a></li>'; echo '<li class="' . (isset($current_mail) ? $current_mail : '') . '"><a href=""><em>Mail</em></a></li>'; } ?>
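
    One possible refactor - a sketch, not tested against the original markup - is to make the function data-driven: map each page identifier to a section key once, then render every tab from a single array, which removes the repeated if/else chain and the per-tab variables:

        <?php
        function active_header($page_name) {
            // map each page identifier to the tab it belongs to
            // (fill in the remaining users.*, more, money and mail
            // identifiers from the original arrays)
            $sections = array(
                'home' => 'home',
                'users.online' => 'users',
                'users.online.male' => 'users',
                'forum' => 'forum',
                'account' => 'account',
                'mail.inbox' => 'mail',
            );
            $tabs = array(
                'home' => 'Home', 'users' => 'Users', 'forum' => 'Forum',
                'more' => 'More', 'money' => 'Money', 'account' => 'Account',
                'mail' => 'Mail',
            );
            $current = isset($sections[$page_name]) ? $sections[$page_name] : null;
            foreach ($tabs as $key => $label) {
                $class = ($key === $current) ? ' class="current"' : '';
                echo '<li' . $class . '><a href=""><em>' . $label . '</em></a></li>';
            }
        }
        ?>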


  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, Oracle/non Oracle platforms and it’s not a mandatory requirement to use Exadata as the base platform. Database-as-a-Service (DBaaS) is an important trend these days and the top business drivers motivating customers towards private database cloud model include constant pressure to reduce IT Costs and Complexity, and also to be able to improve Agility and Quality of Service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment and Exadata being already a proven best-in-class popular consolidation platform, we are seeing now more and more customers starting to evolve from Exadata based platform into an agile self service driven private database cloud using Oracle Enterprise Manager 12c. Together Exadata Database Machine and Enterprise Manager 12c provides industry’s most comprehensive and integrated solution to transform from a typical silo’ed environment into enterprise class database cloud with self service, rapid elasticity and pay-per-use capabilities.   In today’s post, I’ll list down the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are chalked down based on a recent DBaaS implementation from a real customer engagement - Project Planning - First step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network, security requirements, and delivering the project plan. In a Cloud project you plan around technology, business and processes all together so ensure you engage your actual end users and stakeholders early on in the project right from the scoping and planning stage. Setup your EM 12c Cloud Control Site – Once the project plan approval and sign off from stakeholders is achieved, refer to EM 12c Install guide and these are some important tips to follow during the site setup phase - Review the new EM 12c Sizing paper before you get started with install Cloud, Chargeback and Trending, Exadata plug ins should be selected to deploy during install Refer to EM 12c Administrator’s guide for High Availability, Security, Network/Firewall best practices and options Your management and managed infrastructure should not be combined i.e. EM 12c repository should not be hosted on same Exadata where target Database Cloud is to be setup Setup Roles and Users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR), Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by Super Administrator via Setup menu –> Security option. For Self Service/SSA users custom role(s) based on EM_SSA_USER should be created and EM_USER, PUBLIC roles should be revoked during SSA user account creation. Configure Software Library – Cloud Administrator logs in and in this step configures software library via Enterprise menu –> provisioning and patching option and the storage location is OMS shared filesystem. Software Library is the centralized repository that stores all software entities and is often termed as ‘local store’. Setup Self Update – Self Update is one of the most innovative and cool new features in EM 12c framework. 
Self update can be accessed via Setup -> Extensibility option by Super Administrator and is the unified delivery mechanism to get all new and updated entities (Agent software, plug ins, connectors, gold images, provisioning bundles etc) in EM 12c. Deploy Agents on all Compute nodes, and discover Exadata targets – Refer to Exadata discovery cookbook for detailed walkthrough to ensure successful discovery of Exadata targets. Configure Privilege Delegation Settings – This step involves deployment of privilege setting template on all the nodes by Super Administrator via Setup menu -> Security option with the option to define whether to use sudo or powerbroker for all provisioning and patching operations. Provision Grid Infrastructure with RAC Database on Compute Nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In case of Exadata, Grid Infrastructure and RAC Database software is already deployed on compute nodes via OneCommand from Oracle, so SSA Administrator just needs to discover Oracle Homes and Listener as EM targets. Databases will be created as and when users request for databases from cloud. Customize Create Database Deployment Procedure – the actual database creation steps are "templatized" in this step by Self Service Administrator and the newly saved deployment procedure will be used during service template creation in next step. This is an important step and make sure you have locked all the required variables marked as locked as ‘Y’ in this table. Setup Self Service Portal – This step involves setting up of zones, user quotas, service templates, chargeback plan. The SSA portal is setup by Self Service Administrator via Setup menu -> Cloud -> Database option and following guided workflow. Refer to DBaaS cookbook for details. You also have an option to customize SSA login page via steps documented in EM 12c Cloud Administrator’s guide Final Checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on Self Service Portal features and overall DBaaS model and SSA administrators should be familiar with Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and overall EM 12c monitoring framework. GO LIVE – Announce rollout of Database-as-a-Service to your SSA users. Users can login to the Self Service Portal and request/monitor/view their databases in Exadata based database cloud. Congratulations! You just delivered a successful database cloud implementation project! In future posts, we will cover these additional useful topics around database cloud – DBaaS Implementation tips and tricks – right from setup to self service to managing the cloud lifecycle ‘How to’ enable real production databases copies in DBaaS with rapid provisioning in database cloud Case study of a customer who recently achieved success with their transformational journey from traditional silo’ed environment on to Exadata based database cloud using Enterprise Manager 12c. More Information – Podcast on Database as a Service using Oracle Enterprise Manager 12c Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide DBaaS Cookbook Exadata Discovery Cookbook Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter


  • The Unintended Consequences of Sound Security Policy

    - by Tanu Sood
    Author: Kevin Moulton, CISSP, CISM Meet the Author: Kevin Moulton, Senior Sales Consulting Manager, Oracle Kevin Moulton, CISSP, CISM, has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog so stay tuned for more posts from him. When I speak to a room of IT administrators, I like to begin by asking them if they have implemented a complex password policy. Generally, they all nod their heads enthusiastically. I ask them if that password policy requires long passwords. More nodding. I ask if that policy requires upper and lower case letters – faster nodding – numbers – even faster – special characters – enthusiastic nodding all around! I then ask them if their policy also includes a requirement for users to regularly change their passwords. Now we have smiles with the nodding! I ask them if the users have different IDs and passwords on the many systems that they have access to. Of course! I then ask them if, when they walk around the building, they see something like this: Thanks to Jake Ludington for the nice example. Can these administrators be faulted for their policies? Probably not but, in the end, end-users will find a way to get their job done efficiently. Post-It Notes to the rescue! I was visiting a business in New York City one day which was a perfect example of this problem. First I walked up to the security desk and told them where I was headed. They asked me if they should call upstairs to have someone escort me. Is that my call? Is that policy? I said that I knew where I was going, so they let me go. Having the conference room number handy, I wandered around the place in search of my destination. As I walked around, unescorted, I noticed the post-it note problem in abundance. Had I been so inclined, I could have logged in on almost any machine and into any number of systems. When I reached my intended conference room, I mentioned my post-it note observation to the two gentlemen with whom I was meeting. One of them said, “You mean like this,” and he produced a post-it note full of login IDs and passwords from his breast pocket! I gave him kudos for not hanging the list on his monitor. We then talked for the rest of the meeting about the difficulties faced by the employees due to the security policies. These policies, although well-intended, made life very difficult for the end-users. Most users had access to 8 to 12 systems, and the passwords for each expired at different times. The post-it note solution was understandable.
Who could remember even half of them? What could this customer have done differently? I am a fan of using a provisioning system, such as Oracle Identity Manager, to manage all of the target systems. With OIM, an email could be automatically sent to all users when it was time to change their password. The end-users would follow a link to change their password on a web page, and then OIM would propagate that password out to all of the systems that the user had access to, even if the login IDs were different. Another option would be an Enterprise Single Sign-On solution. With Oracle eSSO, all of a user’s credentials would be stored in a central, encrypted credential store. The end-user would only have to log in to their machine each morning and then, as they moved to each new system, Oracle eSSO would supply the credentials. Good-bye post-it notes! 3M may be disappointed, but your end users will thank you. I hear people say that this post-it note problem is not a big deal, because the only people who would see the passwords are fellow employees. Do you really know who is walking around your building? What are the password policies in your business? How do the end-users respond?


  • How to deal with the need to know multiple programming languages? When to stop learning new languages?

    - by Raphael
    I am a relatively young programmer. I am 23 and I have been programming professionally for about 5 years. Like most programmers, I started with C, learned some x86 assembly for fun and then I found C++, which turned out to be my greatest passion in the programming world. Programming with C and C++ forces you to learn platform-specific APIs, libraries and frameworks, each of which requires constant study and experimentation. After some time I had to move on to Java and C# as the demand in my region is basically for these languages. With these languages I entered the world of web development and then I had to learn JavaScript. Developing for the .NET Framework was exciting at first but I constantly felt as if I was getting tied up by Microsoft (and of course the .NET Framework was driving me away from Linux). For desktop development I could do pretty much everything I did with .NET using C++ with Qt, but for web development I had to look for an alternative. Quickly I found Django and then I proceeded to learn Python so I could use Django. Nowadays I am learning iOS development with Objective-C. So far it has been pretty easy to learn all these languages (C++ trained me well), but I am worried that someday I won't be able to keep track of them all. Just to clarify: the only languages I learned because I had to were C# and Java. All of the others I learned for fun, because I love programming and learning new things. Also I like to keep my skills sharp on desktop, web and mobile development. My question is: How do you keep track of multiple programming languages? (I mean, keep track of changes to these languages and keep your skills sharp.) And: Is there such a thing as enough programming languages?


  • C#.NET: How to update multiple .NET pages when a particular event occurs in one .NET page? In other words, how to use the Observer pattern (publish and subscribe to events)

    Problem: Suppose you have a scenario in which you have to update multiple pages when an event occurs in the main page. For example, imagine you have a main page where you are displaying a tab control. This tab control has 3 tab pages where you are loading 3 different user controls. On click of an update button in the main page, imagine you have to do something in all 3 tab panels. In other words, an event in the main page has to be handled in many other pages: an event in the main page which contains the tab control has to be handled in all the tab panels (user controls). Answer: Use the Observer pattern. Define a base page for the page that contains the tab control. Main page which contains the tab: Baseline_Baseline Base page for the above main page: BaselineBasePage User control that has to be updated for an event in the main page: Baseline_PriorNonDeloitte Source Code: public class BaselineBasePage : System.Web.UI.Page { IList<IObserver> lstControls = new List<IObserver>(); public void Add(IObserver userControl) { lstControls.Add(userControl); } public void Remove(IObserver userControl) { lstControls.Remove(userControl); } public void RemoveAllUserControls() { lstControls.Clear(); } public void Update(SaveEventArgs e) { foreach (IObserver LobjControl in lstControls) { LobjControl.Update(e); } } } public interface IObserver { void Update(SaveEventArgs e); } public partial class Baseline_Baseline : BaselineBasePage { . . . this.Add(_ucPI); this.Add(_ucPI1); protected void abActionBar_saveClicked(object sender, EventArgs e) { SaveEventArgs se = new SaveEventArgs(); se.TabType = (BaselineTabType)tcBaseline.ActiveTabIndex; this.Update(se); } } public class Baseline_PriorNonDeloitte : System.Web.UI.UserControl, IObserver { public void Update(SaveEventArgs e) { } } More info at: http://www.dofactory.com/Patterns/PatternObserver.aspx
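
    SaveEventArgs and BaselineTabType are referenced above but never shown; a minimal assumed definition that would let the sample compile is:

        // Assumed definitions - not part of the original post.
        public enum BaselineTabType { Baseline, Prior, Current }

        public class SaveEventArgs : EventArgs
        {
            public BaselineTabType TabType { get; set; }
        }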



  • Cisco ASA 5510 ASDM: Setting up multiple public static IP addresses on a single interface and route

    - by ssjaken
    Hi, I have a Cisco ASA 5510 using ASDM version 6.3. We have a webserver that has been written very specifically and I was given super direct "DO NOT DEVIATE" directions. This server has to get traffic from 3 different PUBLIC IPs that we own (our ISP gave us a block of 12 static addresses) on 4 different ports. Here are the directions I was given: externalIP1:22 - 172.17.5.50:22 - SSH externalIP1:443 - 172.17.5.50:23040 - SIT externalIP2:443 - 172.17.5.50:33040 - STAGE externalIP3:443 - 172.17.5.50:43040 - PROD My first question is, using ASDM (my contract employer demands I use ASDM over CLI), how do I get three public addresses to work on one interface? We are authenticating on PPPoE. I know to create a virtual interface with the static address, but when I do I cannot ping the address from another offsite machine. Secondly, where would I put the traffic redirect in? Would I go ahead and create ACLs or just make NAT routes? Thanks.
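
    A few hedged pointers: the extra public addresses are not assigned to the interface at all - the ASA answers for them once a NAT rule maps them - so no virtual interfaces should be needed as long as the ISP routes the block to the firewall. In ASDM the mappings live under Configuration > Firewall > NAT Rules; for reference, the 8.3-style CLI that ASDM 6.3 generates for the STAGE rule would look roughly like this (interface names and the externalIP placeholder are assumptions):

        object network STAGE-SERVER
         host 172.17.5.50
         nat (inside,outside) static <externalIP2> service tcp 33040 443
        access-list outside_access_in extended permit tcp any host 172.17.5.50 eq 33040
        access-group outside_access_in in interface outside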


  • Postfix: LDAP not working (warning: dict_ldap_lookup: Search base not found: 32: No such object)

    - by Heinzi
    I set up LDAP access with postfix. ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix -b "ou=users,ou=people,[domain]" -s sub "(&(objectclass=inetOrgPerson)(mail=[mailaddr]))" delivers the correct entry. The LDAP config file looks like root@server2:/etc/postfix/ldap# cat mailbox_maps.cf server_host = localhost search_base = ou=users,ou=people,[domain] scope = sub bind = yes bind_dn = cn=postfix,ou=users,ou=system,[domain] bind_pw = postfix query_filter = (&(objectclass=inetOrgPerson)(mail=%s)) result_attribute = uid debug_level = 2 The bind_dn and bind_pw should be the same as I used above with ldapsearch. Nevertheless, calling postmap doesn't work: root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf postmap: warning: dict_ldap_lookup: /etc/postfix/ldap/mailbox_maps.cf: Search base 'ou=users,ou=people,[domain]' not found: 32: No such object If I change LDAP configuration, so that anonymous users have complete access to LDAP olcAccess: {-1}to * by * read then it works: root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf [user-id] But when I restrict this access to the postfix user: olcAccess: {-1}to * by dn="cn=postfix,ou=users,ou=system,[domain]" read by * break it doesn't work but produces the error printed above (although ldapsearch works, only postmap doesn't). Why doesn't it work when binding with a postfix DN? I think I set up the LDAP ACL for the postfix user correctly, as the ldapsearch command should prove. What can be the reason for this behaviour?
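
    One detail worth checking (an assumption, since the ACL itself looks plausible): Postfix's LDAP client defaults to LDAP protocol version 2, and a slapd that refuses v2 binds can leave the lookup effectively anonymous - which would explain why it works exactly when anonymous users are granted read access. Forcing a v3 bind in mailbox_maps.cf is a one-line test:

        version = 3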


  • scp error: "Permission denied (publickey). lost connection"

    - by Winston C. Yang
    I tried to scp an svn dump to savannah, but I got the following error at the end. Permission denied (publickey). lost connection The scp command and verbose output are below. Any ideas? [wcyang@be2-wireless-pittnet-60-37 ~]$ scp -v diffcolor-dump.bz2 [email protected]:/srv/download/diffcolor/ Executing: program /usr/bin/ssh host dl.sv.gnu.org, user wcyang, command scp -v -t /srv/download/diffcolor/ OpenSSH_5.2p1, OpenSSL 0.9.7l 28 Sep 2006 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to dl.sv.gnu.org [140.186.70.73] port 22. debug1: Connection established. debug1: identity file /Users/wcyang/.ssh/identity type -1 debug1: identity file /Users/wcyang/.ssh/id_rsa type 1 debug1: identity file /Users/wcyang/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'dl.sv.gnu.org' is known and matches the RSA host key. debug1: Found key in /Users/wcyang/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /Users/wcyang/.ssh/identity debug1: Offering public key: /Users/wcyang/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /Users/wcyang/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey). lost connection
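
    The server rejects every offered key, which usually means the public key on file does not match the private key being offered. One way to double-check locally (a debugging step, not a fix) is to regenerate the public key from the private key and compare it with the key registered in the Savannah account settings; newly registered keys can also take a while to propagate to the servers:

        ssh-keygen -y -f ~/.ssh/id_rsa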


  • Why can't I ssh into my server using my private key?

    - by user61342
    I just set up my new server as I used to, and this time I can't log in using my private key. The server is Ubuntu 11.04, and I have set up the following ssh key directories. root@myserv: ls -la drwx------ 2 root root 4096 Sep 23 03:40 .ssh And in the .ssh directory, I have done chmod 640 authorized_keys. Here is the ssh connection traceback: OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: Connecting to [my.server.ip] [[my.server.ip]] port 22. debug1: Connection established. debug1: identity file /Users/john/.ssh/id_rsa type -1 debug1: identity file /Users/john/.ssh/id_rsa-cert type -1 debug1: identity file /Users/john/.ssh/id_dsa type 1 debug1: identity file /Users/john/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-1ubuntu3 debug1: match: OpenSSH_5.8p1 Debian-1ubuntu3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA ef:b8:8f:b4:fc:a0:57:7d:ce:50:36:17:37:fa:f7:ec debug1: Host '[my.server.ip]' is known and matches the RSA host key. debug1: Found key in /Users/john/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/john/.ssh/id_rsa debug1: Offering RSA public key: /Users/john/.ssh/id_dsa debug1: Authentications that can continue: publickey,password debug1: Next authentication method: password root@[my.server.ip]'s password: Update: I have found the reason, but I can't explain it yet. It was caused by uploading the key using rsync -chavz instead of scp; after I used scp to upload my key, the issue was gone. Can someone explain it? Later, I tried rsync -chv; still not working.
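
    A hedged explanation for the rsync observation: with StrictModes enabled (the sshd default), sshd silently ignores authorized_keys when the file or its parent directories are group- or world-writable or owned by the wrong user, and rsync -a preserves the source file's permissions and ownership, while scp creates a fresh file under the local umask. Tightening the permissions by hand should make the rsync-uploaded key work too:

        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        ls -la ~/.ssh   # owner must be the login user; no group/world write bits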


  • How do you host multiple public-facing websites on a VPS?

    - by pedroarvy
    We host about 30 websites using typical shared hosting plans using ASP.NET and SQL 2000/2005/2008. I am now wondering about hosting all of these websites using our own virtual private server. This is clearly cheaper but comes with a lot of questions I need answers to: Is the risk of having to keep this VPS server up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn. When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host: Eg ns1.sharedwebhost.com and ns2.sharedwebhost.com These name servers are going to have to change when we have a VPS. I don’t understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS? The control panel we have for shared hosting comes with DNS management like this: http://www.yart.com.au/stackoverflow/dns.png What software would I need to install to create this for each site we host at a VPS? The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily by our customers. Is this something that can be easily set up at a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?
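
    On the name server question specifically: running your own DNS on the VPS usually means installing BIND (or similar) and publishing one zone per hosted domain, then registering the VPS's addresses as ns1/ns2 with the registrar. A minimal sketch of such a zone file, with hypothetical names and addresses, looks like this (many VPS providers also offer hosted DNS, which avoids running BIND yourself):

        $TTL 3600
        @       IN  SOA ns1.example.com. admin.example.com. (
                        2012010101 ; serial
                        3600       ; refresh
                        600        ; retry
                        86400      ; expire
                        3600 )     ; negative-cache TTL
                IN  NS  ns1.example.com.
                IN  NS  ns2.example.com.
                IN  A   203.0.113.10
        www     IN  A   203.0.113.10
        @       IN  MX  10 mail.example.com.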


  • Can I bundle multiple installs for Mac OS X and do them as a single script?

    - by Dov
    I have a lot of open source software to be installed for a course. We currently run on PCs that we provide. If we allow students to use their own Macs in Mac-centric schools, that means we have to load the software on those Macs. Rather than having to load each piece of software individually, is there any way I can create a single file, mount it and run a script to install all packages? We are willing to simplify the installs by standardizing the locations to store the applications, since the students will have identical machines.
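
    If the packages can be collected as standard .pkg installers on a single disk image, a short script can install them in sequence; this is a sketch, and the volume name and package layout are assumptions:

        #!/bin/sh
        # Install every package found on the mounted course image.
        for pkg in /Volumes/CourseTools/*.pkg; do
            sudo installer -pkg "$pkg" -target /
        done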


  • User can't SFTP after chroot

    - by Dauntless
    Ubuntu 10.04.4 LTS I'm trying to chroot the user 'sam'. According to all the tutorials out there this should work, but apparently I'm still doing something wrong. The user: sam:x:1005:1006::/home/sam:/bin/false I changed /etc/ssh/sshd_config like this (at the bottom of the file): #Subsystem sftp /usr/lib/openssh/sftp-server # CHROOT JAIL Subsystem sftp internal-sftp Match group users ChrootDirectory %h ForceCommand internal-sftp AllowTcpForwarding no I added sam to the users group: $groups sam sam : sam users I changed the permissions for sam's home folder: $ ls -la /home/sam drwxr-xr-x 11 root root 4096 Sep 23 16:12 . drwxr-xr-x 8 root root 4096 Sep 22 16:29 .. drwxr-xr-x 2 sam users 4096 Sep 23 16:10 awstats drwxr-xr-x 3 sam users 4096 Sep 23 16:10 etc ... drwxr-xr-x 2 sam users 4096 Sep 23 16:10 homes drwxr-x--- 3 sam users 4096 Sep 23 16:10 public_html I restarted ssh and now sam can't log in with SFTP. The session is created, but also closed immediately: Sep 24 12:55:15 ... sshd[9917]: Accepted password for sam from ... Sep 24 12:55:15 ... sshd[9917]: pam_unix(sshd:session): session opened for user sam by (uid=0) Sep 24 12:55:16 ... sshd[9928]: subsystem request for sftp Sep 24 12:55:17 ... sshd[9917]: pam_unix(sshd:session): session closed for user sam Cyberduck says Unexpected end of sftp stream. and other clients give similar errors. What did I forget / what is going wrong? Thanks!
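
    A useful next step (purely diagnostic) is to run a second sshd in debug mode on a spare port; it prints the exact reason the chrooted session is torn down, e.g. a component of the chroot path that is not root-owned or is group-writable:

        sudo /usr/sbin/sshd -d -p 2222
        # then, from the client:
        sftp -oPort=2222 sam@yourserver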


  • Why does RSA SSH authentication only work after console log-in?

    - by smorhaim
    I set up RSA authentication on one of my Ubuntu servers; however, after every restart I can't log in via ssh RSA. In order to log in with ssh I need to first log in via the console; then the RSA starts working. Why??? Below are my sshd config file as well as output from the ssh -vv command before and after console log-in. Before console log-in: debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7ff8d8c242c0) debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7ff8d8c24cf0) debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug1: Offering RSA public key: /Users/smorhaim/.ssh/id_rsaadmin debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey). After console log-in: debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7f91c14242c0) debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7f91c1424ae0) debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 debug2: input_userauth_pk_ok: fp b1:d5:90:43:be:43:52:a9:7f:05:c7:04:86:57:b3:ff debug1: Authentication succeeded (publickey). Authenticated to 10.10.30.151 ([10.10.30.151]:22). sshd config: Port 22 Protocol 2 ListenAddress 10.10.30.151 UsePrivilegeSeparation yes SyslogFacility AUTHPRIV PermitRootLogin no PasswordAuthentication no ChallengeResponseAuthentication no UsePAM yes X11Forwarding yes
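
    Key authentication that only works after a console login is the classic signature of an encrypted (ecryptfs) home directory on Ubuntu: ~/.ssh/authorized_keys sits inside the encrypted home, which is not mounted until the first login, so sshd cannot read it after a reboot. If that is the case here, one hedged workaround is to keep the keys outside the home directory:

        # /etc/ssh/sshd_config (the directory path is an example)
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u

    Each user's public keys would then go in /etc/ssh/authorized_keys/<username>, readable by that user.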


  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says, it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes. Running OS X 10.6 with a normal, local account Work binds my account to Active Directory Much time passes with no issues Set up rvm to manage Ruby installs (this becomes important later) Upgraded to OS X 10.7 a few days ago After successful install, attempted to log in, was presented with "Must reset password" dialog that never allowed a password to be reset. Would simply shake the box after the new password was entered. Much googling was done. Much more googling was done. Swearing was had. Logged in as root, created new account, set as admin, deleted /Users/[new account], renamed /Users/[old account] to /Users/[new account] Logged out of root, logged into new account with no issues After OS X asked for my account password a few times to update Keychain and other system-level stuff, it was back to business as usual. Opened Terminal, cd to project folder, tried "rails server" and was presented with: /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError) from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec' from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in `gem' from /usr/local/bin/rails:18:in `<main>' Ran through a few exercises, decided to rm -rf ~/.rvm and reinstall. Running a --trace on the rvm installer shows it dies on this line: mkdir: /Users/[old account]: Permission denied Scrolling back through the --trace log I see many more mentions of /Users/[old account]. When I inspect the install script, the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile with no luck. Can anyone guess why /Users/[old account] would still be kicking around?
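
    On OS X the home directory lives in Directory Services, not just in the filesystem, so renaming folders while logged in as root can leave a stale NFSHomeDirectory value behind - which would explain both paths showing up in the trace. A hedged check and fix, with the account names as placeholders:

        dscl . -read /Users/newaccount NFSHomeDirectory
        sudo dscl . -change /Users/newaccount NFSHomeDirectory /Users/oldaccount /Users/newaccount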


  • Real server, Multiple IP Addresses, HyperV Virtual Server, How to partition IPs across real and Virtual NICs

    - by Steven_W
    This is a slightly difficult problem to explain without some basic background information - I'll try and refine the question later as necessary. Originally, I have a single hosted server (Win 2008R2) with the following range of 8 IP addresses. - Single NIC - IP: x.x.128.72 -> x.x.128.79 - Subnet: x.x.255.192 - GW: x.x.128.65 After installing Hyper-V and setting up a single virtual server on the same box, I then wanted to assign one of the IP addresses to the virtual server, leaving everything else running normally. -- Firstly, I tried using the "External" network, but (even after setting IPs on the "Virtual Adapter" similar to Here) I struggled to get networking running at all. I needed to keep the server running (otherwise I would have spent more time pursuing this approach). Q1 ... Was this a sensible thing to do? Should I have carried on down this route? -- I then decided to try a different approach - Set the Hyper-V network to "Internal" (visible to Management OS) - Physical NIC - IP: x.x.128.72 -> x.x.128.75 - Subnet: x.x.255.192 - GW: x.x.128.65 - Virtual NIC - IP: x.x.128.78 - Subnet: x.x.255.252 - GW: x.x.128.72 ... (the same as the IP of the physical NIC) - Virtual OS-NIC - IP: x.x.128.77 - Subnet: x.x.255.252 - GW: x.x.128.78 ... (the same as the IP of the host virtual-NIC) -- Surprisingly enough, this approach actually worked, and I was able to connect from all the following: - Internet to/from physical NIC (x.x.128.72) - physical NIC (x.x.128.72) to virtual-OS-NIC (x.x.128.77), e.g. testing via ping + FTP - Internet to/from virtual-OS-NIC (x.x.128.77) -- The problem I have is that this approach seems to only last for a short while (a few hours). After this time, it seems that I lose the ability to connect from the virtual-OS-NIC to/from the internet (but I can still connect from the host OS to the virtual OS and from the host OS to the internet). I have re-tested this a couple of times with the same results ... I leave the server on for a few hours (e.g. overnight), and when I come back in the morning, the virtual OS loses the ability to route to the internet. -- I'm not quite sure what to look at next (or whether I'm going about this completely the wrong way). One "possibly relevant item" is that the host OS is also running RRAS (Routing and Remote Access), but this is only to run a simple VPN. -- Q2 - What should I be looking at next? (Any good references / recommendations of what to try?) Would appreciate any thoughts or comments (even if you tell me I'm going about this the wrong way).


  • EJB testing issues with NetBeans and OpenEJB

    - by SibzTer
    I have created a NetBeans 6.7 EnterpriseApplication project with EJB and WAR modules, with a test stateless session EJB that has a simple sayHello() method. I also added the OpenEJB library in order to unit test the EJB. Everything runs fine except that I keep getting the following error: Testsuite: com.myapp.test.NewEmptyJUnitTest Apache OpenEJB 3.1.1 build: 20090530-06:18 http://openejb.apache.org/ INFO - openejb.home = C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb INFO - openejb.base = C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb INFO - Configuring Service(id=Default Security Service, type=SecurityService, provider-id=Default Security Service) INFO - Configuring Service(id=Default Transaction Manager, type=TransactionManager, provider-id=Default Transaction Manager) INFO - Found ClientModule in classpath: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant.jar INFO - Found ClientModule in classpath: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant-launcher.jar INFO - Found EjbModule in classpath: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb\build\jar INFO - Found ClientModule in classpath: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\lib\OpenEJB\xml-resolver-1.2.jar INFO - Found ClientModule in classpath: C:\Users\me\Documents\Downloads\glassfish\lib\webservices-tools.jar INFO - Beginning load: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant.jar INFO - Beginning load: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant-launcher.jar INFO - Beginning load: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb\build\jar INFO - Beginning load: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\lib\OpenEJB\xml-resolver-1.2.jar INFO - Beginning load: C:\Users\me\Documents\Downloads\glassfish\lib\webservices-tools.jar INFO - Configuring enterprise application: classpath.ear WARN - No application-client.xml found assuming annotations present: classpath.ear, module: ant.jar WARN - No application-client.xml found assuming annotations present: classpath.ear, module: ant-launcher.jar WARN - No application-client.xml found assuming annotations present: classpath.ear, module: xml-resolver-1.2.jar WARN - No application-client.xml found assuming annotations present: classpath.ear, module: webservices-tools.jar java.lang.Exception: Could not load 1/0/com/sun/codemodel/CodeWriter.class at org.apache.xbean.finder.ClassFinder.readClassDef(ClassFinder.java:730) .... Turns out that I am getting the GlassFish library webservices-tools.jar from somewhere somehow, and I can't find out how to get rid of it so that I don't get a bunch of exceptions whenever I try to run any JUnit tests. Has anyone faced this issue before? Can you help me resolve it please? Thanks.
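
    OpenEJB scans the entire classpath for modules, which is why the GlassFish jar keeps being picked up. A hedged way to keep it out of the scan is OpenEJB's classpath exclude filter, set before the embedded container boots; the regular expression here is an assumption:

        // In the test's setUp(), before the first InitialContext is created.
        System.setProperty("openejb.deployments.classpath.exclude", ".*webservices-tools.*");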


  • Zabbix Server with Multiple NICs (one on a different VLAN) - Monitor a host from both NICs?

    - by Joshua Enfield
    Basically we have many servers configured for internal use only. I want to ensure the internal services are preserved as internal by checking each host from the local subnet (allowed - this checks that the services are up and working), and also that the internal services are indeed internal (making sure the services appear "down" when checked from a different subnet (VLAN)). Is there an easy way to do this in Zabbix?

