Search Results

Search found 12869 results on 515 pages for 'target exists'.


  • How can I read these parameters inside my software? (C#)

    - by Dezigo
    Hello! I have created a shortcut to my .exe file and I want to add extra parameters to it (in the shortcut's Target attribute). Example Target: "C:\Documents and Settings\dezigo\My Documents\c# programm\DirectoryScanner\DirectoryScanner\DirectoryScanner\bin\Debug\DirectoryScanner.exe" + extra params (like method=1). How can I read these parameters inside my software (C#)? Then, when the .exe starts, I want to check: if (method == 1) { //do something } else { //do something }
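    A minimal C# sketch of one way to read such a parameter. It assumes the shortcut's Target line simply appends method=1 after the quoted executable path; the parsing below is illustrative, not the only option:

        using System;

        class Program
        {
            static void Main(string[] args)
            {
                // args holds everything the shortcut appends after the executable path,
                // e.g. "...\DirectoryScanner.exe" method=1  ->  args[0] == "method=1"
                int method = 0;
                foreach (string arg in args)
                {
                    string[] parts = arg.Split('=');
                    if (parts.Length == 2 && parts[0].Equals("method", StringComparison.OrdinalIgnoreCase))
                        int.TryParse(parts[1], out method);
                }

                if (method == 1)
                {
                    // do something
                }
                else
                {
                    // do something else
                }
            }
        }

    The same values are also available anywhere in the application via Environment.GetCommandLineArgs(), whose first element is the executable path itself.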

    Read the article

  • CGRect and unit testing

    - by killdash10
    I am trying to add unit tests to my iPhone project in Xcode. Everything works, it's great - except that when I add a class .m file that uses CGRect (or other structs such as CGPoint) to the unit test target (under "Compile Sources"), I get a compilation error: "'CGRect' undeclared (first use in this function)". I tried messing with my unit test target in various ways, but so far I haven't been able to get past this. What am I missing?

    Read the article

  • How can I set an intent's class from a string value?

    - by twaxwei
    I am trying to set the class for an intent to the class named in a string value, so that I can launch a given activity. The string is composed dynamically at runtime. Is there any way to make something like the code below work: String target = com.test.activity1.class; Intent intent = new Intent(); intent.setClass(this, target); Thanks

    Read the article

  • Play video from webserver

    - by Eyla
    Greetings, I'm creating a video player with Silverlight 4 and C#. The player is working fine when I set the path for the target folder using this line of code: string FolderPath = this.Context.ApplicationInstance.Server.MapPath(@"~\Video\"); Now I want to place the videos in C:\inetpub\wwwroot and use the IP address of my IIS server instead of the local folder path. Please advise how to modify this line of code to target my IIS. Thank you,
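    A hedged sketch of the usual alternative: instead of mapping a server-side folder, point the Silverlight MediaElement at an absolute URL that IIS serves. The IP address, virtual directory, file name and the myMediaElement element are placeholders for whatever the page actually defines:

        // Assumes a MediaElement named myMediaElement is declared in the XAML.
        // Hypothetical IIS address and virtual directory - substitute your server's IP and folder.
        string baseUrl = "http://192.168.1.10/Video/";
        myMediaElement.Source = new Uri(baseUrl + "sample.wmv", UriKind.Absolute);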

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Limitation of the View 12

    - by pinaldave
    I have previously written on the subject SQL SERVER – The Limitations of the Views – Eleven and more…. This was indeed a very popular series and I received lots of feedback on that topic. Today we are going to discuss something very interesting as well. During my recent performance tuning seminar in Hyderabad, I presented on the subject of Views. During the seminar, one of the attendees asked a question: we create a table and create a View on top of it. If we then create an index on that View, will the index be used when querying the View? The answer is: not always! (There is only one specific condition when it will be used. We will write about that in the next post.) Let us see a test case. In our script we will do the following:

        USE tempdb
        GO
        IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
        DROP VIEW [dbo].[SampleView]
        GO
        IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
        DROP TABLE [dbo].[mySampleTable]
        GO
        -- Create SampleTable
        CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
        INSERT INTO mySampleTable (ID1, ID2, SomeData)
        SELECT TOP 100000
            ROW_NUMBER() OVER (ORDER BY o1.name),
            ROW_NUMBER() OVER (ORDER BY o2.name),
            o2.name
        FROM sys.all_objects o1
        CROSS JOIN sys.all_objects o2
        GO
        -- Create View
        CREATE VIEW SampleView
        WITH SCHEMABINDING
        AS
        SELECT ID1, ID2, SomeData
        FROM dbo.mySampleTable
        GO
        -- Create Index on View
        CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView]
        (
            ID2 ASC
        )
        GO
        -- Select from view
        SELECT ID1, ID2, SomeData
        FROM SampleView
        GO

    Let us check the execution plan for the last SELECT statement. You can see from the execution plan that even though we are querying the View and the View has an index, it is not really using that index. In the next post, we will see the significance of this View and where it can be helpful. Meanwhile, I encourage you to read my View series: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
    So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelper controls (like .TextBoxFor() etc.) don't bind to model values on a Postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a Postback operation. To demonstrate, here's an example. I have a small section in a document where I display an editable email address. This is what the form displays on a GET operation and, as expected, I get the email value displayed in both the textbox and the plain value display below, which reflects the value in the model. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address, which needs to be manipulated via the model in the Controller code. Here's the Razor markup:

        <div class="fieldcontainer">
            <label>
                Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small>
            </label>
            <div>
                @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield"})
                @Model.User.Email
            </div>
        </div>

    So, I have this form and the user can change their email address. On postback the POST controller code then asks the business layer whether the change is allowed. If it's not, I want to reset the email address back to the old value which exists in the database and was previously stored. The obvious thing to do would be to modify the model. Here's the Controller logic block that deals with that:

        // did user change email?
        if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail)
        {
            if (userBus.DoesEmailExist(user.Email))
            {
                userBus.ValidationErrors.Add("New email address exists already. Please…");
                user.Email = oldEmail;
            }
            else
                // allow email change but require verification by forcing a login
                user.IsVerified = false;
        }
        …
        model.user = user;
        return View(model);

    The logic is straightforward - if the new email address is not valid because it already exists, I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model, which effectively does this:

        model.user.Email = oldEmail;
        return View(model);

    So when I press the Save button after entering my new email address ([email protected]), here's what comes back in the rendered view: the textbox value and the raw displayed model value are different. The TextBox displays the POST value, while the raw value displays the actual model value. This means that MVC renders the textbox value from the POST data rather than from the view data when an HTTP POST is active. Now I don't know about you, but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the Controller code if using HtmlHelpers for binding. Updating the model for display purposes in a POST has, in effect, no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation)

    What should the behavior be? After getting quite a few comments on this post, I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed. So if an error occurs, the error doesn't mysteriously disappear, getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still, it is a little non-obvious, because of the way you create the UI elements with MVC - it certainly looks like you are binding to the model value:

        @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield",required="required" })

    and so unless one understands a little bit about how the model binder works, this is easy to trip up on. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that, ModelState/POST values provide the display value.

    Workarounds. The default behavior should be fine for 99% of binding scenarios. But if you do need to fix up values based on your model rather than the default POST values, there are a number of ways you can work around this. Initially, when I ran into this, I couldn't figure out how to set the value using code, so the simplest solution to me was simply to not use the MVC Html Helper for the specific control and explicitly bind the model via HTML markup and a @Razor expression:

        <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" />

    And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you are also forced to remember the naming conventions that MVC imposes in order for ModelBinding to work properly, which is a pain to remember and set manually (name is the same as the property with . syntax, id replaces dots with underscores).

    Use the ModelState. Some of my original confusion came because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the POST-back values submitted on the page that can be mapped to the model. In other words, there's one ModelState entry for each bound property of the model. Each ModelState entry contains a Value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation, however, it'll always use the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally, you can actually get it by clearing the ModelState in the controller code:

        ModelState.Clear();

    This clears out all the captured ModelState values and effectively binds to the model. Note this will produce very similar results - in fact, if there are no binding errors you see exactly the same behavior as if binding from ModelState, because the model has been updated from the ModelState already, and binding to the updated values most likely produces the same values you would get with POST-back values. The big difference, though, is that any values that couldn't bind - like, say, putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy-handed.
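    As a small, hedged aside: you can see the two values described above by inspecting ModelState directly in a POST action. The "User.Email" key below follows MVC's dotted naming convention for Model.User.Email and is assumed for this example:

        // Inside an [HttpPost] action method.
        System.Web.Mvc.ModelState entry;
        if (ModelState.TryGetValue("User.Email", out entry) && entry.Value != null)
        {
            string postedValue = entry.Value.AttemptedValue; // what the user typed (the POST value)
            object rawValue = entry.Value.RawValue;          // the raw value-provider result
            // model.User.Email, by contrast, is whatever the controller code last assigned to the model.
        }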
    You might want to fix up one or two values in a model, but rarely would you want the entire form to rebind from the model. So, you can also clear out individual values on an as-needed basis:

        if (userBus.DoesEmailExist(user.Email))
        {
            userBus.ValidationErrors.Add("New email address exists already. Please…");
            user.Email = oldEmail;
            ModelState.Remove("User.Email");
        }

    This allows you to remove a single value from the ModelState, and effectively lets you replace that value for display from the model.

    Why? While researching this I came across a post from Microsoft's Brad Wilson, who describes the default binding behavior best in a forum post: The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers.

    There you have it. It's not the most intuitive behavior, but in hindsight this behavior does make some sense, even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one, but you have to know about some of the innards of ModelState and how it actually works to figure that out.

    © Rick Strahl, West Wind Technologies, 2005-2012

    Read the article

  • Recreating OMS instances in a HA environment when instances on all nodes are lost

    - by rnigam
    Oracle highly recommends deploying EM in a HA environment. The best practices for HA deployments, backup and housekeeping of your Enterprise Manager environment are documented in the Oracle Enterprise Manager Advanced Configuration Guide. It is imperative that there is a good disaster recovery plan in place for your EM deployment. In this post I want to talk about a customer who failed to do the correct planning and housekeeping for EM and landed in a situation where all the OMSes were nearly blown away, had we not jumped in to help. We recently hit an issue at a customer site where we had a two-node OMS setup of the Enterprise Manager and a RAC Database being used as the EM repository. An accidental delete of the OMS oracle home left us with a single-node deployment. While we were trying to figure out a possible path to recover the first node, the second node was rebooted under a maintenance window. What followed was a complete site outage, as the Admin and managed servers would not start on either of the nodes. In my situation there were:

    - No backups of the Oracle Homes from any node
    - No OMS configuration snapshots (created using the "emctl exportconfig oms" command), and the instance home was completely lost on node 1, which also had the Admin Server

    We did however have:

    - A copy of the emkey.ora that I found under the OMS_ORACLE_HOME/ of the second node (NOTE: it is a bad practice to have your emkey present under the OMS Oracle home directory on the same server as the OMS. The backup of the emkey should be maintained on some other server. In this case, however, it was a savior in my situation since there were no backups.)
    - The OMS oracle home on the second node, but with a number of files missing and a number of changes made to the files in the home.

    There were a number of attempts to start the server by modifying various files based on the Weblogic server logs to have at least one node up and running, but all of them failed. Here is how you can recover from this scenario - follow these steps:

    STEP 1: Check status of emkey.ora
    Check whether the emkey is present in the EM repository or not. Run the following command:

        $OMS_ORACLE_HOME/bin/emctl status emkey

    If the output is something like the one below, then you are good to go and the key is present in the repository:

        ./emctl status emkey
        Oracle Enterprise Manager 11g Release 1 Grid Control
        Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
        Enter Enterprise Manager Root (SYSMAN) Password :
        The EMKey is configured properly.

    Here are the messages that you might see as the emctl status emkey output, depending upon whether the EM Admin Server is up and if the key is configured properly:

    Case 1: AdminServer is up, emkey is proper in CredStore & not in repos. This is the same as the output of the command shown above: The EMKey is configured properly.
    Case 2: AdminServer is up, emkey is proper in CredStore & exists in repos: The EMKey is configured properly, but is not secure. Secure the EMKey by running "emctl config emkey -remove_from_repos".
    Case 3: (AdminServer is down or emkey is corrupted in CredStore) & (emkey exists in repos): The EMKey exists in the Management Repository, but is not configured properly or is corrupted in the credential store. Configure the EMKey by running "emctl config emkey -copy_to_credstore".
    Case 4: (AdminServer is down or emkey is corrupted in CredStore) & (emkey does not exist in repos): The EMKey is not configured properly or is corrupted in the credential store and does not exist in the Management Repository. To correct the problem: 1) Get the backed-up emkey.ora file. 2) Configure the emkey by running "emctl config emkey -copy_to_credstore_from_file".

    If the key was not secured properly, it will have to be put in the repository before proceeding; look at Step 2 for doing this. There may be cases (like mine) where running emctl gives errors like the following:

        $OMS_ORACLE_HOME/bin/emctl status emkey
        Exception in thread "Main Thread" java.lang.NoClassDefFoundError: oracle/security/pki/OracleWallet
        At oracle.sysman.emctl.config.oms.EMKeyCmds.main (EMKeyCmds.java:658)

    Just move to the next step to put the key back in the repository.

    STEP 2: Put emkey.ora back in the repository
    Skip this step if your emkey.ora is present in the repository. If not, you need to put the key back in the repository. See if you can run the following command (with sample output):

        $OMS_ORACLE_HOME/bin/emctl config emkey -copy_to_repos
        Oracle Enterprise Manager 11g Release 1 Grid Control
        Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
        The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
        After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".

    Typically the key is present under the $OMS_ORACLE_HOME/sysman/config directory before being removed after the install, as a best practice. If you hit any errors while running emctl commands like the one mentioned in Step 1, jump to Step 3 and we will take care of the emkey.ora in Step 7.

    STEP 3: Get the port information
    Check for the existing port information in the emd.properties file under EM_INSTANCE_DIRECTORY (typically the gc_inst directory right above the Middleware home where you have deployed EM; for example /u01/app/oracle/product/gc_inst in case your oms home is /u01/app/oracle/product/Middleware/oms11g). In my case I got the information from the emgc.properties present in the gc_inst on the second node. If you can run emctl, you may want to try the following command as well:

        $OMS_ORACLE_HOME/bin/emctl status oms -details

    Note this information, as it will be used in the next step.

    STEP 4: Perform cleanup on Node 1
    Note the oracle home of the Weblogic and OMS, get the list of applied patches in the homes (using the opatch lsinventory command), take a backup copy of the home just in case we need it, and then de-install/remove the oracle homes, update the inventory and clean up processes on the first node.

    STEP 5: Perform Software Only Installation of OMS on Node 1
    Perform the Weblogic 10.3.2 installation exactly under the same location as present in the earlier installation. Perform a software-only installation of the OMS using the following command. This will not run any configuration assistants and bypasses all user interface validations:

        runInstaller -noconfig -validationaswarnings

    Select the "Additional OMS" option while performing the installation. Provide the same path for the OMS and Instance directories as in the previous installation. Use the port information collected in Step 3 while performing the installation. Once the installation is complete, run the allroot.sh script to complete the binary deployment.

    STEP 6: Apply one-off patches
    At this point you can apply any patches that were previously applied to the OMS Oracle Home. You only need to run opatch to install the patch in the home; you are not required to run the SQLs.

    STEP 7: Copy EM key
    This step is only required if you were not able to use the emctl command to put the emkey back into the EM repository in Step 2. Copy the emkey.ora file of the old installation into the $OMS_ORACLE_HOME/sysman/config directory of the newly installed OMS.

    STEP 8: Configure Grid Control Domain
    Run the following command to configure the EM domain and OMS. Note that you need to use a different GC Domain name than what you used earlier. For example, I used GCDOMAIN11 as the new domain name when my previous domain name was GCDOMAIN:

        $OMS_ORACLE_HOME/bin/omsca new -AS_USERNAME weblogic -EM_DOMAIN_NAME GCDOMAIN11 -NM_USER nodemanager -nostart

    This command will prompt for a number of inputs like Admin Server hostname, port, password, etc. Verify that the defaults shown are correct by pressing enter, or provide a new value.

    STEP 9: Run Add-On Configuration Assistant
    After this step, run the following add-on configuration assistant. This was used in my case to configure the virtualization add-on:

        $OMS_ORACLE_HOME/addonca -oui -omsonly -name vt -install gc

    STEP 10: Start the OMS
    Now start the OMS using:

        $OMS_ORACLE_HOME/bin/emctl start oms

    In a multi-node setup like mine you would either have a software load balancer or DNS round robin (using a virtual host name that resolves to one of multiple OMS hostnames) being used for load balancing. Secure the OMS against the SLB or DNS virtual hostname using the following:

        $OMS_HOME/bin/emctl secure oms -host slb.example.com -secure_port 1159 -slb_port 1159 -slb_console_port 443

    STEP 11: Configure the Agent
    From the $AGENT_ORACLE_HOME/bin directory, run:

        ./agentca -f

    At this point you should have your OMS on node 1 fully recovered. Clean up node 2 and use the normal Additional OMS installation process documented in the official installation guide to add the additional OMS on node 2.

    Summary: It took us a little over two days to completely recover the environment, with some other non-EM-related issues that hit us along the way as well. In the end, a situation like this could have been completely avoided had the proper housekeeping and backup of the Enterprise Manager deployment been done in the first place. That is a topic we will cover in the next post. In the meantime, please do refer to the Oracle Enterprise Manager Advanced Configuration Guide for planning your EM installation, backup and housekeeping procedures. This can be found here: http://download.oracle.com/docs/cd/E11857_01/index.htm

    Thanks: This post would not have been possible without Raj Aggarwal, Prasad Chebrolu and Ravikumar Basa, who helped to recover the environment and provided all the support we needed.

    Read the article

  • Few basic Billing facts

    - by Rajesh Sharma
    Quick basic points on Billing:

    - In batch billing, there can be one and ONLY ONE bill for an Account per Bill Cycle.
    - If an Account has already been billed within the current Bill Cycle's window period, it will not be billed again and will be skipped by the Bill Segment generation program, as part of the batch eligibility check routine.
    - If an Account does not have any Stopped Service Agreements and you attempt to generate a Bill for that Account for a period for which it was already billed, no Bill Segments are generated and a Pending Bill is created for that Account.
    - If a Pending Bill exists for an Account and was generated from a batch, the Account will be re-billed in the next batch run. In contrast, if a Pending Bill exists for an Account and was generated online, the Account will be skipped in the next batch run of the Account's Bill Cycle.
    - Bill generation source, Batch or Online, at the DB level is determined as follows: Batch = CI_BILL.BILL_CYC_CD = {Bill Cycle Code} and CI_BILL.WIN_START_DT = {Window Start Date}; Online = CI_BILL.BILL_CYC_CD = "" and CI_BILL.WIN_START_DT IS NULL. (The Batch/Online distinction is also visible on the Bill page, shown in screenshots omitted here.)
    - A Closing/Final Bill Segment is generated for Stopped Service Agreements and is identified at the DB level by CI_BSEG.CLOSING_BSEG_SW = "Y" (also visible on the Bill Segment page).

    Read the article

  • Diagram that could explain a state machine's code?

    - by Incognito
    We have a lot of concepts for making diagrams, like UML and flowcharting, or just making up whatever boxes-and-arrows combination works at the time, but I'm looking at doing a visual diagram of something that's actually really complex. State machines like those that parse HTML or regular expressions tend to be very long and complicated bits of code. For example, this is the stateLoop for Firefox 9 beta. It's actually generated by another file, but this is the code that runs. How can I diagram something with the complexity of this in a way that explains the flow of the code, without taking it to a level where I draw every single line of code into its own box on a flowchart? I don't want to draw "Invoke loop, then return", but I don't want to explain every last detail either. What kind of graph is suitable to do this? Is there already something out there similar to this? Just an example of how to do this without going overboard in complexity or being too high-level is really what I want. If you don't feel like looking at the code: basically it's 70 different state flags that could occur, inside an infinite loop that exits to a label based on some conditions; each flag has its own infinite loop that exits to a label somewhere, and each of those loops has checks for different types of chars, which then run off into various other methods.

    Read the article

  • Questions about Mac Book Pro Keyboard and shortcuts

    - by SimonSolnes
    I have been researching a lot about this and I cannot seem to find any useful information about this topic whatsoever. I have now been using Ubuntu for one week, and have gotten pretty confident with almost everything except keyboard layout and shortcuts. If you know a tutorial or a document explaining keyboard shortcuts in Ubuntu or Linux in general, can you please list it? I use a MacBook Pro with a Norwegian keyboard, and I have several questions about this:

    - Is there a program that shows a concise list of absolutely all keyboard shortcuts and lets me change them?
    - How do I use my Fn keys? (The Fn button doesn't do the job for some reason.)
    - How can I use alt+letter-or-number-key or alt+shift+letter-or-number-key to get fancy characters, like I do on Mac OS X?
    - How can I swap the cmd and ctrl keys system-wide?

    Also, I really want to get oriented around this subject, since this is the only thing holding me back on Ubuntu, so if there exists some in-depth material on this, it would be great. Also, if there are programs or material out there making things easier with Mac hardware, I would enjoy that. Sorry if my questions seem vague. Thank you very much, Simon

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    Possible Duplicate: What Forum Software should I use? I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists as a single directory, and where there's a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top, which can be clicked on for the main categories I choose for the forum (nevertheless, users can also add tags to their own threads outside of these default main tags I supply if they wish). This approach, if done properly, is more powerful, efficient, maintenance free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something out of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g.: phpBB) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • What resources are there for creating a dedicated NES emulator box?

    - by normalocity
    Where do I start, and what communities should I get involved in, in order to achieve the following? Ideally, I'd like to have a box that does the following (doesn't have to do this out of the box, I'm just looking to be able to achieve these goals through configs and necessary dependencies): Either bypasses login, or auto login Auto-start FCEUX with options that will (a) automatically start a ROM of my choosing, and (b) go into full-screen mode. You can assume that before I get that far, I've already configured the input devices and video options. I'd like to create (or install, if it exists) a full-screen app that takes a list of ROMs, allows me to select one with a gamepad/arcade stick, and press a button to open that game Be able to map a button on a gamepad/arcade stick to the "Power off" or exit function of the emulator, such that it will take me back to the ROM selection screen. I've already successfully installed FCEUX and tested it with an arcade stick I own, so I'm not looking for an emulator installer guide. I don't know if the ROM selector app exists already, but I'm a Java developer, and could probably create one (so long as it's not too difficult to support controllers - I was thinking of using Slick2D for this - a gaming library that I'm already pretty familiar with). The goal would be a dedicated box that I have connected to my TV. I power it on. It boots up and starts the ROM selection app, which passes the proper parameters to FCEUX (or another emulator that I might switch to at a later time), and I'm ready to go. Basically an NES emulator as a real, living room console. Also, as far as mapping a controller button to functions in the app, well, I've also played around with hardware, and it would be pretty trivial for me to modify a gamepad to trigger key presses. I just don't want to go to that length if it's not necessary.

    Read the article

  • sp_ssiscatalog v1.0.1.0 now available for download

    - by jamiet
    13 days ago I wrote a blog post entitled Introducing sp_ssiscatalog (v1.0.0.0) in which I first made mention of sp_ssiscatalog, an open source stored procedure intended to make it easy to query the SSIS Catalog. I have been working on some enhancements since then, and hence v1.0.1.0 is now available for download from Codeplex.

    What's new in this release. This release includes the following enhancements:

    - [execution_id] now gets returned in a call to EXEC [dbo].[sp_ssiscatalog] @operation_type='exec';
    - Filter events by specifying packages to ignore: EXEC [dbo].[sp_ssiscatalog] @operation_type='exec',@exec_events_packagesexcluded='SomePackage.dtsx,AnotherPackage.dtsx';
    - [event_message_id] is now returned in a list of events
    - The list of executions can now be filtered via a minimum and maximum execution_id: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_minimum_execution_id=198,@execs_maximum_execution_id=201
    - Events resultsets now have a field, [event_message_context_xml], that contains an XML document containing all [event_message_context] info (if any exists)

    Installation instructions:

    1. Download the zip file at DB v1.0.1.0. It contains two files, SsisReportingPack.dacpac & SSISDB.dacpac
    2. Unzip to a folder of your choosing
    3. Open a command prompt and change to the directory into which you unzipped the files
    4. Execute: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) (/tsn specifies the target server. Change as appropriate.)

    If everything works OK you'll see output like the screenshots in the original post (which differ depending on whether the target database already exists or not). This will create a database called [SsisReportingPack] which contains [dbo].[sp_ssiscatalog]. Feedback is welcomed! @Jamiet

    Read the article

  • Gnome shell not starting at login, but can start from terminal (Ubuntu 12.04)

    - by Mat Leonard
    I upgraded to Ubuntu 12.04 recently and for some reason it broke Gnome 3. The shell doesn't start up at login. My .xsession-errors looks like this right after I log in: gnome-session[1689]: WARNING: Session 'gnome' runnable check failed: Timed out (gnome-settings-daemon:1744): color-plugin-WARNING **: failed to get edid: unable to get EDID for output (gnome-settings-daemon:1744): color-plugin-WARNING **: unable to get EDID for xrandr-default: unable to get EDID for output (gnome-settings-daemon:1744): color-plugin-WARNING **: failed to reset xrandr-default gamma tables: gamma size is zero ** Message: applet now removed from the notification area ** Message: using fallback from indicator to GtkStatusIcon ** Message: moving back from GtkStatusIcon to indicator Then I can run gnome-shell --replace, the shell starts up and everything works. This is what I get immediately after: Window manager warning: Log level 16: Unable to register authentication agent: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: An authentication agent already exists for the given subject Window manager warning: Log level 16: Error registering polkit authentication agent: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: An authentication agent already exists for the given subject (polkit-error-quark 0) (gnome-shell:2442): folks-WARNING **: Failed to find primary PersonaStore with type ID 'eds' and ID 'system'. Individuals will not be linked properly and creating new links between Personas will not work. The configured primary PersonaStore's backend may not be installed. If you are unsure, check with your distribution Also, if I run /usr/lib/nux/unity_support_test -p, everything comes back as Yes and this checks out: OpenGL vendor string: NVIDIA Corporation OpenGL renderer string: GeForce 8300 GS/PCIe/SSE2 OpenGL version string: 3.3.0 NVIDIA 295.40 It isn't a huge problem since I can get gnome shell to work, but it is a little annoying. So, I'd like to fix this. Thanks for your help.

    Read the article

  • Importing tab delimited file into array in Visual Basic 2013 [migrated]

    - by JaceG
    I need to import a tab-delimited text file that has 11 columns and an unknown number of rows (always a minimum of 3 rows). I would like to import this text file as an array and be able to call data from it as needed throughout my project. And then, to make things more difficult, I need to replace items in the array, and even add more rows to it as the project goes on (all at runtime). Hopefully someone can suggest code corrections or useful methods. I'm hoping to use something like the array style sMyStrings(3,2), which I believe would be the easiest way to control my data. Any help is gladly appreciated, and worthy of a slab of beer. Here's the coding I have so far:

        Imports System.IO
        Imports Microsoft.VisualBasic.FileIO

        Public Class Main
            Dim strReadLine As String

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                Dim sReader As IO.StreamReader = Nothing
                Dim sRawString As String = Nothing
                Dim sMyStrings() As String = Nothing
                Dim intCount As Integer = -1
                Dim intFullLoop As Integer = 0

                If IO.File.Exists("C:\MyProject\Hardware.txt") Then ' Make sure the file exists
                    sReader = New IO.StreamReader("C:\MyProject\Hardware.txt")
                Else
                    MsgBox("File doesn't exist.", MsgBoxStyle.Critical, "Error")
                    End
                End If

                Do While sReader.Peek >= 0 ' Make sure you can read beyond the current position
                    sRawString = sReader.ReadLine() ' Read the current line
                    sMyStrings = sRawString.Split(New Char() {Chr(9)}) ' Separate values and store in a string array
                    For Each s As String In sMyStrings ' Loop through the string array
                        intCount = intCount + 1 ' Increment
                        If TextBox1.Text <> "" Then TextBox1.Text = TextBox1.Text & vbCrLf ' Add line feed
                        TextBox1.Text = TextBox1.Text & s ' Add line to debug textbox
                        If intFullLoop > 14 And intCount > -1 And CBool((intCount - 0) / 11 Mod 0) Then
                            cmbSelectHinge.Items.Add(sMyStrings(intCount))
                        End If
                    Next
                    intCount = -1
                    intFullLoop = intFullLoop + 1
                Loop
            End Sub

    Read the article

  • Distinguishing between UI command & domain commands

    - by SonOfPirate
    I am building a WPF client application using the MVVM pattern that provides an interface on top of an existing set of business logic residing in a library which is shared with other applications. The business library followed a domain-driven architecture using CQRS to separate the read and write models (no event sourcing). The combination of technologies and patterns has brought up an interesting conundrum: the MVVM pattern uses the command pattern for handling user interaction with the view models. .NET provides an ICommand interface which is implemented by most MVVM frameworks, like MVVM Light's RelayCommand and Prism's DelegateCommand. For example, the view model would expose a number of command objects as properties that are bound to the UI and respond when the user performs actions like clicking buttons. Many implementations of CQRS use the command pattern to isolate and encapsulate individual behaviors. In my business library, we have implemented the write model as command / command-handler pairs. As such, when we want to do some work, such as create a new order, we 'issue' a command (CreateOrderCommand) which is routed to the command handler responsible for executing the command. This is great, clearly explained in many sources, and I am good with it. However, take this scenario: I have a ToolbarViewModel which exposes a CreateNewOrderCommand property. This ICommand object is bound to a button in the UI. When clicked, the UI command creates and issues a new CreateOrderCommand object to the domain, which is handled by the CreateOrderCommandHandler. This is difficult to explain to other developers and I find myself getting tongue-tied because everything is a command. I'm sure I'm not the first developer to have patterns overlap like this where the naming/terminology also overlaps. How have you approached distinguishing the commands used in the UI from those used in the domain? (Edit: I should mention that the business library is UI-agnostic, i.e. no UI technology-specific code exists, or will exist, in this library.)
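    For what it's worth, here is a minimal, hedged C# sketch of one way the two vocabularies can be kept visibly apart. The ICommandHandler<TCommand> interface and the CustomerId value are invented for this example, and RelayCommand is MVVM Light's ICommand implementation; the view-model command is treated purely as a UI gesture that delegates to the domain command/handler pair, and some teams reinforce the split by reserving the word "command" for the UI side and calling the domain objects "messages" or "requests":

        using GalaSoft.MvvmLight.Command;   // MVVM Light's RelayCommand

        // Domain side (shared business library): a CQRS-style command and a handler contract.
        public class CreateOrderCommand
        {
            public int CustomerId { get; set; }
        }

        public interface ICommandHandler<TCommand>
        {
            void Handle(TCommand command);
        }

        // UI side (WPF view model): an MVVM ICommand that does nothing but issue the domain command.
        public class ToolbarViewModel
        {
            private readonly ICommandHandler<CreateOrderCommand> _createOrderHandler;

            public ToolbarViewModel(ICommandHandler<CreateOrderCommand> createOrderHandler)
            {
                _createOrderHandler = createOrderHandler;
                CreateNewOrderCommand = new RelayCommand(
                    () => _createOrderHandler.Handle(new CreateOrderCommand { CustomerId = 42 }));
            }

            // "Command" here means "UI gesture"; the object it sends is the business command.
            public RelayCommand CreateNewOrderCommand { get; private set; }
        }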

    Read the article

  • design for supporting entities with images

    - by brainydexter
    I have multiple entities like Hotels, Destination Cities etc which can contain images. The way I have my system setup right now is, I think of all the images belonging to this universal set (a table in the DB contains filePaths to all the images). When I have to add an image to an entity, I see if the entity exists in this universal set of images. If it exists, attach the reference to this image, else create a new image. E.g.: class ImageEntityHibernateDAO { public void addImageToEntity(IContainImage entity, String filePath, String title, String altText) { ImageEntity image = this.getImage(filePath); if (image == null) image = new ImageEntity(filePath, title, altText); getSession().beginTransaction(); entity.getImages().add(image); getSession().getTransaction().commit(); } } My question is: Earlier I had to write this code for each entity (and each entity would have a Set collection). So, instead of re-writing the same code, I created the following interface: public interface IContainImage { Set<ImageEntity> getImages(); } Entities which have image collections also implements IContainImage interface. Now, for any entity that needs to support adding Image functionality, all I have to invoke from the DAO looks something like this: // in DestinationDAO::addImageToDestination { imageDao.addImageToEntity(destination, imageFileName, imageTitle, imageAltText); // in HotelDAO::addImageToHotel { imageDao.addImageToEntity(hotel, imageFileName, imageTitle, imageAltText); It'd be great help if someone can provide me some critique on this design ? Are there any serious flaws that I'm not seeing right away ?

    Read the article

  • How can I properly create /dev/dvd?

    - by chazomaticus
    Certain programs look for /dev/dvd by default to find DVDs. When I first boot my computer without a DVD inserted, /dev/dvd exists and points to the correct place (/dev/sr0). However, when I insert a DVD, /dev/dvd disappears. I'd like it to stick around so I don't have to navigate to /dev/sr0 in programs that are looking for DVDs. How do I ensure that the /dev/dvd symlink exists and points to the right place? It looks like I can add something to /etc/udev/rules.d/70-persistent-cd.rules. This site gives a couple of examples, but the 70-persistent-cd.rules file says "add the ENV{GENERATED}=1 flag to your own rules", which isn't part of the examples. The man 7 udev page is impenetrable to me, and I'm not convinced the linked page gives 100% of the information I need. So, what can I do on a modern, Ubuntu 12.04 (or later) system to make /dev/dvd always exist and point to the right device? EDIT: Is it as simple as adding ENV{GENERATED}=1 to the rules in the linked page, something like this: SUBSYSTEM=="block", KERNEL=="sr0", SYMLINK+="dvd", GROUP="cdrom", ENV{GENERATED}=1 Is that the right information for modern Ubuntu? What is ENV{GENERATED} doing there, when it wasn't generated, but hand-written?

    Read the article

  • Software Center does not load

    - by eim
    I'm having problems with opening my Software center and it just shuts off after loading a few seconds. I can't even get it to the main page of the Software Center. I did try to follow these commands but of no avail: sudo apt-get purge software-center sudo apt-get update sudo apt-get install software-center Instead, I get an error after entering the first command: eim@eim-VAIO:~$ sudo apt-get purge software-cente Reading package lists... Error! E: Encountered a section with no Package: header E: **Problem with MergeList** /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_universe_i18n_Translation-en E: The package lists or status file could not be parsed or opened. I tried doing this aswell: Run : cd ~/.cache; rm -r software-center (nothing happened) And this: Add /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 to the Startup applications error message: eim@eim-VAIO:~$ /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 Gtk-Message: Not loading module "atk-bridge": The functionality is provided by GTK natively. Please try to not load it. ** (polkit-gnome-authentication-agent-1:3563): WARNING **: Unable to register authentication agent: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: An authentication agent already exists for the given subject Cannot register authentication agent: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: An authentication agent already exists for the given subject I think I've done all the possible fix to this problem as suggested on my research. But I can't seem to get this work. Can someone please help? NOTE: Okay... Guess I just found the solution to my problem. I'll just post the answer here since I can't answer my own question yet. Open terminal: sudo rm /var/lib/apt/lists/* -vf sudo apt-get update Now I can open my Software Center! I found the answer here: How do I fix a "Problem with MergeList" error when trying to do an update?

    Read the article

  • c# Unable to open file for reading

    - by Maks
    I'm writing a program that uses FileSystemWatcher to monitor changes to a given directory, and when it recieves OnCreated or OnChanged event, it copies those created/changed files to a specified directorie(s). At first I had problems with the fact that OnChanged/OnCreated events can be sent twice (not acceptable in case it needed to process 500MB file) but I made a way around this and with what I'm REALLY STUCKED with is getting the following IOException: The process cannot access the file 'C:\Where are Photos\bookmarks (11).html' because it is being used by another process. Thus, preventing the program from copying all the files it should. So as I mentioned, when user uses this program he/she specifes monitored directory, when user copies/creates/changes file in that directory, program should get OnCreated/OnChanged event and then copy that file to few other directories. Above error happens in all casess, if user copies few files that needs to owerwrite other ones in folder being monitored or when copying bulk of several files or even sometimes when copying one file in a monitored directory. Whole program is quite big so I'm sending the most important parts. OnCreated: private void OnCreated(object source, FileSystemEventArgs e) { AddLogEntry(e.FullPath, "created", ""); // Update last access data if it's file so the same file doesn't // get processed twice because of sending another event. if (fileType(e.FullPath) == 2) { lastPath = e.FullPath; lastTime = DateTime.Now; } // serves no purpose now, it will be remove soon string fileName = GetFileName(e.FullPath); // copies file from source to few other directories Copy(e.FullPath, fileName); Console.WriteLine("OnCreated: " + e.FullPath); } OnChanged: private void OnChanged(object source, FileSystemEventArgs e) { // is it directory if (fileType(e.FullPath) == 1) return; // don't mind directory changes itself // Only if enough time has passed or if it's some other file // because two events can be generated int timeDiff = ((TimeSpan)(DateTime.Now - lastTime)).Seconds; if ((timeDiff < minSecsDiff) && (e.FullPath.Equals(lastPath))) { Console.WriteLine("-- skipped -- {0}, timediff: {1}", e.FullPath, timeDiff); return; } // Update last access data for above to work lastPath = e.FullPath; lastTime = DateTime.Now; // Only if size is changed, the rest will handle other handlers if (e.ChangeType == WatcherChangeTypes.Changed) { AddLogEntry(e.FullPath, "changed", ""); string fileName = GetFileName(e.FullPath); Copy(e.FullPath, fileName); Console.WriteLine("OnChanged: " + e.FullPath); } } fileType: private int fileType(string path) { if (Directory.Exists(path)) return 1; // directory else if (File.Exists(path)) return 2; // file else return 0; } Copy: private void Copy(string srcPath, string fileName) { foreach (string dstDirectoy in paths) { string eventType = "copied"; string error = "noerror"; string path = ""; string dirPortion = ""; // in case directory needs to be made if (srcPath.Length > fsw.Path.Length) { path = srcPath.Substring(fsw.Path.Length, srcPath.Length - fsw.Path.Length); int pos = path.LastIndexOf('\\'); if (pos != -1) dirPortion = path.Substring(0, pos); } if (fileType(srcPath) == 1) { try { Directory.CreateDirectory(dstDirectoy + path); //Directory.CreateDirectory(dstDirectoy + fileName); eventType = "created"; } catch (IOException e) { eventType = "error"; error = e.Message; } } else { try { if (!overwriteFile && File.Exists(dstDirectoy + path)) continue; // create new dir anyway even if it exists just to be sure 
Directory.CreateDirectory(dstDirectoy + dirPortion); // copy file from where event occured to all specified directories using (FileStream fsin = new FileStream(srcPath, FileMode.Open, FileAccess.Read, FileShare.Read)) { using (FileStream fsout = new FileStream(dstDirectoy + path, FileMode.Create, FileAccess.Write)) { byte[] buffer = new byte[32768]; int bytesRead = -1; while ((bytesRead = fsin.Read(buffer, 0, buffer.Length)) > 0) fsout.Write(buffer, 0, bytesRead); } } } catch (Exception e) { if ((e is IOException) && (overwriteFile == false)) { eventType = "skipped"; } else { eventType = "error"; error = e.Message; // attempt to find and kill the process locking the file. // failed, miserably System.Diagnostics.Process tool = new System.Diagnostics.Process(); tool.StartInfo.FileName = "handle.exe"; tool.StartInfo.Arguments = "\"" + srcPath + "\""; tool.StartInfo.UseShellExecute = false; tool.StartInfo.RedirectStandardOutput = true; tool.Start(); tool.WaitForExit(); string outputTool = tool.StandardOutput.ReadToEnd(); string matchPattern = @"(?<=\s+pid:\s+)\b(\d+)\b(?=\s+)"; foreach (Match match in Regex.Matches(outputTool, matchPattern)) { System.Diagnostics.Process.GetProcessById(int.Parse(match.Value)).Kill(); } Console.WriteLine("ERROR: {0}: [ {1} ]", e.Message, srcPath); } } } AddLogEntry(dstDirectoy + path, eventType, error); } } I checked everywhere in my program and whenever I use some file I use it in using block so even writing event to log (class for what I ommited since there is probably too much code already in post) wont lock the file, that is it shouldn't since all operations are using using statement block. I simply have no clue who's locking the file if not my program "copy" process from user through Windows or something else. Right now I have two possible "solutions" (I can't say they are clean solutions since they are hacks and as such not desireable). Since probably the problem is with fileType method (what else could lock the file?) I tried changing it to this, to simulate "blocking-until-ready-to-open" operation: fileType: private int fileType(string path) { FileStream fs = null; int ret = 0; bool run = true; if (Directory.Exists(path)) ret = 1; else { while (run) { try { fs = new FileStream(path, FileMode.Open); ret = 2; run = false; } catch (IOException) { } finally { if (fs != null) { fs.Close(); fs.Dispose(); } } } } return ret; } This is working as much as I could tell (test), but... it's hack, not to mention other deficients. The other "solution" I could try (I didn't test it yet) is using GC.Collect() somewhere at the end of fileType() method. Maybe even worse "solution" than previous one. Can someone pleas tell me, what on earth is locking the file, preventing it from opening and how can I fix that? What am I missing to see? Thanks in advance.
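    A common explanation for this symptom is that OnCreated/OnChanged fire while the process writing the file (Explorer, the copying application, etc.) still has it open, so the watcher simply loses the race. Rather than the unbounded wait loop above, a hedged sketch of a bounded "wait until readable" helper (the attempt count and delay are arbitrary):

        using System;
        using System.IO;
        using System.Threading;

        static class FileReadiness
        {
            // Returns true once the file can be opened for shared reading, or false after maxAttempts.
            public static bool WaitUntilReadable(string path, int maxAttempts = 30, int delayMs = 500)
            {
                for (int attempt = 0; attempt < maxAttempts; attempt++)
                {
                    try
                    {
                        using (new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
                            return true;
                    }
                    catch (IOException)
                    {
                        // Still locked by the writer; back off and retry.
                        Thread.Sleep(delayMs);
                    }
                }
                return false;
            }
        }

    The watcher's handlers would call FileReadiness.WaitUntilReadable(e.FullPath) before copying, and skip or log the file if it returns false.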

    Read the article

  • Error installing pkgconfig via macports

    - by Greg K
    I installed Macports 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs as I want to mount some directories in my OS X file system (like you can do with mount --bind in linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to just install pkgconfig, here's the debug output from sudo port install pkgconfig: $ sudo port -d install pkgconfig DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig DEBUG: OS Platform: darwin DEBUG: OS Version: 10.3.0 DEBUG: Mac OS X Version: 10.6 DEBUG: System Arch: i386 DEBUG: setting option os.universal_supported to yes DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided DEBUG: adding the default universal variant DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf DEBUG: Requested variant darwin is not provided by port pkgconfig. DEBUG: Requested variant i386 is not provided by port pkgconfig. DEBUG: Requested variant macosx is not provided by port pkgconfig. ---> Computing dependencies for pkgconfig DEBUG: Executing org.macports.main (pkgconfig) DEBUG: Skipping completed org.macports.fetch (pkgconfig) DEBUG: Skipping completed org.macports.checksum (pkgconfig) DEBUG: Skipping completed org.macports.extract (pkgconfig) DEBUG: Skipping completed org.macports.patch (pkgconfig) ---> Configuring pkgconfig DEBUG: Using compiler 'Mac OS X gcc 4.2' DEBUG: Executing org.macports.configure (pkgconfig) DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2' DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig' checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... no checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... i386-apple-darwin10.3.0 checking host system type... i386-apple-darwin10.3.0 checking for style of include used by make... none checking for gcc... /usr/bin/gcc-4.2 checking for C compiler default output file name... configure: error: C compiler cannot create executables See `config.log' for more details. 
Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 while executing "$procedure $targetname" Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install Error: Status 1 encountered during processing. I have only recently installed Xcode 3.2.2 (prior to installing macports). Am I right in thinking this the issue here: configure: error: C compiler cannot create executables

    Read the article

  • NFS Mounts Issues

    - by user554005
    Having some issues with an NFS setup — on the clients it just times out / refuses to connect:

        [root@host9 ~]# mount 192.168.0.17:/home/export /mnt/export
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).
        mount: mount to NFS server '192.168.0.17' failed: timed out (retrying).

    Here are the settings I'm using on the server:

        [root@host17 /home/export]# cat /etc/hosts.allow
        #
        # hosts.allow   This file contains access rules which are used to
        #               allow or deny connections to network services that
        #               either use the tcp_wrappers library or that have been
        #               started through a tcp_wrappers-enabled xinetd.
        #
        #               See 'man 5 hosts_options' and 'man 5 hosts_access'
        #               for information on rule syntax.
        #               See 'man tcpd' for information on tcp_wrappers
        #
        portmap: 192.168.0.0/255.255.255.0
        lockd: 192.168.0.0/255.255.255.0
        rquotad: 192.168.0.0/255.255.255.0
        mountd: 192.168.0.0/255.255.255.0
        statd: 192.168.0.0/255.255.255.0

        [root@host17 /home/export]# cat /etc/hosts.deny
        #
        # hosts.deny    This file contains access rules which are used to
        #               deny connections to network services that either use
        #               the tcp_wrappers library or that have been
        #               started through a tcp_wrappers-enabled xinetd.
        #
        #               The rules in this file can also be set up in
        #               /etc/hosts.allow with a 'deny' option instead.
        #
        #               See 'man 5 hosts_options' and 'man 5 hosts_access'
        #               for information on rule syntax.
        #               See 'man tcpd' for information on tcp_wrappers
        #
        portmap:ALL
        lockd:ALL
        mountd:ALL
        rquotad:ALL
        statd:ALL

        [root@host17 /home/export]# cat /etc/exports
        /home/export 192.168.0.0/255.255.255.0(rw)

        [root@host17 /home/export]# iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source       destination
        RH-Firewall-1-INPUT  all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target     prot opt source       destination
        RH-Firewall-1-INPUT  all  --  anywhere  anywhere

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source       destination

        Chain RH-Firewall-1-INPUT (2 references)
        target     prot opt source          destination
        ACCEPT     all  --  anywhere        anywhere
        ACCEPT     icmp --  anywhere        anywhere       icmp any
        ACCEPT     esp  --  anywhere        anywhere
        ACCEPT     ah   --  anywhere        anywhere
        ACCEPT     udp  --  anywhere        224.0.0.251    udp dpt:mdns
        ACCEPT     udp  --  anywhere        anywhere       udp dpt:ipp
        ACCEPT     tcp  --  anywhere        anywhere       tcp dpt:ipp
        ACCEPT     all  --  anywhere        anywhere       state RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere        anywhere       state NEW tcp dpt:ssh
        ACCEPT     tcp  --  anywhere        anywhere       state NEW tcp dpt:http
        ACCEPT     tcp  --  anywhere        anywhere       state NEW tcp dpt:https
        ACCEPT     tcp  --  anywhere        anywhere       state NEW tcp dpt:6379
        ACCEPT     udp  --  192.168.0.0/24  anywhere       state NEW udp dpt:sunrpc
        ACCEPT     tcp  --  192.168.0.0/24  anywhere       state NEW tcp dpt:sunrpc
        ACCEPT     tcp  --  192.168.0.0/24  anywhere       state NEW tcp dpt:nfs
        ACCEPT     tcp  --  192.168.0.0/24  anywhere       state NEW tcp dpt:32803
        ACCEPT     udp  --  192.168.0.0/24  anywhere       state NEW udp dpt:filenet-rpc
        ACCEPT     tcp  --  192.168.0.0/24  anywhere       state NEW tcp dpt:892
        ACCEPT     udp  --  192.168.0.0/24  anywhere       state NEW udp dpt:892
        ACCEPT     tcp  --  192.168.0.0/24  anywhere       state NEW tcp dpt:rquotad
        ACCEPT     udp  --  192.168.0.0/24  anywhere       state NEW udp dpt:rquotad
        ACCEPT     tcp  --  192.168.0.0/24  anywhere       state NEW tcp dpt:pftp
        ACCEPT     udp  --  192.168.0.0/24  anywhere       state NEW udp dpt:pftp
        REJECT     all  --  anywhere        anywhere       reject-with icmp-host-prohibited

    On the clients, here is some rpcinfo output:

        [root@host9 ~]# rpcinfo -p 192.168.0.17
           program vers proto   port
            100000    4   tcp    111  portmapper
            100000    3   tcp    111  portmapper
            100000    2   tcp    111  portmapper
            100000    4   udp    111  portmapper
            100000    3   udp    111  portmapper
            100000    2   udp    111  portmapper
            100011    1   udp    875  rquotad
            100011    2   udp    875  rquotad
            100011    1   tcp    875  rquotad
            100011    2   tcp    875  rquotad
            100005    1   udp  45857  mountd
            100005    1   tcp  55772  mountd
            100005    2   udp  34021  mountd
            100005    2   tcp  59542  mountd
            100005    3   udp  60930  mountd
            100005    3   tcp  53086  mountd
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100227    2   udp   2049  nfs_acl
            100227    3   udp   2049  nfs_acl
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100227    2   tcp   2049  nfs_acl
            100227    3   tcp   2049  nfs_acl
            100021    1   udp  59832  nlockmgr
            100021    3   udp  59832  nlockmgr
            100021    4   udp  59832  nlockmgr
            100021    1   tcp  36140  nlockmgr
            100021    3   tcp  36140  nlockmgr
            100021    4   tcp  36140  nlockmgr
            100024    1   udp  46494  status
            100024    1   tcp  49672  status

        [root@host9 ~]# rpcinfo -u 192.168.0.17 nfs
        rpcinfo: RPC: Timed out
        program 100003 version 0 is not available

        [root@host9 ~]# rpcinfo -u 192.168.0.17 portmap
        program 100000 version 2 ready and waiting
        program 100000 version 3 ready and waiting
        program 100000 version 4 ready and waiting

        [root@host9 ~]# rpcinfo -u 192.168.0.17 mount
        rpcinfo: RPC: Timed out
        program 100005 version 0 is not available

    I'm running CentOS 5.8 on all systems.
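    A likely culprit, going by the rpcinfo output above: mountd, statd and nlockmgr are listening on random high ports (45857, 55772, 59832, 46494, ...), while the firewall only allows fixed ports (sunrpc 111, nfs 2049, 32803, 32769/filenet-rpc, 892, 875/rquotad, 662/pftp), so the mount request falls through to the final REJECT rule. One possible fix — a sketch assuming the stock CentOS 5 nfs-utils packages, untested against this exact setup — is to pin the helper daemons to the ports the firewall already expects, via /etc/sysconfig/nfs on the server:

        # /etc/sysconfig/nfs -- pin the NFS helper daemons to fixed ports
        # (chosen to match the ports already allowed in RH-Firewall-1-INPUT)
        LOCKD_TCPPORT=32803
        LOCKD_UDPPORT=32769
        MOUNTD_PORT=892
        STATD_PORT=662
        RQUOTAD_PORT=875

    Then restart the services on the server (service nfslock restart; service nfs restart) and re-run rpcinfo -p 192.168.0.17 from the client, which should now show mountd on 892. It is also worth noting that the chain allows tcp dpt:nfs but has no udp dpt:nfs rule, which would explain why rpcinfo -u 192.168.0.17 nfs times out.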

    Read the article

  • Cannot connect to MySQL Server on RHEL 5.7

    - by Jeffrey Wong
    I have a standard MySQL server running on Red Hat 5.7. I have edited /etc/my.cnf to specify the bind address as my server's public IP address:

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1
        # Disabling symbolic-links is recommended to prevent assorted security risks;
        # to do so, uncomment this line:
        # symbolic-links=0

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        bind-address=171.67.88.25
        port=3306

    And I have also restarted my firewall:

        sudo /sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 3306 -j ACCEPT
        /sbin/service iptables save

    The network administrator has already opened port 3306 for this box. When connecting from a remote computer (the client runs Ubuntu 10.10, the server runs RHEL 5.7), I issue

        mysql -u jeffrey -p --host=171.67.88.25 --port=3306 --socket=/var/lib/mysql/mysql.sock

    but receive ERROR 2003 (HY000): Can't connect to MySQL server on '171.67.88.25' (113). I've noticed that the socket file /var/lib/mysql/mysql.sock is blank. Should this be the case?

    UPDATE: The result of netstat -an | grep 3306:

        tcp        0      0 0.0.0.0:3306      0.0.0.0:*      LISTEN

    Result of sudo netstat -tulpen:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State    User  Inode  PID/Program name
        tcp        0      0 127.0.0.1:2208   0.0.0.0:*        LISTEN   0     7602   3168/hpiod
        tcp        0      0 0.0.0.0:3306     0.0.0.0:*        LISTEN   27    7827   3298/mysqld
        tcp        0      0 0.0.0.0:111      0.0.0.0:*        LISTEN   0     5110   2802/portmap
        tcp        0      0 0.0.0.0:8787     0.0.0.0:*        LISTEN   0     8431   3326/rserver
        tcp        0      0 0.0.0.0:915      0.0.0.0:*        LISTEN   0     5312   2853/rpc.statd
        tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN   0     7655   3188/sshd
        tcp        0      0 127.0.0.1:631    0.0.0.0:*        LISTEN   0     7688   3199/cupsd
        tcp        0      0 127.0.0.1:25     0.0.0.0:*        LISTEN   0     8025   3362/sendmail: acce
        tcp        0      0 127.0.0.1:2207   0.0.0.0:*        LISTEN   0     7620   3173/python
        udp        0      0 0.0.0.0:909      0.0.0.0:*                 0     5300   2853/rpc.statd
        udp        0      0 0.0.0.0:912      0.0.0.0:*                 0     5309   2853/rpc.statd
        udp        0      0 0.0.0.0:68       0.0.0.0:*                 0     4800   2598/dhclient
        udp        0      0 0.0.0.0:36177    0.0.0.0:*                 70    8314   3476/avahi-daemon:
        udp        0      0 0.0.0.0:5353     0.0.0.0:*                 70    8313   3476/avahi-daemon:
        udp        0      0 0.0.0.0:111      0.0.0.0:*                 0     5109   2802/portmap
        udp        0      0 0.0.0.0:631      0.0.0.0:*                 0     7691   3199/cupsd

    Result of sudo /sbin/iptables -L -v -n:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in   out  source      destination
         6373 2110K RH-Firewall-1-INPUT  all  --  *    *    0.0.0.0/0   0.0.0.0/0

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in   out  source      destination
            0     0 RH-Firewall-1-INPUT  all  --  *    *    0.0.0.0/0   0.0.0.0/0

        Chain OUTPUT (policy ACCEPT 1241 packets, 932K bytes)
         pkts bytes target     prot opt in   out  source      destination

        Chain RH-Firewall-1-INPUT (2 references)
         pkts bytes target     prot opt in   out  source      destination
          572  861K ACCEPT     all  --  lo   *    0.0.0.0/0   0.0.0.0/0
            1    28 ACCEPT     icmp --  *    *    0.0.0.0/0   0.0.0.0/0   icmp type 255
            0     0 ACCEPT     esp  --  *    *    0.0.0.0/0   0.0.0.0/0
            0     0 ACCEPT     ah   --  *    *    0.0.0.0/0   0.0.0.0/0
           46  6457 ACCEPT     udp  --  *    *    0.0.0.0/0   224.0.0.251 udp dpt:5353
            0     0 ACCEPT     udp  --  *    *    0.0.0.0/0   0.0.0.0/0   udp dpt:631
            0     0 ACCEPT     tcp  --  *    *    0.0.0.0/0   0.0.0.0/0   tcp dpt:631
          782  157K ACCEPT     all  --  *    *    0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
            2   120 ACCEPT     tcp  --  *    *    0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:22
            0     0 ACCEPT     tcp  --  *    *    0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:443
            0     0 ACCEPT     tcp  --  *    *    0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:23
            0     0 ACCEPT     tcp  --  *    *    0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:80
         4970 1086K REJECT     all  --  *    *    0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

    Result of nmap -P0 -p3306 171.67.88.25:

        Host is up (0.027s latency).
        PORT     STATE    SERVICE
        3306/tcp filtered mysql
        Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

    Solution: When everything else fails, go GUI! system-config-securitylevel and add port 3306. All done!
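    For reference, the -L -v -n output also shows why the original iptables command had no effect: all INPUT traffic jumps to RH-Firewall-1-INPUT, which ends with a blanket REJECT, so a rule appended to the end of INPUT is never evaluated. A non-GUI equivalent of the system-config-securitylevel fix would be to insert the accept rule into that chain ahead of the REJECT — a sketch, where the rule position (13, the REJECT rule in the listing above) is an assumption to verify with iptables -L RH-Firewall-1-INPUT --line-numbers:

        # Insert the MySQL accept rule just before the final REJECT in the chain
        sudo /sbin/iptables -I RH-Firewall-1-INPUT 13 -p tcp -m state --state NEW --dport 3306 -j ACCEPT
        sudo /sbin/service iptables save

        # Verify from the remote client; 3306/tcp should now report open instead of filtered
        nmap -P0 -p3306 171.67.88.25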

    Read the article

  • ubuntu 10.04; kvm bridged networking not working with public ip addresses

    - by senorsmile
    I have a dedicated hosted server box with Ubuntu 10.04 64-bit installed. I would like to run KVM with Ubuntu 8.04 installed for some PHP 5.2 compatible apps (they don't work right with PHP 5.3, the default in Ubuntu 10.04). I installed KVM as instructed at https://help.ubuntu.com/community/KVM/Installation . I installed the VM using virt-manager; I never could figure out how to use virt-install or any of those automated installers, so I just installed it from the disc. I set up bridged networking as per https://help.ubuntu.com/community/KVM/Networking . However, the bridged connection doesn't work.

    Here's my /etc/network/interfaces on the host, running Ubuntu 10.04 (with the specific public IP blanked):

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address xx.xx.xx.xx
            netmask 255.255.255.248
            gateway xx.xx.xx.xa
            bridge_ports eth0
            bridge_stp on
            bridge_fd 0
            bridge_maxwait 10

    Here's my /etc/network/interfaces on the guest, running Ubuntu 8.04:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address xx.xx.xx.xy
            netmask 255.255.255.248
            gateway xx.xx.xx.xa

    The two VMs can communicate with each other, but the guest VM can't reach anything in the outside world.

    Here's my /etc/libvirt/qemu/store_804.xml:

        <domain type='kvm'>
          <name>store_804</name>
          <uuid>27acfb75-4f90-a34c-9a0b-70a6927ae84c</uuid>
          <memory>2097152</memory>
          <currentMemory>2097152</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/store_804.img'/>
              <target dev='hda' bus='ide'/>
            </disk>
            <disk type='block' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
            </disk>
            <interface type='bridge'>
              <mac address='52:54:00:26:0b:c6'/>
              <source bridge='br0'/>
              <model type='virtio'/>
            </interface>
            <console type='pty'>
              <target port='0'/>
            </console>
            <console type='pty'>
              <target port='0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes'/>
            <sound model='es1370'/>
            <video>
              <model type='cirrus' vram='9216' heads='1'/>
            </video>
          </devices>
        </domain>

    Any idea where I've gone wrong?
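    Two things would be worth checking here; both are hedged suggestions rather than a confirmed diagnosis. First, on many dedicated-server networks the provider's switch drops frames from MAC addresses it has not assigned, in which case a bridged guest with its own MAC can never reach the gateway and the fix is to request an additional MAC/IP mapping from the provider or to use a routed/NAT setup instead. Second, bridge_stp on is often unnecessary for a single-NIC bridge and some guides recommend turning it off. A quick way to see which case applies (the guest MAC 52:54:00:26:0b:c6 is taken from the XML above):

        # On the host: confirm vnet0/eth0 are on the bridge, then watch whether the
        # guest's ARP requests leave eth0 and whether any replies ever come back
        brctl show br0
        sudo tcpdump -n -e -i eth0 ether host 52:54:00:26:0b:c6

        # If frames go out but nothing answers, the upstream switch is likely
        # filtering the guest's MAC. Otherwise, try disabling STP on the bridge:
        #   bridge_stp off
        # in /etc/network/interfaces, then: sudo /etc/init.d/networking restart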

    Read the article

  • Centos 7. Freeradius fails to start on boot

    - by Alex
    I was messing around with FreeRADIUS and MySQL (MariaDB), and it seems the FreeRADIUS service can't start properly on boot. It starts fine when run manually as root or in debug mode (radiusd -X) and then works just fine — debug mode shows no errors — but systemctl shows that radiusd.service has failed to start.

    /var/log/messages output:

        Aug 21 15:52:29 nexus-test systemd: Starting The Apache HTTP Server...
        Aug 21 15:52:29 nexus-test systemd: Starting MariaDB database server...
        Aug 21 15:52:29 nexus-test systemd: Starting FreeRADIUS high performance RADIUS server....
        Aug 21 15:52:29 nexus-test systemd: Started OpenSSH server daemon.
        Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
        Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        Aug 21 15:52:30 nexus-test systemd: Started Postfix Mail Transport Agent.
        Aug 21 15:52:30 nexus-test avahi-daemon[604]: Registering new address record for fe80::250:56ff:fe85:e4af on eth0.*.
        Aug 21 15:52:30 nexus-test systemd: radiusd.service: control process exited, code=exited status=1
        Aug 21 15:52:30 nexus-test systemd: Failed to start FreeRADIUS high performance RADIUS server..
        Aug 21 15:52:30 nexus-test systemd: Unit radiusd.service entered failed state.
        Aug 21 15:52:31 nexus-test kdumpctl: kexec: loaded kdump kernel
        Aug 21 15:52:31 nexus-test kdumpctl: Starting kdump: [OK]
        Aug 21 15:52:31 nexus-test systemd: Started Crash recovery kernel arming.
        Aug 21 15:52:31 nexus-test systemd: Started The Apache HTTP Server.
        Aug 21 15:52:31 nexus-test systemd: Started MariaDB database server.

    /var/log/radius/radius.log output:

        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked
        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Attempting to connect to database "radius"
        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Opening additional connection (0)
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Couldn't connect socket to MySQL server radius@localhost:radius
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Mysql error 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)'
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql (sql): Opening connection failed (0)
        Thu Aug 21 15:24:16 2014 : Error: /etc/raddb/mods-enabled/sql[47]: Instantiation failed for module "sql"

    After seeing this I tried to replicate the problem: I stopped mariadb.service and ran debug mode again, and it spits out the same errors as in radius.log. I tried disabling iptables and firewalld and rebooting, but no luck:

        systemctl disable iptables
        systemctl disable firewalld

    So maybe the problem is in the process startup order, or a delay of some kind — maybe FreeRADIUS's SQL module tries to connect before MariaDB has started? If so, how can I fix this? In earlier versions of RHEL/CentOS you could easily see the service start order in rc.d, but now I don't know where to look. I am new to the "systemd", "systemctl" and "firewalld" stuff CentOS 7 introduced, so sorry if I'm a little bit confused — and the new FreeRADIUS 3 layout doesn't help either.

    PS: MariaDB is enabled on startup, and the credentials in the FreeRADIUS DB configuration are correct.

    A little update — cat /etc/systemd/system/multi-user.target.wants/radiusd.service output:

        [Unit]
        Description=FreeRADIUS high performance RADIUS server.
        After=syslog.target network.target

        [Service]
        Type=forking
        PIDFile=/var/run/radiusd/radiusd.pid
        ExecStartPre=-/bin/chown -R radiusd.radiusd /var/run/radiusd
        ExecStartPre=/usr/sbin/radiusd -C
        ExecStart=/usr/sbin/radiusd -d /etc/raddb
        ExecReload=/usr/sbin/radiusd -C
        ExecReload=/bin/kill -HUP $MAINPID

        [Install]
        WantedBy=multi-user.target
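    The unit file above only orders radiusd after syslog.target and network.target, so systemd is free to start it in parallel with mariadb.service; the ExecStartPre config check (radiusd -C) then fails because the sql module can't reach the not-yet-running database — exactly what the radius.log errors show. One way to express the missing dependency, sketched here assuming the service names shown above, is a drop-in override rather than editing the packaged unit file:

        # /etc/systemd/system/radiusd.service.d/override.conf
        # (create the radiusd.service.d directory if it does not exist)
        [Unit]
        # Start radiusd only after MariaDB is up, and pull MariaDB in at boot
        After=mariadb.service
        Wants=mariadb.service

    Then run systemctl daemon-reload and reboot (or systemctl restart radiusd) to verify. Using Requires= instead of Wants= is a stricter alternative: it would also stop radiusd whenever mariadb.service is stopped or fails to start.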

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >