Search Results

Search found 21530 results on 862 pages for 'service pack'.


  • (libgdx) Button doesn't work

    - by StercoreCode
    In my game I switch to a StopScreen. The screen displays a button, but clicking it does nothing. What I expect: when I press the button, the game should restart, or at this stage at least log a message saying the button was pressed. I tried creating a new, clean project whose main class implements ApplicationListener, put the same code in the appropriate methods, and it works! But when I create this button in my game, it doesn't work. When I play and go to the StopScreen I can see the button, but clicking or touching it does nothing. I think the problem is in the InputListener, although I do set the stage as the InputProcessor: Gdx.input.setInputProcessor(stage); I also tried adding a ClickListener to the button, but that gave no results. Or maybe the problem is that StopScreen implements Screen rather than ApplicationListener or Game — but if StopScreen implemented ApplicationListener, I couldn't call setScreen from the main game. The question that interests me: why does the button display, yet nothing happens when I press it? Here is the code of StopScreen, in case it helps find my mistake:

        public class StopScreen implements Screen {
            private OrthographicCamera camera;
            private SpriteBatch batch;
            public Stage stage;                 //** stage holds the Button **//
            private BitmapFont font;            //** same as that used in Tut 7 **//
            private TextureAtlas buttonsAtlas;  //** image of buttons **//
            private Skin buttonSkin;            //** images are used as skins of the button **//
            public TextButton button;           //** the button - the only actor in program **//

            public StopScreen(CurrusGame currusGame) {
                camera = new OrthographicCamera();
                camera.setToOrtho(false, 800, 480);
                batch = new SpriteBatch();
                buttonsAtlas = new TextureAtlas("button.pack"); //** button atlas image **//
                buttonSkin = new Skin();
                buttonSkin.addRegions(buttonsAtlas);            //** skins for on and off **//
                font = AssetLoader.font;                        //** font **//
                stage = new Stage();
                stage.clear();
                Gdx.input.setInputProcessor(stage);
                TextButton.TextButtonStyle style = new TextButton.TextButtonStyle();
                style.up = buttonSkin.getDrawable("ButtonOff");
                style.down = buttonSkin.getDrawable("ButtonOn");
                style.font = font;
                button = new TextButton("PRESS ME", style);     //** Button text and style **//
                button.setPosition(100, 100);                   //** Button location **//
                button.setHeight(100);                          //** Button Height **//
                button.setWidth(100);                           //** Button Width **//
                button.addListener(new InputListener() {
                    public boolean touchDown(InputEvent event, float x, float y, int pointer, int button) {
                        Gdx.app.log("my app", "Pressed");
                        return true;
                    }
                    public void touchUp(InputEvent event, float x, float y, int pointer, int button) {
                        Gdx.app.log("my app", "Released");
                    }
                });
                stage.addActor(button);
            }

            @Override
            public void render(float delta) {
                Gdx.gl.glClearColor(0, 1, 0, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                stage.act();
                batch.setProjectionMatrix(camera.combined);
                batch.begin();
                stage.draw();
                batch.end();
            }
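    Worth noting (not confirmed as the cause here): Stage.draw() uses its own SpriteBatch, so calling it between batch.begin() and batch.end() is unnecessary and can interfere with rendering, and an InputProcessor set in a Screen's constructor is easily overwritten later by another screen or by the Game class. Below is a minimal sketch of the usual Screen/Stage wiring for a clickable Scene2D button; class and field names are illustrative, not taken from the original project:

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.Screen;
        import com.badlogic.gdx.graphics.GL20;
        import com.badlogic.gdx.graphics.g2d.BitmapFont;
        import com.badlogic.gdx.scenes.scene2d.InputEvent;
        import com.badlogic.gdx.scenes.scene2d.Stage;
        import com.badlogic.gdx.scenes.scene2d.ui.TextButton;
        import com.badlogic.gdx.scenes.scene2d.utils.ClickListener;

        // Hypothetical sketch, not the poster's code: minimal Screen/Stage wiring for a clickable button.
        public class ButtonScreenSketch implements Screen {
            private Stage stage;

            @Override
            public void show() {
                stage = new Stage();
                // Register the stage every time this screen becomes active, not only in the constructor,
                // so a previous screen (or the Game class) cannot leave a stale InputProcessor in place.
                Gdx.input.setInputProcessor(stage);

                TextButton.TextButtonStyle style = new TextButton.TextButtonStyle();
                style.font = new BitmapFont();          // default font, just for the sketch
                TextButton button = new TextButton("PRESS ME", style);
                button.setBounds(100, 100, 100, 100);
                button.addListener(new ClickListener() {
                    @Override
                    public void clicked(InputEvent event, float x, float y) {
                        Gdx.app.log("my app", "Pressed");
                    }
                });
                stage.addActor(button);
            }

            @Override
            public void render(float delta) {
                Gdx.gl.glClearColor(0, 1, 0, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                stage.act(delta);
                stage.draw();   // outside any SpriteBatch.begin()/end(); the Stage draws with its own batch
            }

            @Override public void resize(int width, int height) { }
            @Override public void hide() { Gdx.input.setInputProcessor(null); }
            @Override public void pause() { }
            @Override public void resume() { }
            @Override public void dispose() { stage.dispose(); }
        }

    If the button renders but never reacts, comparing against a stripped-down screen like this usually narrows the problem down to input-processor ownership or a stage camera/viewport mismatch.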

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, other news was not highlighted in the keynote but is equally important, if not more so, for the BI community. Let's start with the big news in the keynote (more details on the SQL Server Blog):

    Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance, and how it differs from using SSD technology.

    Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near real-time reporting needs!

    Polybase: SQL Server 2012 Parallel Data Warehouse (PDW), which will include the Polybase technology, will debut in 2013. By using Polybase, a single T-SQL query will run across relational data and Hadoop data — a single query language for both. That sounds really interesting for using Big Data in a more integrated way with existing relational databases and, of course, for loading a data warehouse using Big Data, which is the ultimate goal that we BI pros all have, right?

    SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now, and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.

    Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling any future tool that generates DAX queries to work on top of a Multidimensional model. There is still no information about availability, but note that this is *not* included in SQL Server 2012 SP1.

    So what about Mobile BI? Well, even though it was not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area:

    iOS, Android and Microsoft mobile platforms: the commitment is to deliver data exploration and visualization capabilities by June 2013. This should affect at least Power View and SharePoint/Excel Services. This is the type of UI experience we have all been waiting for in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (or whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing a demo of native BI applications on mobile platforms!

    Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday, November 8, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error: D:\java\java\jogl>javac JOGL2Template.java <== compile ok D:\java\java\jogl>java JOGL2Template <== execute error Exception in thread "main" java.lang.ExceptionInInitializerError at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176) at JOGL2Template.<init>(JOGL2Template.java:24) at JOGL2Template.main(JOGL2Template.java:57) Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\ java\lib\gluegen-rt-natives-windows-i586.jar at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350) at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324) at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJa rCache.java:328) at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarC ache.java:283) at com.jogamp.common.os.Platform$3.run(Platform.java:308) at java.security.AccessController.doPrivileged(Native Method) at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298) at com.jogamp.common.os.Platform.<clinit>(Platform.java:207) ... 3 more Here is the JOGL2Template.java source code: import java.awt.Dimension; import java.awt.Frame; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import javax.media.opengl.GLAutoDrawable; import javax.media.opengl.GLCapabilities; import javax.media.opengl.GLEventListener; import javax.media.opengl.GLProfile; import javax.media.opengl.awt.GLCanvas; import com.jogamp.opengl.util.FPSAnimator; import javax.swing.JFrame; /* * JOGL 2.0 Program Template For AWT applications */ public class JOGL2Template extends JFrame implements GLEventListener { private static final int CANVAS_WIDTH = 640; // Width of the drawable private static final int CANVAS_HEIGHT = 480; // Height of the drawable private static final int FPS = 60; // Animator's target frames per second // Constructor to create profile, caps, drawable, animator, and initialize Frame public JOGL2Template() { // Get the default OpenGL profile that best reflect your running platform. GLProfile glp = GLProfile.getDefault(); // Specifies a set of OpenGL capabilities, based on your profile. GLCapabilities caps = new GLCapabilities(glp); // Allocate a GLDrawable, based on your OpenGL capabilities. GLCanvas canvas = new GLCanvas(caps); canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT)); canvas.addGLEventListener(this); // Create a animator that drives canvas' display() at 60 fps. final FPSAnimator animator = new FPSAnimator(canvas, FPS); addWindowListener(new WindowAdapter() { // For the close button @Override public void windowClosing(WindowEvent e) { // Use a dedicate thread to run the stop() to ensure that the // animator stops before program exits. new Thread() { @Override public void run() { animator.stop(); System.exit(0); } }.start(); } }); add(canvas); pack(); setTitle("OpenGL 2 Test"); setVisible(true); animator.start(); // Start the animator } public static void main(String[] args) { new JOGL2Template(); } @Override public void init(GLAutoDrawable drawable) { // Your OpenGL codes to perform one-time initialization tasks // such as setting up of lights and display lists. } @Override public void display(GLAutoDrawable drawable) { // Your OpenGL graphic rendering codes for each refresh. } @Override public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { // Your OpenGL codes to set up the view port, projection mode and view volume. 
} @Override public void dispose(GLAutoDrawable drawable) { // Hardly used. } } Any ideas what might be the cause of these errors?
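    The "no certificate for gluegen-rt.dll" message suggests JOGL's TempJarCache found unsigned content inside a jar it expected to be signed (this can happen when a jar has been repacked, partially downloaded, or mixed with files from a different JOGL build). A stand-alone diagnostic like the one below — an illustrative sketch, not part of the original post — lists which entries of a jar actually carry signatures; the path is an assumption and should point at the gluegen/jogl jars on your classpath:

        import java.io.InputStream;
        import java.security.cert.Certificate;
        import java.util.Enumeration;
        import java.util.jar.JarEntry;
        import java.util.jar.JarFile;

        // Hypothetical diagnostic: report which entries of a jar are signed and which are not.
        public class JarSignatureCheck {
            public static void main(String[] args) throws Exception {
                String path = args.length > 0 ? args[0]
                        : "D:/java/lib/gluegen-rt-natives-windows-i586.jar";   // placeholder path
                try (JarFile jar = new JarFile(path, true)) {                   // true = verify signatures
                    Enumeration<JarEntry> entries = jar.entries();
                    byte[] buf = new byte[8192];
                    while (entries.hasMoreElements()) {
                        JarEntry entry = entries.nextElement();
                        if (entry.isDirectory()) continue;
                        // Certificates only become available after the entry has been read completely.
                        try (InputStream in = jar.getInputStream(entry)) {
                            while (in.read(buf) != -1) { /* drain */ }
                        }
                        Certificate[] certs = entry.getCertificates();
                        System.out.println(entry.getName() + " -> "
                                + (certs == null ? "UNSIGNED" : certs.length + " certificate(s)"));
                    }
                }
            }
        }

    If the native .dll entries show up as UNSIGNED while the rest of the jar is signed, re-downloading a matched set of JOGL/GlueGen jars is usually the simplest fix.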

    Read the article

  • Animation Trouble with Java Swing Timer - Also, JFrame Will Not Exit_On_Close

    - by forgotton_semicolon
    So, I am using a Java Swing Timer because putting the animation code in the run() method of a Thread subclass caused an insane amount of flickering, which is a terrible experience for any video game player. Can anyone give me any tips on: why there is no animation, and why the JFrame will not close even though EXIT_ON_CLOSE is set in two places? My code is here:

        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;
        import java.net.URL;

        //////////////////////////////////////////////////////////////// TFQ
        public class TFQ extends JFrame {
            DrawingsInSpace dis;

            //========================================================== constructor
            public TFQ() {
                dis = new DrawingsInSpace();
                JPanel content = new JPanel();
                content.setLayout(new FlowLayout());
                this.setContentPane(dis);
                this.setDefaultCloseOperation(EXIT_ON_CLOSE);
                this.setTitle("Plasma_Orbs_Off_Orion");
                this.setSize(500,500);
                this.pack();
                //... Create timer which calls action listener every second..
                // Use full package qualification for javax.swing.Timer
                // to avoid potential conflicts with java.util.Timer.
                javax.swing.Timer t = new javax.swing.Timer(500, new TimePhaseListener());
                t.start();
            }

            /////////////////////////////////////////////// inner class Listener thing
            class TimePhaseListener implements ActionListener, KeyListener {
                // counter
                int total;
                // loop control
                boolean Its_a_go = true;
                //position of our matrix
                int tf = -400;
                //sprite directions
                int Sprite_Direction;
                final int RIGHT = 1;
                final int LEFT = 2;
                //for obstacle
                Rectangle mega_obstacle = new Rectangle(200, 0, 20, HEIGHT);

                public void actionPerformed(ActionEvent e) {
                    //... Whenever this is called, repaint the screen
                    dis.repaint();
                    addKeyListener(this);
                    while (Its_a_go) {
                        try {
                            dis.repaint();
                            if(Sprite_Direction == RIGHT) {
                                dis.matrix.x += 2;
                            } // end if i think
                            if(Sprite_Direction == LEFT) {
                                dis.matrix.x -= 2;
                            }
                        } catch(Exception ex) {
                            System.out.println(ex);
                        }
                    } // end while i think
                } // end actionPerformed

                @Override
                public void keyPressed(KeyEvent arg0) {
                    // TODO Auto-generated method stub
                }

                @Override
                public void keyReleased(KeyEvent arg0) {
                    // TODO Auto-generated method stub
                }

                @Override
                public void keyTyped(KeyEvent event) {
                    // TODO Auto-generated method stub
                    if (event.getKeyChar()=='f'){
                        Sprite_Direction = RIGHT;
                        System.out.println("matrix should be animating now ");
                        System.out.println("current matrix position = " + dis.matrix.x);
                    }
                    if (event.getKeyChar()=='d') {
                        Sprite_Direction = LEFT;
                        System.out.println("matrix should be going in reverse");
                        System.out.println("current matrix position = " + dis.matrix.x);
                    }
                }
            }

            //================================================================= main
            public static void main(String[] args) {
                JFrame SafetyPins = new TFQ();
                SafetyPins.setVisible(true);
                SafetyPins.setSize(500,500);
                SafetyPins.setResizable(true);
                SafetyPins.setLocationRelativeTo(null);
                SafetyPins.setDefaultCloseOperation(EXIT_ON_CLOSE);
            }
        }

        class DrawingsInSpace extends JPanel {
            URL url1_plasma_orbs;
            URL url2_matrix;
            Image img1_plasma_orbs;
            Image img2_matrix;
            // for the plasma_orbs
            Rectangle bbb = new Rectangle(0,0, 0, 0);
            // for the matrix
            Rectangle matrix = new Rectangle(-400, 60, 430, 200);

            public DrawingsInSpace() {
                //load URLs
                try {
                    url1_plasma_orbs = this.getClass().getResource("plasma_orbs.png");
                    url2_matrix = this.getClass().getResource("matrix.png");
                } catch(Exception e) {
                    System.out.println(e);
                }
                // attach the URLs to the images
                img1_plasma_orbs = Toolkit.getDefaultToolkit().getImage(url1_plasma_orbs);
                img2_matrix = Toolkit.getDefaultToolkit().getImage(url2_matrix);
            }

            public void paintComponent(Graphics g) {
                super.paintComponent(g);
                // draw the plasma_orbs
                g.drawImage(img1_plasma_orbs, bbb.x, bbb.y,this);
                //draw the matrix
                g.drawImage(img2_matrix, matrix.x, matrix.y, this);
            }
        } // end class
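    Worth noting: the while (Its_a_go) loop inside actionPerformed never returns, so it blocks the Swing event dispatch thread; queued repaint and window-closing events are never processed, which would explain both the frozen animation and the frame ignoring EXIT_ON_CLOSE. A hedged sketch of how a Swing Timer callback is usually structured (one small step per tick, no loop) follows; the DrawingsInSpace panel is the one from the question, the rest is illustrative:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.Timer;

        // Illustrative sketch only: a Swing Timer callback should do one small unit of work and return,
        // leaving the EDT free to repaint and to dispatch window-closing events.
        class TimePhaseListenerSketch implements ActionListener {
            private final DrawingsInSpace dis;   // the panel from the question
            private int spriteDirection = 0;     // 1 = right, 2 = left, as in the original

            TimePhaseListenerSketch(DrawingsInSpace dis) {
                this.dis = dis;
            }

            void setDirection(int direction) {   // called from the key listener instead of a loop
                this.spriteDirection = direction;
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                // Advance the position once per tick; the Timer fires again after its delay.
                if (spriteDirection == 1) {
                    dis.matrix.x += 2;
                } else if (spriteDirection == 2) {
                    dis.matrix.x -= 2;
                }
                dis.repaint();                   // schedule a repaint and return immediately
            }
        }

        // Usage sketch: new Timer(16, new TimePhaseListenerSketch(dis)).start();  // roughly 60 fps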

    Read the article

  • New Oracle BI Applications released

    - by THE
    Oracle has just released two new applications for Oracle Business Intelligence Analytics with the 7.9.6.x Extension Pack:

    · Oracle Manufacturing Analytics, part of the Oracle BI Applications product family, helps discrete and process manufacturing organizations optimize their supply networks by integrating data from across the enterprise value chain, thereby enabling executives, operations managers, cost accountants and production supervisors to make informed and actionable decisions related to manufacturing execution.

    · Oracle Enterprise Asset Management Analytics, part of the Oracle BI Applications product family, offers complete and enhanced visibility to enterprise-wide maintenance information. Pre-built reports covering Maintenance History, Maintenance Cost Analysis and Maintenance Work Orders provide Maintenance Managers information to maximize performance, identify potential issues well in advance, and address them before they escalate into serious problems.

    More information about the existing Business Intelligence Analytics applications can be found on this page: http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html

    If you are not familiar with Oracle Manufacturing or Oracle Enterprise Asset Management, these PDFs might get you started: http://www.oracle.com/us/products/applications/060289.pdf and http://www.oracle.com/us/products/applications/057127.pdf

    Read the article

  • Video works with 'Try me' but not after install. What is the difference? (Ubuntu 12.04 LTS)

    - by HarveyP
    My hard drive got corrupted so I did a reinstall. Tested Youtube in FF during 'try me' and it worked - jerky, but it worked. Instal without all the updates (576 outstanding now) in order to get ff installed as per the demo - to no avail. In 'try me' mode ff NEVER crashed! After install ff crashed whilst I was typing in 'youtube' in the address field. When I finally got to youtube - no video. What is the difference between ff in try me and ff after install? Off to try some selected updates now to see if I can see it for myself. In previous installation I had several profiles and aliased ff with -safe-mode switch to simplify startup of most stable ff. Also found that ff startup in graphic mode worked better (but still without video) with all of the extensions disabled and all of the plugins set to "ask" and always denied ... I have SiS graphic card in SiS Motherboard for XP and ancient Hyundai ImageQuest QV770 monitor. I have Ubuntu 12.04.01 LTS 1 day after install with only the immediate upgrades requested to language pack (English UK). Using FR Alternative keyboard. Connected with domestic wifi network from Orange (FT) I really want to use Skype, but won't bother installing it (again) without video as I can do my sms on FB - whilst ff is not crashed ... Update ... Is something overflowing? I have just had to reboot in order to get ff to restart in any way shape or form - restart on crash form generates new crash form, etc. It was however a good half hour before it crashed so some improvement over conditions before disk corruption. I have now installed all of the critical updates (332 recommended updates still outstanding) which included some relating to ff. Still no video. Still crashing - especially when on Grepolis website. Since the re-install I have had a lovely 1024x768 screen, but after last ff crash and reboot I got a message about 'low graphics mode' and 'setting things myself'. I was not sufficiently tuned in at the time to take proper note - I have no doubt I shall see it again and shall report accordingly. I still have only laptop options for my screen and do not know how to rectify this. Spent a few days with ubuntu on a different, newer machine which has now suffered a graphics breakdown. Returned to this old one again, but with new flat screen Monitor. Found SIS drivers for my graphics BUT it is intended for Red Hat 7.2. I chose this over the version for 7.0 because I thought what the hell, I might not be able to do anything with either of them but this is the later one ... The file will not open with software manager - found a similar problem on Overclock but it has not helped me to install this driver. File name is sis_drv.o-410 and it is currently idling away in my Downloads folder ... I have tried the solution offered on another sis problem, but this shows that my xserver-xorg-video-sis driver is up to date. I am now at a loss as to how to proceed if I can't install the latest sis driver from sis ... Does nobody know how FF changes from "try-me" to "installed"? Any time I MUST have video I reboot from the disk again, but this is tedious! Also one of the things I mock most about MS is the constant rebooting ... UPDATE 10/6/2014 I have installed chromium-browser - worse, crashes even more often than ff.I have installed epiphany - better; Video works but not the associated soundtrack.FireFox is version 14.01 in 'try me' and version 29.0 from my install. Would it be useful to try to downgrade FireFox in order to get video?

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization.  The potential consolidation density is a function of the extent to which the infrastructure is shared.  The three models provide increasing degrees of sharing: Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside. Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits. Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications.  (Note that a single deployment can combine Database and Schema consolidations.) Customer value: lower costs, better system utilization In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And, the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage. Customer value: higher productivity The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime. Capabilities / Characteristics In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied. Activities / Tasks Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. 
Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Broken Views

    - by Ajarn Mark Caldwell
    “SELECT *” isn’t just hazardous to performance, it can actually return blatantly wrong information. There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM … syntax.  The two most common explanations that I have seen are: Performance:  The SELECT * syntax will return every column in the table, but frequently you really only need a few of the columns, so by using SELECT * you are retrieving large volumes of data that you don’t need but that the system still has to process, marshal across tiers, and so on.  It would be much more efficient to select only the specific columns that you need. Future-proofing:  If you are taking other shortcuts in your code, along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system.  For example, if you use SELECT * to return results from a table into a DataTable in .NET, and then reference columns positionally (e.g. myDataRow[5]), you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns’ ordinal positions.  Or if you use INSERT…SELECT *, then you will likely run into errors when a new column is added to the source table in any position. And if you use SELECT * in the definition of a view, you will run into a variation of the future-proofing problem mentioned above.  One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development.  I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly.  I’ll walk you through the test script so you can see for yourself what happens. We are going to create a table and two views that are based on that table, one of which uses SELECT * and the other of which explicitly lists the column names.  The script to create these objects is listed below.

        IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
        GO
        IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
        GO
        IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
        GO

        CREATE TABLE testtab (col1 NVARCHAR(5) NULL, col2 NVARCHAR(5) NULL)
        INSERT INTO testtab (col1, col2)
        VALUES ('A','B'), ('A','B')
        GO
        CREATE VIEW testtab_vw AS SELECT * FROM testtab
        GO
        CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
        GO

    Now, to prove that the two views currently return equivalent results, select from them.

        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    OK, so far, so good.  Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns?  (Side note: I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for clarity for later people reviewing the schema, it may make sense to group certain columns together.  Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.)

        DROP TABLE testtab
        GO
        CREATE TABLE testtab (col1 NVARCHAR(5) NULL, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) NULL)
        INSERT INTO testtab (col1, col3, col2)
        VALUES ('A','C','B'), ('A','C','B')
        GO
        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    I would have expected that the view using SELECT * in its definition would essentially pass the column name through and still retrieve the correct data, but that is not what happens.  When you run our two select statements again, you see that the view based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time the view was created.  Sure, one work-around is to recreate the view, but you can’t really count on other developers to know the dependencies you have built in, and they won’t necessarily recreate the view when they refactor the table. I am sure that there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2, but actually be receiving data from col3.  By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1. So, let the developer beware… know what assumptions are in effect around your code, and keep on discouraging people from using the SELECT * syntax in anything but the simplest of ad-hoc queries. And of course, let’s clean up after ourselves.  To eliminate the database objects created during this test, run the following commands.

        DROP TABLE testtab
        DROP VIEW testtab_vw
        DROP VIEW testtab_vw_named

    Read the article

  • Couldn't drop privileges: User is missing UID (see mail_uid setting)

    - by drecute
    I'm hoping I can use some help. I'm configuring dovecot_ldap, but I can't seem to be able to get dovecot to authenticate the ldap user. Below is my config and log info: hosts = 192.168.128.45:3268 dn = cn=Administrator,cn=Users,dc=company,dc=example,dc=com dnpass = "passwd" auth_bind = yes ldap_version = 3 base = dc=company, dc=example, dc=com user_attrs = sAMAccountName=home=/var/vmail/example.com/%$,uid=1001,gid=1001 user_filter = (&(sAMAccountName=%Ln)) pass_filter = (&(ObjectClass=person)(sAMAccountName=%u)) dovecot.conf # 2.0.19: /etc/dovecot/dovecot.conf # OS: Linux 3.2.0-33-generic x86_64 Ubuntu 12.04 LTS auth_mechanisms = plain login auth_realms = example.com auth_verbose = yes disable_plaintext_auth = no mail_access_groups = mail mail_location = mbox:~/mail:INBOX=/var/mail/%u mail_privileged_group = mail passdb { driver = pam } passdb { driver = passwd } passdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } passdb { args = scheme=CRYPT username_format=%u /etc/dovecot/users driver = passwd-file } protocols = " imap pop3" service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } } service imap-login { inet_listener imap { port = 143 } inet_listener imaps { port = 993 ssl = yes } } ssl_cert = </etc/ssl/certs/dovecot.pem ssl_key = </etc/ssl/private/dovecot.pem userdb { driver = passwd } userdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } userdb { args = username_format=%u /etc/dovecot/users driver = passwd-file } protocol imap { imap_client_workarounds = tb-extra-mailbox-sep imap_logout_format = bytes=%i/%o mail_plugins = } mail.log Nov 29 10:51:44 mail dovecot: auth-worker: pam(charyorde,10.10.1.28): pam_authenticate() failed: Authentication failure (password mismatch?) Nov 29 10:51:44 mail dovecot: auth-worker: passwd(charyorde,10.10.1.28): unknown user Nov 29 10:51:44 mail dovecot: auth: passwd(charyorde,10.10.1.28): unknown user Nov 29 10:51:44 mail dovecot: imap-login: Login: user=<charyorde>, method=PLAIN, rip=10.10.1.28, lip=10.10.1.30, mpid=1892, TLS Nov 29 10:51:44 mail dovecot: imap(charyorde): Error: user charyorde: Couldn't drop privileges: User is missing UID (see mail_uid setting) Nov 29 10:51:44 mail dovecot: imap(charyorde): Error: Internal error occurred. Refer to server log for more information. Nov 29 10:51:46 mail dovecot: auth-worker: pam(charyorde,10.10.1.28): pam_authenticate() failed: Authentication failure (password mismatch?) Nov 29 10:51:46 mail dovecot: auth-worker: passwd(charyorde,10.10.1.28): unknown user Nov 29 10:51:46 mail dovecot: auth: passwd(charyorde,10.10.1.28): unknown user Nov 29 10:51:46 mail dovecot: imap-login: Login: user=<charyorde>, method=PLAIN, rip=10.10.1.28, lip=10.10.1.30, mpid=1894, TLS Nov 29 10:51:46 mail dovecot: imap(charyorde): Error: user charyorde: Couldn't drop privileges: User is missing UID (see mail_uid setting) Nov 29 10:51:46 mail dovecot: imap(charyorde): Error: Internal error occurred. Refer to server log for more information. Nov 29 10:51:48 mail dovecot: auth-worker: pam([email protected],10.10.1.28): pam_authenticate() failed: Authentication failure (password mismatch?) 
Nov 29 10:51:48 mail dovecot: auth-worker: passwd([email protected],10.10.1.28): unknown user Nov 29 10:51:48 mail dovecot: auth: ldap([email protected],10.10.1.28): unknown user Nov 29 10:51:48 mail dovecot: auth: passwd-file([email protected],10.10.1.28): unknown user Nov 29 10:51:54 mail postfix/smtpd[1880]: idle timeout -- exiting Nov 29 10:51:54 mail postfix/smtpd[1879]: idle timeout -- exiting Nov 29 10:51:54 mail postfix/smtpd[1886]: proxymap stream disconnect Nov 29 10:51:54 mail postfix/smtpd[1887]: proxymap stream disconnect Nov 29 10:51:54 mail postfix/smtpd[1886]: auto_clnt_close: disconnect private/tlsmgr stream Nov 29 10:51:54 mail postfix/smtpd[1887]: auto_clnt_close: disconnect private/tlsmgr stream Nov 29 10:51:54 mail postfix/smtpd[1887]: idle timeout -- exiting Nov 29 10:51:54 mail postfix/smtpd[1886]: idle timeout -- exiting Nov 29 10:51:56 mail dovecot: auth-worker: pam([email protected],10.10.1.28): pam_authenticate() failed: Authentication failure (password mismatch?) Nov 29 10:51:56 mail dovecot: auth-worker: passwd([email protected],10.10.1.28): unknown user Nov 29 10:51:56 mail dovecot: auth: ldap([email protected],10.10.1.28): unknown user Nov 29 10:51:56 mail dovecot: auth: passwd-file([email protected],10.10.1.28): unknown user Nov 29 10:52:04 mail dovecot: auth-worker: pam([email protected],10.10.1.28): pam_authenticate() failed: Authentication failure (password mismatch?) Nov 29 10:52:04 mail dovecot: auth-worker: passwd([email protected],10.10.1.28): unknown user Nov 29 10:52:04 mail dovecot: auth: ldap([email protected],10.10.1.28): unknown user Nov 29 10:52:04 mail dovecot: auth: passwd-file([email protected],10.10.1.28): unknown user Nov 29 10:52:06 mail dovecot: imap-login: Disconnected (auth failed, 3 attempts): user=<[email protected]>, method=PLAIN, rip=10.10.1.28, lip=10.10.1.30, TLS Thank you for looking into this.

    Read the article

  • Apache SSO through Kerberos using Machine Account

    - by watkipet
    I'm attempting to get Apache on Ubuntu 12.04 to authenticate users via Kerberos SSO to a Windows 2008 Active Directory server. Here are a few things that make my situation different: I don't have administrative access to the Windows Server (nor will I ever have access). I also cannot have any changes to the server made on my behalf. I've joined Ubuntu server to the Active Directory using PBIS open. Users can log into the Ubuntu server using their AD credentials. kinit also works fine for each user. Since I can't change AD (except for adding new machines and SPNs), I cannot add a service account for Apache on Ubuntu. Since I can't add I service account, I have to use the machine keytab (/etc/krb5.keytab), or at least use the machine password in another keytab. Right now I'm using the machine keytab and giving Apache readonly access (bad idea, I know). I've already added the SPN using net ads keytab add HTTP -U Since I'm using Ubuntu 12.04, the only encoding types that get added during "net ads keytab add" are arcfour-hmac, des-cbc-crc, and des-cbc-md5. PBIS adds the AES encoding types to the host and cifs principals when it joins the domain, but I have yet to get "net ads keytab add" to do this. ktpass and setspn are out of the question because of #1 above. I've configured (for Kerberos SSO) and tested both IE 8 Firefox. I'm using the following configuration in my Apache site config: <Location /secured> AuthType Kerberos AuthName "Kerberos Login" KrbMethodNegotiate On KrbMethodK5Passwd On KrbAuthRealms DOMAIN.COM Krb5KeyTab /etc/krb5.keytab KrbLocalUserMapping On require valid-user </Location> When Firefox tries to connect get the following in Apache's error.log (LogLevel debug): [Wed Oct 23 13:48:31 2013] [debug] src/mod_auth_kerb.c(1628): [client 192.168.0.2] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos [Wed Oct 23 13:48:31 2013] [debug] mod_deflate.c(615): [client 192.168.0.2] Zlib: Compressed 477 to 322 : URL /secured [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(1628): [client 192.168.0.2] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(994): [client 192.168.0.2] Using HTTP/[email protected] as server principal for password verification [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(698): [client 192.168.0.2] Trying to get TGT for user [email protected] [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(609): [client 192.168.0.2] Trying to verify authenticity of KDC using principal HTTP/[email protected] [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(652): [client 192.168.0.2] krb5_rd_req() failed when verifying KDC [Wed Oct 23 13:48:37 2013] [error] [client 192.168.0.2] failed to verify krb5 credentials: Decrypt integrity check failed [Wed Oct 23 13:48:37 2013] [debug] src/mod_auth_kerb.c(1073): [client 192.168.0.2] kerb_authenticate_user_krb5pwd ret=401 user=(NULL) authtype=(NULL) [Wed Oct 23 13:48:37 2013] [debug] mod_deflate.c(615): [client 192.168.0.2] Zlib: Compressed 477 to 322 : URL /secured When IE 8 tries to connect I get: [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1628): [client 192.168.0.2] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos [Wed Oct 23 14:03:30 2013] [debug] mod_deflate.c(615): [client 192.168.0.2] Zlib: Compressed 477 to 322 : URL /secured [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1628): [client 192.168.0.2] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos 
[Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1240): [client 192.168.0.2] Acquiring creds for HTTP@apache_server [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1385): [client 192.168.0.2] Verifying client data using KRB5 GSS-API [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1401): [client 192.168.0.2] Client didn't delegate us their credential [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1420): [client 192.168.0.2] GSS-API token of length 9 bytes will be sent back [Wed Oct 23 14:03:30 2013] [debug] src/mod_auth_kerb.c(1101): [client 192.168.0.2] GSS-API major_status:000d0000, minor_status:000186a5 [Wed Oct 23 14:03:30 2013] [error] [client 192.168.0.2] gss_accept_sec_context() failed: Unspecified GSS failure. Minor code may provide more information (, ) [Wed Oct 23 14:03:30 2013] [debug] mod_deflate.c(615): [client 192.168.0.2] Zlib: Compressed 477 to 322 : URL /secured Let me know if you'd like additional log and config files--the initial question is getting long enough.

    Read the article

  • nginx proxying websockets, must be missing something

    - by CodeMonkey
    I have a basic chat app written in node.js using express and socket.io; it works fine when connecting directly to node on port 3000 But doesn't work when I try to use nginx v1.4.2 as a proxy. I start off using the connection map map $http_upgrade $connection_upgrade { default upgrade; '' close; } Then add the locations location /socket.io/ { proxy_pass http://node; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Request-Id $txid; proxy_set_header X-Session-Id $uid_set+$uid_got; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_buffering off; proxy_read_timeout 86400; keepalive_timeout 90; proxy_cache off; access_log /var/log/nginx/webservice.access.log; error_log /var/log/nginx/webservice.error.log; } location /web-service/ { proxy_pass http://node; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Request-Id $txid; proxy_set_header X-Session-Id $uid_set+$uid_got; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_buffering off; proxy_read_timeout 86400; keepalive_timeout 90; access_log /var/log/nginx/webservice.access.log; error_log /var/log/nginx/webservice.error.log; rewrite /web-service/(.*) /$1 break; proxy_cache off; } These are built up using all of the tips to get it working that I could find. The error log does not show any errors. (except when I stop node to test the error logging is working) When through nginx I do see a websocket connection in the dev tools, with the status of 101; but the frames tab under the resuects is empty. The only differnece I can see in the response headers is a case difference - "upgrade" vs "Upgrade" - through nginx : Connection:upgrade Date:Fri, 08 Nov 2013 11:49:25 GMT Sec-WebSocket-Accept:LGB+iEBb8Ql9zYfqNfuuXzdzjgg= Server:nginx/1.4.2 Upgrade:websocket direct from node Connection:Upgrade Sec-WebSocket-Accept:8nwPpvg+4wKMOyQBEvxWXutd8YY= Upgrade:websocket output from node (when used through nginx) debug - served static content /socket.io.js debug - client authorized info - handshake authorized iaej2VQlsbLFIhachyb1 debug - setting request GET /socket.io/1/websocket/iaej2VQlsbLFIhachyb1 debug - set heartbeat interval for client iaej2VQlsbLFIhachyb1 debug - client authorized for debug - websocket writing 1:: debug - websocket writing 5:::{"name":"message","args":[{"message":"welcome to the chat"}]} debug - clearing poll timeout debug - jsonppolling writing io.j[0]("8::"); debug - set close timeout for client 7My3F4CuvZC0I4Olhybz debug - jsonppolling closed due to exceeded duration debug - clearing poll timeout debug - jsonppolling writing io.j[0]("8::"); debug - set close timeout for client AkCYl0nWNZAHeyUihyb0 debug - jsonppolling closed due to exceeded duration debug - setting request GET /socket.io/1/xhr-polling/iaej2VQlsbLFIhachyb1?t=1383911206158 debug - setting poll timeout debug - discarding transport debug - cleared heartbeat interval for client iaej2VQlsbLFIhachyb1 debug - setting request GET /socket.io/1/jsonp-polling/iaej2VQlsbLFIhachyb1?t=1383911216160&i=0 debug - setting poll timeout debug - discarding transport debug - clearing poll timeout debug - clearing poll timeout debug - jsonppolling writing io.j[0]("8::"); debug - set close timeout for client iaej2VQlsbLFIhachyb1 debug - jsonppolling closed due to exceeded duration debug - setting request GET 
/socket.io/1/jsonp-polling/iaej2VQlsbLFIhachyb1?t=1383911236429&i=0 debug - setting poll timeout debug - discarding transport debug - cleared close timeout for client iaej2VQlsbLFIhachyb1 when direct to node, the client does not start polling. The normal http stuff node outputs works fine with nginx. Clearly something I am not seeing, but I am stuck, thanks :)

    Read the article

  • Websphere federated repository for Active Directory

    - by Drakiula
    Hi, What I am trying to achieve is to have Websphere 6.1 use Active Directory users authentication. Websphere is running on Windows 2008 R2. What I've done already: Succesfully setup a federated repository for Windows Active Directory (LDAP); Create a realm definition for the federated repository previously defined; Set the realm definition as the current real definition. Stop the Websphere service. When I attempt to start the Websphere service again, it crashes with the following stacktrace: ------Start of DE processing------ = [9/3/10 2:36:14:133 PDT] , key = com.ibm.websphere.security.EntryNotFoundException com.ibm.ws.security.registry.UserRegistryImpl.createCredential 824 Exception = com.ibm.websphere.security.EntryNotFoundException Source = com.ibm.ws.security.registry.UserRegistryImpl.createCredential probeid = 824 Stack Dump = com.ibm.websphere.wim.exception.EntityNotFoundException: CWWIM4001E The 'null' entity was not found. at com.ibm.ws.wim.registry.util.UniqueIdBridge.getUniqueUserId(UniqueIdBridge.java:233) at com.ibm.ws.wim.registry.WIMUserRegistry$6.run(WIMUserRegistry.java:351) at com.ibm.ws.wim.security.authz.jacc.JACCSecurityManager.runAsSuperUser(JACCSecurityManager.java:500) at com.ibm.ws.wim.security.authz.ProfileSecurityManager.runAsSuperUser(ProfileSecurityManager.java:964) at com.ibm.ws.wim.registry.WIMUserRegistry.getUniqueUserId(WIMUserRegistry.java:340) at com.ibm.ws.security.registry.UserRegistryImpl.createCredential(UserRegistryImpl.java:750) at com.ibm.ws.security.ltpa.LTPAServerObject.authenticate(LTPAServerObject.java:776) at com.ibm.ws.security.server.lm.ltpaLoginModule.login(ltpaLoginModule.java:453) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:795) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:209) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:709) at java.security.AccessController.doPrivileged(AccessController.java:246) at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:706) at javax.security.auth.login.LoginContext.login(LoginContext.java:603) at com.ibm.ws.security.auth.JaasLoginHelper.jaas_login(JaasLoginHelper.java:376) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3513) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3306) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3086) at com.ibm.ws.security.auth.ContextManagerImpl.getServerSubjectInternal(ContextManagerImpl.java:2180) at com.ibm.ws.security.auth.ContextManagerImpl.getServerSubjectInternal(ContextManagerImpl.java:1972) at com.ibm.ws.security.auth.ContextManagerImpl.initialize(ContextManagerImpl.java:2530) at com.ibm.ws.security.auth.ContextManagerImpl.initialize(ContextManagerImpl.java:2560) at com.ibm.ws.security.core.SecurityContext.enable(SecurityContext.java:83) at com.ibm.ws.security.core.distSecurityComponentImpl.initialize(distSecurityComponentImpl.java:379) at com.ibm.ws.security.core.distSecurityComponentImpl.startSecurity(distSecurityComponentImpl.java:336) at com.ibm.ws.security.core.SecurityComponentImpl.startSecurity(SecurityComponentImpl.java:105) at 
com.ibm.ws.security.core.ServerSecurityComponentImpl.start(ServerSecurityComponentImpl.java:283) at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977) at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673) at com.ibm.ws.runtime.component.ApplicationServerImpl.start(ApplicationServerImpl.java:197) at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977) at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673) at com.ibm.ws.runtime.component.ServerImpl.start(ServerImpl.java:526) at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:192) at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:140) at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:461) at com.ibm.ws.runtime.WsServer.main(WsServer.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:183) at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:90) at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:72) at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:78) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:92) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:68) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at org.eclipse.core.launcher.Main.invokeFramework(Main.java:336) at org.eclipse.core.launcher.Main.basicRun(Main.java:280) at org.eclipse.core.launcher.Main.run(Main.java:977) at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:329) at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:92) Dump of callerThis = Object type = com.ibm.ws.security.registry.UserRegistryImpl com.ibm.ws.security.registry.UserRegistryImpl@68a068a0 Anybody maybe has a hint on this? I followed the exact steps described in the IBM Infocenter for setting this up. Thanks in advance for the help.
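    CWWIM4001E at startup generally means the federated repository could not resolve an entity it was asked for — often the primary administrative user — against the configured base DN and filters. Before digging further into WebSphere itself, it can help to confirm from the same machine that a plain LDAP search actually finds that account. A rough stand-alone sketch follows; the host, DNs, password and account name are all placeholder assumptions, not values from the original post:

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.NamingEnumeration;
        import javax.naming.directory.DirContext;
        import javax.naming.directory.InitialDirContext;
        import javax.naming.directory.SearchControls;
        import javax.naming.directory.SearchResult;

        // Stand-alone check: can the bind account search AD and find the admin user by sAMAccountName?
        public class AdLookupCheck {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
                env.put(Context.SECURITY_AUTHENTICATION, "simple");
                env.put(Context.SECURITY_PRINCIPAL, "CN=Bind Account,CN=Users,DC=example,DC=com");
                env.put(Context.SECURITY_CREDENTIALS, "secret");

                DirContext ctx = new InitialDirContext(env);   // throws if the bind itself is wrong
                SearchControls sc = new SearchControls();
                sc.setSearchScope(SearchControls.SUBTREE_SCOPE);

                // The same kind of filter an LDAP repository adapter would issue for a login name.
                String filter = "(&(objectClass=user)(sAMAccountName={0}))";
                NamingEnumeration<SearchResult> results =
                        ctx.search("DC=example,DC=com", filter, new Object[] { "wasadmin" }, sc);

                while (results.hasMore()) {
                    System.out.println("Found: " + results.next().getNameInNamespace());
                }
                ctx.close();
            }
        }

    If this search returns nothing for the WebSphere administrative user, the realm's base DN or login-property mapping is the first place to look.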

    Read the article

  • Error when installing Lync Server, "Installing OcsCore.msi(Feature_LocalMgmtStore)...failure code 1603"

    - by Trikks
    Im battling to install Lync Server in a test environment and are at the "Install Local Configuration Store" step. The prerequisites seems alright but bombs when installing the OcsCore.msi ... Checking prerequisite SqlNativeClient...prerequisite satisfied. Checking prerequisite SqlBackcompat...prerequisite satisfied. Checking prerequisite UcmaRedist...prerequisite satisfied. Installing OcsCore.msi(Feature_LocalMgmtStore)...failure code 1603 Error returned while installing OcsCore.msi(Feature_LocalMgmtStore), code 1603. Please consult log at C:\Users\Administrator.HAWC\AppData\Local\Temp\1\Add-OcsCore.msi-Feature_LocalMgmtStore-[2012_07_08][12_00_27].log The logfile doesn't really help me either, this is the end of it Property(S): Privileged = 1 Property(S): USERNAME = Windows User Property(S): DATABASE = C:\Windows\Installer\9525f.msi Property(S): OriginalDatabase = C:\ProgramData\Microsoft\Lync Server\Deployment\cache\4.0.7577.0\setup\OcsCore.msi Property(S): UILevel = 2 Property(S): Preselected = 1 Property(S): ACTION = INSTALL Property(S): WIX_ACCOUNT_LOCALSYSTEM = NT AUTHORITY\SYSTEM Property(S): WIX_ACCOUNT_LOCALSERVICE = NT AUTHORITY\LOCAL SERVICE Property(S): WIX_ACCOUNT_NETWORKSERVICE = NT AUTHORITY\NETWORK SERVICE Property(S): WIX_ACCOUNT_ADMINISTRATORS = BUILTIN\Administrators Property(S): WIX_ACCOUNT_USERS = BUILTIN\Users Property(S): WIX_ACCOUNT_GUESTS = BUILTIN\Guests Property(S): ROOTDRIVE = C:\ Property(S): CostingComplete = 1 Property(S): OutOfDiskSpace = 0 Property(S): OutOfNoRbDiskSpace = 0 Property(S): PrimaryVolumeSpaceAvailable = 0 Property(S): PrimaryVolumeSpaceRequired = 0 Property(S): PrimaryVolumeSpaceRemaining = 0 Property(S): INSTALLLEVEL = 1 Property(S): SOURCEDIR = C:\ProgramData\Microsoft\Lync Server\Deployment\cache\4.0.7577.0\setup\ Property(S): SourcedirProduct = {9521B708-9D80-46A3-9E58-A74ACF4E343E} === Logging stopped: 2012-07-08 12:01:46 === MSI (s) (98:F8) [12:01:46:354]: Note: 1: 1729 MSI (s) (98:F8) [12:01:46:354]: Product: Microsoft Lync Server 2010, Core Components -- Configuration failed. MSI (s) (98:F8) [12:01:46:354]: Windows Installer reconfigured the product. Product Name: Microsoft Lync Server 2010, Core Components. Product Version: 4.0.7577.0. Product Language: 1033. Manufacturer: Microsoft Corporation. Reconfiguration success or error status: 1603. MSI (s) (98:F8) [12:01:46:356]: Deferring clean up of packages/files, if any exist MSI (s) (98:F8) [12:01:46:356]: MainEngineThread is returning 1603 MSI (s) (98:84) [12:01:46:362]: RESTART MANAGER: Session closed. MSI (s) (98:84) [12:01:46:362]: No System Restore sequence number for this installation. MSI (s) (98:84) [12:01:46:363]: User policy value 'DisableRollback' is 0 MSI (s) (98:84) [12:01:46:363]: Machine policy value 'DisableRollback' is 0 MSI (s) (98:84) [12:01:46:363]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (98:84) [12:01:46:364]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 MSI (s) (98:84) [12:01:46:364]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 MSI (s) (98:84) [12:01:46:364]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (s) (98:84) [12:01:46:364]: Restoring environment variables MSI (s) (98:84) [12:01:46:373]: Destroying RemoteAPI object. MSI (s) (98:D4) [12:01:46:373]: Custom Action Manager thread ending. 
MSI (c) (20:64) [12:01:46:379]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (20:64) [12:01:46:380]: MainEngineThread is returning 1603 === Verbose logging stopped: 2012-07-08 12:01:46 === Any advice where to start in this? Thanks

    Read the article

  • Wireless AAA for a small, bandwidth-limited hotel.

    - by Anthony Hiscox
    We (the tech I work with and myself) live in a remote northern town where Internet access is somewhat of a luxury, and bandwidth is quite limited. Here, overage charges ranging from few hundreds, to few thousands of dollars a month, is not uncommon. I myself incur regular monthly charges just through my regular Internet usage at home (I am allowed 10G for $60CAD!) As part of my work, I have found myself involved with several hotels that are feeling this. I know that I can come up with something to solve this problem, but I am relatively new to system administration and I don't want my dreams to overcome reality. So, I pass these ideas on to you, those with much more experience than I, in hopes you will share some of your thoughts and concerns. This system must be cost effective, yes the charges are high here, but the trust in technology is the lowest I've ever seen. Must be capable of helping client reduce their usage (squid) Allow a limited (throughput and total usage) amount of free Internet, as this is often franchise policy. Allow a user to track their bandwidth usage Allow (optional) higher speed and/or usage for an additional charge. This fee can be obtained at the front desk on checkout and should not require the use of PayPal or Credit Card. Unfortunately some franchises have ridiculous policies that require the use of a third party remote service to authenticate guests to your network. This means WPA is out, and it also means that I do not auth before Internet usage, that will be their job. However, I do require the ABILITY to perform authentication for Internet access if a hotel does not have this policy. I will still have to track bandwidth (under a guest account by default) and provide the same limiting, however the guest often will require a complete 'unlimited' access, in terms of existence, not throughput. Provide firewalling capabilities for hotels that have nothing, Office, and Guest network segregation (some of these guys are running their office on the guest network, with no encryption, and a simple TOS to get on!) Prevent guests from connecting to other guests, however provide a means to allow this to happen. IE. Each guest connects to a page and allows the other guest, this writes a iptables rule (with python-netfilter) and allows two rooms to play a game, for instance. My thoughts on how to implement this. One decent box (we'll call it a router now) with a lot of ram, and 3 NIC's: Internet Office Guests (AP's + In Room Ethernet) Router Firewall Rules Guest can talk to router only, through which they are routed to where they need to go, including Internet services. Office can be used to bridge Office to Internet if an existing solution is not in place, otherwise, it simply works for a network accessible web (webmin+python-webmin?) interface. Router Software: OpenVZ provides virtualization for a few services I don't really trust. Squid, FreeRADIUS and Apache. The only service directly accessible to guests is Apache. Apache has mod_wsgi and django, because I can write quickly using django and my needs are low. It also potentially has the FreeRADIUS mod, but there seems to be some caveats with this. Firewall rules are handled on the router with iptables. Webmin (or a custom django app maybe) provides abstracted control over any features that the staff may need to access. Python, if you haven't guessed it's the language I feel most comfortable in, and I use it for almost everything. 
And finally, has this been done? Is it an overly massive project not worth taking on for one guy, and/or are there tools I'm missing that could make my life easier? For the record, I am fairly good with Python, but not very familiar with many other languages (I can struggle through PHP; it's a cosmetic issue there). I am also an avid Linux user, comfortable with config files and the command line. Thank you for your time; I look forward to reading your responses. Edit: My apologies if this is not a Q&A in the sense that some were expecting; I'm just looking for ideas and want to make sure I'm not trying to do something that's already been done. I'm looking at pfSense now as a possible start for what I need.
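A minimal sketch of the room-pairing idea described above, assuming iptables is the firewall on the router box, that guest isolation is enforced by a default drop in the FORWARD chain, and that a Django view has already validated both guest IPs. The function names and example addresses are made up for illustration; python-netfilter could be swapped in for the subprocess calls.

    # room_pairing.py - hypothetical helper; must run as root on the router box.
    import subprocess

    def _iptables(*args):
        subprocess.check_call(["iptables"] + list(args))

    def allow_pair(ip_a, ip_b):
        # Insert ACCEPT rules in both directions ahead of the guest-isolation drop.
        for src, dst in ((ip_a, ip_b), (ip_b, ip_a)):
            _iptables("-I", "FORWARD", "-s", src, "-d", dst, "-j", "ACCEPT")

    def revoke_pair(ip_a, ip_b):
        # Remove the same rules again, e.g. at checkout or from a nightly cleanup job.
        for src, dst in ((ip_a, ip_b), (ip_b, ip_a)):
            _iptables("-D", "FORWARD", "-s", src, "-d", dst, "-j", "ACCEPT")

    if __name__ == "__main__":
        allow_pair("10.10.0.21", "10.10.0.34")   # two example guest addresses

The squid quotas and per-guest accounting would sit alongside this; the pairing rule itself is the small part.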

    Read the article

  • Cisco ASA (Client VPN) to LAN - through second VPN to second LAN

    - by user50855
    We have 2 site that is linked by an IPSEC VPN to remote Cisco ASAs: Site 1 1.5Mb T1 Connection Cisco(1) 2841 Site 2 1.5Mb T1 Connection Cisco 2841 In addition: Site 1 has a 2nd WAN 3Mb bonded T1 Connection Cisco 5510 that connects to same LAN as Cisco(1) 2841. Basically, Remote Access (VPN) users connecting through Cisco ASA 5510 needs access to a service at the end of Site 2. This is due to the way the service is sold - Cisco 2841 routers are not under our management and it is setup to allow connection from local LAN VLAN 1 IP address 10.20.0.0/24. My idea is to have all traffic from Remote Users through Cisco ASA destined for Site 2 to go via the VPN between Site 1 and Site 2. The end result being all traffic that hits Site 2 has come via Site 1. I'm struggling to find a great deal of information on how this is setup. So, firstly, can anyone confirm that what I'm trying to achieve is possible? Secondly, can anyone help me to correct the configuration bellow or point me in the direction of an example of such a configuration? Many Thanks. interface Ethernet0/0 nameif outside security-level 0 ip address 7.7.7.19 255.255.255.240 interface Ethernet0/1 nameif inside security-level 100 ip address 10.20.0.249 255.255.255.0 object-group network group-inside-vpnclient description All inside networks accessible to vpn clients network-object 10.20.0.0 255.255.255.0 network-object 10.20.1.0 255.255.255.0 object-group network group-adp-network description ADP IP Address or network accessible to vpn clients network-object 207.207.207.173 255.255.255.255 access-list outside_access_in extended permit icmp any any echo-reply access-list outside_access_in extended permit icmp any any source-quench access-list outside_access_in extended permit icmp any any unreachable access-list outside_access_in extended permit icmp any any time-exceeded access-list outside_access_in extended permit tcp any host 7.7.7.20 eq smtp access-list outside_access_in extended permit tcp any host 7.7.7.20 eq https access-list outside_access_in extended permit tcp any host 7.7.7.20 eq pop3 access-list outside_access_in extended permit tcp any host 7.7.7.20 eq www access-list outside_access_in extended permit tcp any host 7.7.7.21 eq www access-list outside_access_in extended permit tcp any host 7.7.7.21 eq https access-list outside_access_in extended permit tcp any host 7.7.7.21 eq 5721 access-list acl-vpnclient extended permit ip object-group group-inside-vpnclient any access-list acl-vpnclient extended permit ip object-group group-inside-vpnclient object-group group-adp-network access-list acl-vpnclient extended permit ip object-group group-adp-network object-group group-inside-vpnclient access-list PinesFLVPNTunnel_splitTunnelAcl standard permit 10.20.0.0 255.255.255.0 access-list inside_nat0_outbound_1 extended permit ip 10.20.0.0 255.255.255.0 10.20.1.0 255.255.255.0 access-list inside_nat0_outbound_1 extended permit ip 10.20.0.0 255.255.255.0 host 207.207.207.173 access-list inside_nat0_outbound_1 extended permit ip 10.20.1.0 255.255.255.0 host 207.207.207.173 ip local pool VPNPool 10.20.1.100-10.20.1.200 mask 255.255.255.0 route outside 0.0.0.0 0.0.0.0 7.7.7.17 1 route inside 207.207.207.173 255.255.255.255 10.20.0.3 1 crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA crypto dynamic-map outside_dyn_map 20 set 
security-association lifetime seconds 288000 crypto dynamic-map outside_dyn_map 20 set security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set reverse-route crypto map outside_map 20 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map interface outside crypto map outside_dyn_map 20 match address acl-vpnclient crypto map outside_dyn_map 20 set security-association lifetime seconds 28800 crypto map outside_dyn_map 20 set security-association lifetime kilobytes 4608000 crypto isakmp identity address crypto isakmp enable outside crypto isakmp policy 20 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 group-policy YeahRightflVPNTunnel internal group-policy YeahRightflVPNTunnel attributes wins-server value 10.20.0.9 dns-server value 10.20.0.9 vpn-tunnel-protocol IPSec password-storage disable pfs disable split-tunnel-policy tunnelspecified split-tunnel-network-list value acl-vpnclient default-domain value YeahRight.com group-policy YeahRightFLVPNTunnel internal group-policy YeahRightFLVPNTunnel attributes wins-server value 10.20.0.9 dns-server value 10.20.0.9 10.20.0.7 vpn-tunnel-protocol IPSec split-tunnel-policy tunnelspecified split-tunnel-network-list value YeahRightFLVPNTunnel_splitTunnelAcl default-domain value yeahright.com tunnel-group YeahRightFLVPN type remote-access tunnel-group YeahRightFLVPN general-attributes address-pool VPNPool tunnel-group YeahRightFLVPNTunnel type remote-access tunnel-group YeahRightFLVPNTunnel general-attributes address-pool VPNPool authentication-server-group WinRadius default-group-policy YeahRightFLVPNTunnel tunnel-group YeahRightFLVPNTunnel ipsec-attributes pre-shared-key *

    Read the article

  • Useful Command-line Commands on Windows

    - by Sung Meister
    The aim for this Wiki is to promote using a command to open up commonly used applications without having to go through many mouse clicks - thus saving time on monitoring and troubleshooting Windows machines. Answer entries need to specify Application name Commands Screenshot (Optional) Shortcut to commands && - Command Chaining %SYSTEMROOT%\System32\rcimlby.exe -LaunchRA - Remote Assistance (Windows XP) appwiz.cpl - Programs and Features (Formerly Known as "Add or Remove Programs") appwiz.cpl @,2 - Turn Windows Features On and Off (Add/Remove Windows Components pane) arp - Displays and modifies the IP-to-Physical address translation tables used by address resolution protocol (ARP) at - Schedule tasks either locally or remotely without using Scheduled Tasks bootsect.exe - Updates the master boot code for hard disk partitions to switch between BOOTMGR and NTLDR cacls - Change Access Control List (ACL) permissions on a directory, its subcontents, or files calc - Calculator chkdsk - Check/Fix the disk surface for physical errors or bad sectors cipher - Displays or alters the encryption of directories [files] on NTFS partitions cleanmgr.exe - Disk Cleanup clip - Redirects output of command line tools to the Windows clipboard cls - clear the command line screen cmd /k - Run command with command extensions enabled color - Sets the default console foreground and background colors in console command.com - Default Operating System Shell compmgmt.msc - Computer Management control.exe /name Microsoft.NetworkAndSharingCenter - Network and Sharing Center control keyboard - Keyboard Properties control mouse(or main.cpl) - Mouse Properties control sysdm.cpl,@0,3 - Advanced Tab of the System Properties dialog control userpasswords2 - Opens the classic User Accounts dialog desk.cpl - opens the display properties devmgmt.msc - Device Manager diskmgmt.msc - Disk Management diskpart - Disk management from the command line dsa.msc - Opens active directory users and computers dsquery - Finds any objects in the directory according to criteria dxdiag - DirectX Diagnostic Tool eventvwr - Windows Event Log (Event Viewer) explorer . - Open explorer with the current folder selected. explorer /e, . - Open explorer, with folder tree, with current folder selected. F7 - View command history find - Searches for a text string in a file or files findstr - Find a string in a file firewall.cpl - Opens the Windows Firewall settings fsmgmt.msc - Shared Folders fsutil - Perform tasks related to FAT and NTFS file systems ftp - Transfers files to and from a computer running an FTP server service getmac - Shows the mac address(es) of your network adapter(s) gpedit.msc - Group Policy Editor gpresult - Displays the Resultant Set of Policy (RSoP) information for a target user and computer httpcfg.exe - HTTP Configuration Utility iisreset - To restart IIS InetMgr.exe - Internet Information Services (IIS) Manager 7 InetMgr6.exe - Internet Information Services (IIS) Manager 6 intl.cpl - Regional and Language Options ipconfig - Internet protocol configuration lusrmgr.msc - Local Users and Groups Administrator msconfig - System Configuration notepad - Notepad? 
;) mmsys.cpl - Sound/Recording/Playback properties mode - Configure system devices more - Displays one screen of output at a time mrt - Microsoft Windows Malicious Software Removal Tool mstsc.exe - Remote Desktop Connection nbstat - displays protocol statistics and current TCP/IP connections using NBT ncpa.cpl - Network Connections netsh - Display or modify the network configuration of a computer that is currently running netstat - Network Statistics net statistics - Check computer up time net stop - Stops a running service. net use - Connects a computer to or disconnects a computer from a shared resource, or displays information about computer connections odbcad32.exe - ODBC Data Source Administrator pathping - A traceroute that collects detailed packet loss stats perfmon - Opens Reliability and Performance Monitor ping - Determine whether a remote computer is accessible over the network powercfg.cpl - Power management control panel applet quser - Display information about user sessions on a terminal server qwinsta - See disconnected remote desktop sessions reg.exe - Console Registry Tool for Windows regedit - Registry Editor rasdial - Connects to a VPN or a dialup network robocopy - Backup/Restore/Copy large amounts of files reliably rsop.msc - Resultant Set of Policy (shows the combined effect of all group policies active on the current system/login) runas - Run specific tools and programs with different permissions than the user's current logon provides sc - Manage anything you want to do with services. schtasks - Enables an administrator to create, delete, query, change, run and end scheduled tasks on a local or remote system. secpol.msc - Local Security Settings services.msc - Services control panel set - Displays, sets, or removes cmd.exe environment variables. set DIRCMD - Preset dir parameter in cmd.exe start - Starts a separate window to run a specified program or command start. - opens the current directory in the Windows Explorer. shutdown.exe - Shutdown or Reboot a local/remote machine subst.exe - Associates a path with a drive letter, including local drives systeminfo -Displays a comprehensive information about the system taskkill - terminate tasks by process id (PID) or image name tasklist.exe - List Processes on local or a remote machine taskmgr.exe - Task Manager telephon.cpl - Telephone and Modem properties timedate.cpl - Date and Time title - Change the title of the CMD window you have open tracert - Trace route wmic - Windows Management Instrumentation Command-line winver.exe - Find Windows Version wscui.cpl - Windows Security Center wuauclt.exe - Windows Update AutoUpdate Client

    Read the article

  • Windows 2008 R2 SMB / CIFS Logging to diagnose Brother MFC Network Scanning

    - by Steven Potter
    I am attempting to setup network scanning on a brother MFC-9970CDW printer. According to the Brother documentation, the printer is setup to connect to any CIFS network share. I applied all of the appropriate setting in the printer however I get a "sending error" when I try to scan a document. When I look at the logs of the 2008 R2 server that I am attempting to connect to; I can see in the security log where the printer successfully authenticates, however nothing else is logged. I would assume that immediately after the authentication, the printer is making a CIFS request and some sort of error is occurring, however I can't seem to find any way to log this information to find out what is going on. Is it possible to get Windows 2008 to log SMB/CIFS traffic? Followup: I installed Microsoft netmon and captured the packets associated with the transaction: 510 3:04:28 PM 7/9/2012 34.4277743 System 192.168.1.134 192.168.1.10 SMB SMB:C; Negotiate, Dialect = NT LM 0.12 {SMBOverTCP:30, TCP:29, IPv4:22} 511 3:04:28 PM 7/9/2012 34.4281246 System 192.168.1.10 192.168.1.134 SMB SMB:R; Negotiate, Dialect is NT LM 0.12 (#0), SpnegoToken (1.3.6.1.5.5.2) {SMBOverTCP:30, TCP:29, IPv4:22} 519 3:04:29 PM 7/9/2012 34.8986214 System 192.168.1.134 192.168.1.10 SMB SMB:C; Session Setup Andx, NTLM NEGOTIATE MESSAGE {SMBOverTCP:30, TCP:29, IPv4:22} 520 3:04:29 PM 7/9/2012 34.8989310 System 192.168.1.10 192.168.1.134 SMB SMB:R; Session Setup Andx, NTLM CHALLENGE MESSAGE - NT Status: System - Error, Code = (22) STATUS_MORE_PROCESSING_REQUIRED {SMBOverTCP:30, TCP:29, IPv4:22} 522 3:04:29 PM 7/9/2012 34.9022870 System 192.168.1.134 192.168.1.10 SMB SMB:C; Session Setup Andx, NTLM AUTHENTICATE MESSAGEVersion:v2, Domain: CORP, User: PRINTSUPOFF, Workstation: BRN001BA9AD1FE6 {SMBOverTCP:30, TCP:29, IPv4:22} 523 3:04:29 PM 7/9/2012 34.9032421 System 192.168.1.10 192.168.1.134 SMB SMB:R; Session Setup Andx {SMBOverTCP:30, TCP:29, IPv4:22} 525 3:04:29 PM 7/9/2012 34.9051855 System 192.168.1.134 192.168.1.10 SMB SMB:C; Tree Connect Andx, Path = \\192.168.1.10\IPC$, Service = ????? {SMBOverTCP:30, TCP:29, IPv4:22} 526 3:04:29 PM 7/9/2012 34.9053083 System 192.168.1.10 192.168.1.134 SMB SMB:R; Tree Connect Andx, Service = IPC {SMBOverTCP:30, TCP:29, IPv4:22} 528 3:04:29 PM 7/9/2012 34.9073573 System 192.168.1.134 192.168.1.10 DFSC DFSC:Get DFS Referral Request, FileName: \\192.168.1.10\NSCFILES, MaxReferralLevel: 3 {SMB:33, SMBOverTCP:30, TCP:29, IPv4:22} 529 3:04:29 PM 7/9/2012 34.9152042 System 192.168.1.10 192.168.1.134 SMB SMB:R; Transact2, Get Dfs Referral - NT Status: System - Error, Code = (549) STATUS_NOT_FOUND {SMB:33, SMBOverTCP:30, TCP:29, IPv4:22} 531 3:04:29 PM 7/9/2012 34.9169738 System 192.168.1.134 192.168.1.10 SMB SMB:C; Tree Disconnect {SMBOverTCP:30, TCP:29, IPv4:22} 532 3:04:29 PM 7/9/2012 34.9170688 System 192.168.1.10 192.168.1.134 SMB SMB:R; Tree Disconnect {SMBOverTCP:30, TCP:29, IPv4:22} As you can see, the DFS referral fails and the transaction is shut down. I can't see any reason for the DFS referral to fail. The only reference I can find online is: https://bugzilla.samba.org/show_bug.cgi?id=8003 Anyone have any ideas for a solution?
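One way to take the printer out of the equation is to replay the same CIFS session from a script on a workstation. A rough sketch, assuming the third-party pysmb package (pip install pysmb) and reusing the share, account and domain shown in the capture; the password and the two machine names are placeholders.

    # smb_check.py - sketch only: does the share accept the scanner's credentials?
    from smb.SMBConnection import SMBConnection

    conn = SMBConnection("PRINTSUPOFF", "password-here", "TESTCLIENT", "FILESERVER",
                         domain="CORP", use_ntlm_v2=True)
    if not conn.connect("192.168.1.10", 139):
        raise SystemExit("connection/authentication failed")

    print([s.name for s in conn.listShares()])                    # is NSCFILES exported?
    print([f.filename for f in conn.listPath("NSCFILES", "/")])   # and readable?
    conn.close()

If listShares() works but listPath() fails, the problem is likely share or NTFS permissions rather than authentication; the failed DFS referral in the capture (STATUS_NOT_FOUND) is usually harmless on a plain, non-DFS share.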

    Read the article

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of 1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches 2) geographically distributed backups to protect against local disasters 3) testing backups to ensure that they can be restored as needed Generally I take an onsite backup daily, and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do some from the local backup. Should a catastrophic event destroy the servers and local backups, then the offsite weekly tape backup would be used to restore the files. I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I have recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits. I perform test restores to ensure the backups function properly. Should the remote backups experience a dataloss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a dataloss was not noticed for a few weeks(such as a bug/security breach/virus), the data could be restored from an older backup. By doing this, the only scenario that poses a risk to complete data loss would be one where both the local, remote, and servers all experienced a data loss in the same time period. I'm willing to risk that happening since the odds of that trifecta negligibly small, and the data isn't THAT valuable to me. So I hope I have emphasized that I don't need redundancy in my offsite backups because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course there are some that take multiple offsite backups, because the data is so incredibly valuable that they don't even want to risk that trifecta disaster, but in the majority of cases the trifecta disaster is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ!!! I think I have justified my backup strategy and the majority of businesses who use offsite tape backups do not have any additional redundancy beyond what is mentioned above(recovery blocks, par files, historical snapshots). Now I would like to eliminate the use of tapes for offsite backups, and instead use a backup service. Most however are extremely costly for $/gb/month storage. I don't mind paying for transfer bandwidth, but the cost of storage is way to high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but for my scenario, I am simply replacing my offsite backups with such a service. So there is no need for RAID, and absolutely no value in another layer of backups of backups. My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups(backups of my backups) as part of their packages, and thus are more reasonably priced?" NOT my question: "Is this a flawed strategy?" I don't care if you think this is a good strategy or not. I know it pretty standard. Very few people make an extra copy of their offsite backups. They already have local backups that they can use to replace the remote backups if something catastrophic happens at the remote site. Please limit your responses to the question posed. 
Sorry if I seem a little abrasive, but I had some trolls in my last post who didn't read my requirements or my question and tried to go off answering a totally different question. I made it pretty clear, but didn't try to justify my strategy, because I didn't ask whether my strategy was justifiable. So I apologize if this was lengthy; it really didn't need to be, but there are so many trolls here who try to sidetrack questions by responding without addressing the question at hand.

    Read the article

  • collectd does not work

    - by bery
    I have installed collectd-5.0.0 on Fedora12 server and would like to run its service for receiving data from clients. I have enabled network plugin and rddtool plugin as commented: collectd.conf in server: BaseDir "/opt/collectd/var/lib/collectd" LoadPlugin "logfile" LoadPlugin network LoadPlugin rrdtool <Plugin network> Listen "192.168.8.37" "25826" </Plugin> collectd.conf in client: LoadPlugin logfile LoadPlugin cpu LoadPlugin network LoadPlugin memory <Plugin network> Server"192.168.8.37" "25826" </Plugin> collectd.log in server: [2011-08-03 02:36:04] Exiting normally. [2011-08-03 02:36:04] rrdtool plugin: Shutting down the queue thread. [2011-08-03 02:36:04] network plugin: Stopping receive thread. [2011-08-03 02:36:04] network plugin: Stopping dispatch thread. [2011-08-03 02:37:11] Initialization complete, entering read-loop. collectd.log in client: [2011-08-02 17:31:44] Initialization complete, entering read-loop. results thst execute netstat on server: netstat -ulpn | grep 25826 udp 0 0 192.168.8.37:25826 0.0.0.0:* 4744/collectd problem: but there is noting in "/opt/collectd/var/lib/collectd/" on ser yes,I move the port number of "25826" as your propose(But I think this is the default port for coolectd).there is no rdd files recived on server. collectd.log in client collectd [2011-08-03 10:01:36] plugin_read_thread: Handling memory'. [2011-08-03 10:01:36] plugin_read_thread: Handlingcpu'. [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = used; [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = user; [2011-08-03 10:01:36] uc_update: uml/memory/memory-used: ds[0] = 280412160.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-user: ds[0] = 0.100008 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = buffered; [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = nice; [2011-08-03 10:01:36] uc_update: uml/memory/memory-buffered: ds[0] = 344182784.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-nice: ds[0] = 0.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] network plugin: flush_buffer: send_buffer_fill = 1340 [2011-08-03 10:01:36] network plugin: network_send_buffer: buffer_len = 1340 ... [2011-08-03 10:01:36] plugin_read_thread: Next read of the cpu plugin at 1312380106.429064774. collectd.log in server collectd: [2011-08-03 20:18:08] type = network [2011-08-03 20:18:08] type = rrdtool [2011-08-03 20:18:08] network plugin: sockent_open: node = 192.168.8.37; service = 25826; [2011-08-03 20:18:08] fd = 3; calling bind' [2011-08-03 20:18:08] Done parsing/opt/collectd//share/collectd/types.db' [2011-08-03 20:18:08] interval_g = 10; [2011-08-03 20:18:08] timeout_g = 2; [2011-08-03 20:18:08] hostname_g = localhost.localdomain; [2011-08-03 20:18:08] Initialization complete, entering read-loop. It looks like, data is sending but doesn't be recived. Where is the mistake?
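Before digging into the rrdtool side it is worth proving that the client's datagrams reach the server at all, either with tcpdump -i any udp port 25826 or with a quick sketch like the one below (stop the collectd service first so the bind succeeds). The addresses are the ones from the question.

    # udp_probe.py - sketch: listen where the collectd server listens and wait
    # for one packet from the client.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("192.168.8.37", 25826))
    print("waiting for a collectd network packet ...")
    data, addr = sock.recvfrom(65535)
    print("got %d bytes from %s" % (len(data), addr[0]))

If nothing arrives, the problem is on the client or network side; if packets do arrive, the next place to look is the rrdtool plugin configuration (DataDir, write permissions on the BaseDir) and the server log with the logfile plugin's LogLevel raised to debug.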

    Read the article

  • Nagios: NRPE: Unable to read output, Can't find the reason, can you?

    - by Itai Ganot
    I have a Nagios server and a monitored server. On the monitored server: [root@Monitored ~]# netstat -an |grep :5666 tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN [root@Monitored ~]# locate check_kvm /usr/lib64/nagios/plugins/check_kvm [root@Monitored ~]# /usr/lib64/nagios/plugins/check_kvm -H localhost hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm NRPE: Unable to read output [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 [root@Monitored ~]# ps -ef |grep nrpe nagios 21178 1 0 16:11 ? 00:00:00 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d [root@Monitored ~]# On the Nagios server: [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 -c check_kvm NRPE: Unable to read output [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 NRPE v2.14 [root@Nagios ~]# When I check another server in the network using the same command it works: [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.80 -c check_kvm hosts:4 OK:4 WARN:0 CRIT:0 - karmisoft:running ab2c4:running kidumim1:running travel2gether1:running [root@Nagios ~]# Running the check locally using Nagios account: [root@Monitored ~]# su - nagios -bash-4.1$ /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running -bash-4.1$ Running the check remotely from the Nagios server using Nagios account: -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 -c check_kvm NRPE: Unable to read output -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 NRPE v2.14 -bash-4.1$ Running the same check_kvm against a different server in the network using Nagios account: -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.80 -c check_kvm hosts:4 OK:4 WARN:0 CRIT:0 - karmisoft:running ab2c4:running kidumim1:running travel2gether1:running -bash-4.1$ Permissions: -rwxr-xr-x. 1 root root 4684 2013-10-14 17:14 nrpe.cfg (aka /etc/nagios/nrpe.cfg) drwxrwxr-x. 3 nagios nagios 4096 2013-10-15 03:38 plugins (aka /usr/lib64/nagios/plugins) /etc/sudoers: [root@Monitored ~]# grep -i requiretty /etc/sudoers #Defaults requiretty iptables/selinux: [root@Monitored xinetd.d]# service iptables status iptables: Firewall is not running. [root@Monitored xinetd.d]# service ip6tables status ip6tables: Firewall is not running. [root@Monitored xinetd.d]# grep disable /etc/selinux/config # disabled - No SELinux policy is loaded. SELINUX=disabled [root@Monitored xinetd.d]# The command in /etc/nagios/nrpe.cfg is: [root@Monitored ~]# grep kvm /etc/nagios/nrpe.cfg command[check_kvm]=sudo /usr/lib64/nagios/plugins/check_kvm and the nagios user is added on /etc/sudoers: nagios ALL=(ALL) NOPASSWD:/usr/lib64/nagios/plugins/check_kvm nagios ALL=(ALL) NOPASSWD:/usr/lib64/nagios/plugins/check_nrpe The check_kvm is a shell script, looks like that: #!/bin/sh LIST=$(virsh list --all | sed '1,2d' | sed '/^$/d'| awk '{print $2":"$3}') if [ ! 
"$LIST" ]; then EXITVAL=3 #Status 3 = UNKNOWN (orange) echo "Unknown guests" exit $EXITVAL fi OK=0 WARN=0 CRIT=0 NUM=0 for host in $(echo $LIST) do name=$(echo $host | awk -F: '{print $1}') state=$(echo $host | awk -F: '{print $2}') NUM=$(expr $NUM + 1) case "$state" in running|blocked) OK=$(expr $OK + 1) ;; paused) WARN=$(expr $WARN + 1) ;; shutdown|shut*|crashed) CRIT=$(expr $CRIT + 1) ;; *) CRIT=$(expr $CRIT + 1) ;; esac done if [ "$NUM" -eq "$OK" ]; then EXITVAL=0 #Status 0 = OK (green) fi if [ "$WARN" -gt 0 ]; then EXITVAL=1 #Status 1 = WARNING (yellow) fi if [ "$CRIT" -gt 0 ]; then EXITVAL=2 #Status 2 = CRITICAL (red) fi echo hosts:$NUM OK:$OK WARN:$WARN CRIT:$CRIT - $LIST exit $EXITVAL Edit (10/22/13): Following all that, I am now able to get some response from the script: [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm Unknown guests [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 [root@Monitored ~]# /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running [root@Monitored ~]# su - nagios -bash-4.1$ /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm Unknown guests -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 It seems like the problem is some how related to the check_nrpe command or something which is related to the nrpe installation on the server.

    Read the article

  • Custom fail2ban Filter for phpMyAdmin brute-force attempts

    - by Michael Robinson
    In my quest to block excessive failed phpMyAdmin login attempts with fail2ban, I've created a script that logs said failed attempts to a file: /var/log/phpmyadmin_auth.log Custom log The format of the /var/log/phpmyadmin_auth.log file is: phpMyadmin login failed with username: root; ip: 192.168.1.50; url: http://somedomain.com/phpmyadmin/index.php phpMyadmin login failed with username: ; ip: 192.168.1.50; url: http://192.168.1.48/phpmyadmin/index.php Custom filter [Definition] # Count all bans in the logfile failregex = phpMyadmin login failed with username: .*; ip: <HOST>; phpMyAdmin jail [phpmyadmin] enabled = true port = http,https filter = phpmyadmin action = sendmail-whois[name=HTTP] logpath = /var/log/phpmyadmin_auth.log maxretry = 6 The fail2ban log contains: 2012-10-04 10:52:22,756 fail2ban.server : INFO Stopping all jails 2012-10-04 10:52:23,091 fail2ban.jail : INFO Jail 'ssh-iptables' stopped 2012-10-04 10:52:23,866 fail2ban.jail : INFO Jail 'fail2ban' stopped 2012-10-04 10:52:23,994 fail2ban.jail : INFO Jail 'ssh' stopped 2012-10-04 10:52:23,994 fail2ban.server : INFO Exiting Fail2ban 2012-10-04 10:52:24,253 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.6 2012-10-04 10:52:24,253 fail2ban.jail : INFO Creating new jail 'ssh' 2012-10-04 10:52:24,253 fail2ban.jail : INFO Jail 'ssh' uses poller 2012-10-04 10:52:24,260 fail2ban.filter : INFO Added logfile = /var/log/auth.log 2012-10-04 10:52:24,260 fail2ban.filter : INFO Set maxRetry = 6 2012-10-04 10:52:24,261 fail2ban.filter : INFO Set findtime = 600 2012-10-04 10:52:24,261 fail2ban.actions: INFO Set banTime = 600 2012-10-04 10:52:24,279 fail2ban.jail : INFO Creating new jail 'ssh-iptables' 2012-10-04 10:52:24,279 fail2ban.jail : INFO Jail 'ssh-iptables' uses poller 2012-10-04 10:52:24,279 fail2ban.filter : INFO Added logfile = /var/log/auth.log 2012-10-04 10:52:24,280 fail2ban.filter : INFO Set maxRetry = 5 2012-10-04 10:52:24,280 fail2ban.filter : INFO Set findtime = 600 2012-10-04 10:52:24,280 fail2ban.actions: INFO Set banTime = 600 2012-10-04 10:52:24,287 fail2ban.jail : INFO Creating new jail 'fail2ban' 2012-10-04 10:52:24,287 fail2ban.jail : INFO Jail 'fail2ban' uses poller 2012-10-04 10:52:24,287 fail2ban.filter : INFO Added logfile = /var/log/fail2ban.log 2012-10-04 10:52:24,287 fail2ban.filter : INFO Set maxRetry = 3 2012-10-04 10:52:24,288 fail2ban.filter : INFO Set findtime = 604800 2012-10-04 10:52:24,288 fail2ban.actions: INFO Set banTime = 604800 2012-10-04 10:52:24,292 fail2ban.jail : INFO Jail 'ssh' started 2012-10-04 10:52:24,293 fail2ban.jail : INFO Jail 'ssh-iptables' started 2012-10-04 10:52:24,297 fail2ban.jail : INFO Jail 'fail2ban' started When I issue: sudo service fail2ban restart fail2ban emails me to say ssh has restarted, but I receive no such email about my phpmyadmin jail. Repeated failed logins to phpMyAdmin does not cause an email to be sent. Have I missed some critical setup? Is my filter's regular expression wrong? 
Update: added changes from default installation Starting with a clean fail2ban installation: cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local Change email address to my own, action to: action = %(action_mwl)s Append the following to jail.local [phpmyadmin] enabled = true port = http,https filter = phpmyadmin action = sendmail-whois[name=HTTP] logpath = /var/log/phpmyadmin_auth.log maxretry = 4 Add the following to /etc/fail2ban/filter.d/phpmyadmin.conf # phpmyadmin configuration file # # Author: Michael Robinson # [Definition] # Option: failregex # Notes.: regex to match the password failures messages in the logfile. The # host must be matched by a group named "host". The tag "<HOST>" can # be used for standard IP/hostname matching and is only an alias for # (?:::f{4,6}:)?(?P<host>\S+) # Values: TEXT # # Count all bans in the logfile failregex = phpMyadmin login failed with username: .*; ip: <HOST>; # Option: ignoreregex # Notes.: regex to ignore. If this regex matches, the line is ignored. # Values: TEXT # # Ignore our own bans, to keep our counts exact. # In your config, name your jail 'fail2ban', or change this line! ignoreregex = Restart fail2ban sudo service fail2ban restart PS: I like eggs
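A quick way to check the regular expression itself outside fail2ban is fail2ban-regex /var/log/phpmyadmin_auth.log /etc/fail2ban/filter.d/phpmyadmin.conf, or a small sketch that applies the same substitution fail2ban makes for the <HOST> tag (taken from the comment in the filter above):

    # failregex_check.py - sketch: does the filter's failregex match the custom log?
    import re

    HOST = r"(?:::f{4,6}:)?(?P<host>\S+)"     # what fail2ban substitutes for <HOST>
    failregex = r"phpMyadmin login failed with username: .*; ip: <HOST>;"
    pattern = re.compile(failregex.replace("<HOST>", HOST))

    with open("/var/log/phpmyadmin_auth.log") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                print("match, host=%s" % m.group("host"))
            else:
                print("NO MATCH: %s" % line.rstrip())

If every line matches here, the more likely culprit is that the custom log lines carry no timestamp at all; fail2ban generally needs a date it can parse before a failure is counted against maxretry, so prefixing each line with a syslog-style timestamp is worth trying.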

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible or a good idea at all to deploy a Java enterprise web application to a Cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle few hundred users with long but neither CPU nor memory intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud so people are working on enabling such a deployment, on the other hand it doesn't seem to be mature yet and I'm not sure that the cloud is ready for this kind of applications, which differs from the typical cloud-based applications like Twitter. Would you recommend to deploy it to the cloud? What are the pros and cons? The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to a RDBMS and fetches information about Parts from the company's inventory system via a web service. Aside of external users it's used also by few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product and should be reasonably scalable and available though these qualities are only of a medium importance at this stage. I've proposed an architecture consisting of a firewall (FW) and load balancer supporting sticky sessions and https (in the Cloud this would be replaced with EC2's Elastic Load Balancing service and FW on the app. servers, in a physical architecture the load-balancer would be a HW), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't loose his/her long built product) and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance and provide good scalability as long as a single RDBMS can keep with the load, which should be OK for quite a while because most of the operations are done in the memory using a stateful bean and only occasionally stored or retrieved from the DB and the amount of data is low too. A problematic part could be the dependency on the remote inventory system webservice but with good caching of its outputs in the application it should be OK too. Unfortunately I've only vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for few hundred users needs. My rough and mostly unfounded estimate based on actual Amazon offerings is that 1.7GB and a single, 2-core "modern CPU" with speed around 2.5GHz (the High-CPU Medium Instance) should be sufficient for any of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large instance (64b, 7.5GB RAM, 2 cores at 1GHz) So my question is whether such a deployment to the cloud is technically and financially feasible or whether dedicated/VPS servers would be a better option and whether there are some real-world experiences with something similar. Thank you very much! 
/Jakub Holy PS: I've found the JBoss EAP in a Cloud Case Study that shows that it is possible to deploy a real-world Java EE application to the EC2 cloud but unfortunately there're no details regarding topology, instance types, or anything :-(

    Read the article

  • BSOD & System Failure after trying to install new RAM

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read. Basic System Information Let me give a basic introduction on my system. I have a system of following configuration: Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz 3.40GHz RAM: Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2 OS: Windows 7 Ultimate, SP1 Build 7601 HDD: 1 TB Seagate 7200 RPM The Problem It was working fine for about an year. Yesterday I planned to increase my RAM to 16 GB by putting another set of two Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9). I got it from an authorized reseller and also, the RAM was fitted by a service engineer only. After the RAM was fit (all the four), the system failed to start, with an error code of 0x000000f4. The complete information of it is: Problem signature: Problem Event Name: BlueScreen OS Version: 6.1.7601.2.1.0.256.1 Locale ID: 16393 Additional information about the problem: BCCode: f4 BCP1: 0000000000000003 BCP2: FFFFFA8008A39060 BCP3: FFFFFA8008A39340 BCP4: FFFFF800037C8510 OS Version: 6_1_7601 Service Pack: 1_0 Product: 256_1 Files that help describe the problem: C:\Windows\Minidump\093012-13041-01.dmp C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409 If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt Another Problem We first thought that it was the RAM, which caused the issue. So I returned the RAMs and now my computer configuration is exactly how it was the previous day. But, following the removal of the RAM, I also had several crashes after that. One suspicious thing was with an error code c0000134: STOP: c0000135 The program can’t start because %hs is missing from your computer . Try resintalling the program to fix this problem. After reading contents from this, this and this, which were never my case, they didn't help me. But I didn't receive any more STOP c0000134 messages. But this 0x000000f4 keeps on coming. I am writing from the same system and it allows me to work for say, half an hour max. Then I hear a device disconnect sound, the one you hear in Windows 7, when a USB Mass Storage Device is plugged out. Immediately following that, my screen goes blank and I get 0x000000f4 blue screen. Okay, now I am really concerned about my Hard Disk data, but I have no clue if there is a problem with the HDD. My Question What all files do I need to submit for your reference? Can this issue be fixed? I am getting more time if I remove my RAM, clean it and then put it back. Weird! Hope I have given the necessary information to help you guys. Thanks in advance. Minidumps I have uploaded all the Minidump DMP files from C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip Let me know if you face any issues in accessing it. Will be able to share elsewhere. Updates 30-Sep-2012 10:15 AM IST: When I keep the system cover opened, pressed the HDD Cable well, it is allowing me to be on for about half an hour, I guess? Also, I feel that the CPU fan speed is kind of slow. It rotates at around 900 RPM, but the CPU Temperature is not more than 70° C. 30-Sep-2012 10:30 AM IST: My Modem (Beetel 220BX ADSL2+ Router) failed. I have no idea how it is related to this issue, but I thought that I need to document this too. I really have a bad day here. 
30-Sep-2012 11:00 AM IST: System still running fine, with the cabinet cover open, now for about an hour. 30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. Started the system, and it hung after giving the password. After a few minutes, got the same 0x000000f4 error. So, while it is in the upright position, fixed the Hard Disk cable and now it is booting fine. Waiting for more observations and answers.

    Read the article

  • How to export computers from Active Directory to XML using Powershell?

    - by CoDeRs
    I am trying to create a powershell scripts for Remote Desktop Connection Manager using the active directory module. My first thought was get a list of computers in AD and parse them out into XML format similar to the OU structure that is in AD. I have no problem with that, the below code will work just but not how I wanted. EG # here is a the array $OUs Americas/Canada/Canada Computers/Desktops Americas/Canada/Canada Computers/Laptops Americas/Canada/Canada Computers/Virtual Computers Americas/USA/USA Computers/Laptops Computers Disabled Accounts Domain Controllers EMEA/UK/UK Computers/Desktops EMEA/UK/UK Computers/Laptops Outside Sales and Service/Laptops Servers I wanted to have the basic XML structured like this Americas Canada Canada Computers Desktops Laptops Virtual Computers USA USA Computers Laptops Computers Disabled Accounts Domain Controllers EMEA UK UK Computers Desktops Laptops Outside Sales and Service Laptops Servers However if you run the below it does not nest the next string in the array it only restarts the from the beginning and duplicating Americas Canada Canada Computers Desktops Americas Canada Canada Computers Laptops Americas Canada Canada Computers Virtual Computers Americas USA USA Computers Laptops RDCMGenerator.ps1 #Importing Microsoft`s PowerShell-module for administering ActiveDirectory Import-Module ActiveDirectory #Initial variables $OUs = @() $RDCMVer = "2.2" $userName = "domain\username" $password = "Hashed Password+" $Path = "$env:temp\test.xml" $allComputers = Get-ADComputer -LDAPFilter "(OperatingSystem=*)" -Properties Name,Description,CanonicalName | Sort-Object CanonicalName | select Name,Description,CanonicalName $allOUObjects = $allComputers | Foreach {"$($_.CanonicalName)"} Function Initialize-XML{ ##<RDCMan schemaVersion="1"> $xmlWriter.WriteStartElement('RDCMan') $XmlWriter.WriteAttributeString('schemaVersion', '1') $xmlWriter.WriteElementString('version',$RDCMVer) $xmlWriter.WriteStartElement('file') $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$env:userdomain) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('userName',$userName) $xmlWriter.WriteElementString('domain',$env:userdomain) $xmlWriter.WriteStartElement('password') $XmlWriter.WriteAttributeString('storeAsClearText', 'false') $XmlWriter.WriteRaw($password) $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('size','1024 x 768') $xmlWriter.WriteElementString('sameSizeAsClientArea','True') $xmlWriter.WriteElementString('fullScreen','False') $xmlWriter.WriteElementString('colorDepth','32') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 
'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Group ($groupName){ #Start Group $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$groupName) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Server ($computerName, $computerDescription) { #Start Server $xmlWriter.WriteStartElement('server') $xmlWriter.WriteElementString('name',$computerName) $xmlWriter.WriteElementString('displayName',$computerDescription) $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() #Stop Server } Function Close-XML { $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() # finalize the document: $xmlWriter.Flush() $xmlWriter.Close() notepad $path } #Strip out Domain and Computer Name from CanonicalName foreach($OU in $allOUObjects){ $newSplit = $OU.split("/") $rebildOU = "" for($i=1; $i -le ($newSplit.count - 2); $i++){ $rebildOU += $newSplit[$i] + "/" } $OUs += $rebildOU.substring(0,($rebildOU.length - 1)) } #Remove Duplicate OU's $OUs = $OUs | select -uniq #$OUs # get an XMLTextWriter to create the XML $XmlWriter = New-Object System.XMl.XmlTextWriter($Path,$UTF8) # choose a pretty formatting: $xmlWriter.Formatting = 'Indented' $xmlWriter.Indentation = 1 $XmlWriter.IndentChar = "`t" # write the header $xmlWriter.WriteStartDocument() # # 'encoding', 'utf-8' How? 
# # set XSL statements #Initialize Pre-Defined XML Initialize-XML ######################################################### # Start Loop for each OU-Path that has a computer in it ######################################################### foreach ($OU in $OUs){ $totalGroupName = "" #Create / Reset Total OU-Path Completed $OU.split("/") | foreach { #Split the OU-Path into individual OU's $groupName = "$_" #Current OU $totalGroupName += $groupName + "/" #Total OU-Path Completed $xmlWriter.WriteStartElement('group') #Start new XML Group Create-Group $groupName #Call function to create XML Group ################################################ # Start Loop for each Computer in $allComputers ################################################ foreach($computer in $allComputers){ $computerOU = $computer.CanonicalName #Set the computers OU-Path $OUSplit = $computerOU.split("/") #Create the Split for the OU-Path $rebiltOU = "" #Create / Reset the stripped OU-Path for($i=1; $i -le ($OUSplit.count - 2); $i++){ #Start Loop for OU-Path to strip out the Domain and Computer Name $rebiltOU += $OUSplit[$i] + "/" #Rebuild the stripped OU-Path } if ($rebiltOU -eq $totalGroupName){ #Compare the Current OU-Path with the computers stripped OU-Path $computerName = $computer.Name #Set the computer name $computerDescription = $computerName + " - " + $computer.Description #Set the computer Description Create-Server $computerName $computerDescription #Call function to create XML Server } } } ################################################### # Start Loop to close out XML Groups created above ################################################### $totalGroupName.split("/") | foreach { #Split the if ($_ -ne "" ){ $xmlWriter.WriteEndElement() #End Group } } } Close-XML
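The flattened output happens because the script opens a fresh chain of <group> elements from the root for every OU path instead of reusing the groups it has already written. The usual fix is to build the group tree first, keyed by the cumulative path, and only serialise it afterwards. The nesting logic is sketched below in Python/ElementTree purely to show the algorithm, with a trimmed OU list; it is not a drop-in replacement for the PowerShell above.

    # ou_tree.py - sketch: create each group once, keyed by its cumulative path,
    # and attach children to the existing node instead of restarting at the root.
    import xml.etree.ElementTree as ET

    ous = [
        "Americas/Canada/Canada Computers/Desktops",
        "Americas/Canada/Canada Computers/Laptops",
        "Americas/USA/USA Computers/Laptops",
        "Servers",
    ]

    root = ET.Element("file")
    groups = {"": root}                  # cumulative path -> element already built

    for ou in ous:
        path = ""
        for part in ou.split("/"):
            parent = groups[path]
            path = path + "/" + part
            if path not in groups:       # only create a group the first time it is seen
                node = ET.SubElement(parent, "group")
                ET.SubElement(node, "name").text = part
                groups[path] = node

    ET.dump(root)    # Americas appears once, with Canada and USA nested inside it

Translated back to PowerShell, the same idea means either building the document in an XmlDocument, so existing group nodes can be looked up and reused before saving, or sorting the OU paths and, while writing with XmlTextWriter, closing and reopening only the group elements that actually change from one path to the next.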

    Read the article

< Previous Page | 566 567 568 569 570 571 572 573 574 575 576 577  | Next Page >