Search Results

Search found 46104 results on 1845 pages for 'run dialog'.


  • How can I make Banshee re-encode FLAC to Ogg Vorbis when copying to my player?

    - by Michael E
    I have most of my music in FLAC on my large storage device, and would like to automatically re-encode it in Ogg Vorbis when copying it to my portable audio player (Sansa Fuze v2). I have set my Fuze to MTP mode and told Banshee to encode to Ogg Vorbis with quality 4 in the Device Properties dialog for the Fuze (I would use MSC mode, but don't have an encoding option in the device properties when I do that). However, when I copy music to the device, either by dragging it from the music library or by syncing a playlist, the full FLAC files are copied rather than transcoded and written as Oggs. How can I get my Banshee setup re-encoding the audio? If StackExchange supported bonus points, I'd give bonus points for a solution that only re-encoded music that was already losslessly encoded, but I don't think that's possible.

    Read the article

  • Scammers on the loose pretending to be Microsoft

    - by John Paul Cook
    Minutes ago I received a phone call whose caller ID was listed as “Out of area”, which I knew was a bad sign. It was difficult to understand the caller because of his very thick accent. He told me that he was from Microsoft and that my computer was throwing a large number of errors and he was calling to help me. He directed me to press Windows+R to open the Run dialog box, type eventvwr, and then look at the Event Viewer. Within Event Viewer, he instructed me to open Custom Views and then open Administrative...(read more)

    Read the article

  • Webmatrix fails to connect PHP website to MySQL

    - by Roni
    I downloaded the latest versions of Webmatrix and MySQL. I downloaded a PHP-MySQL Connector: http://dev.mysql.com/downloads/connector/php-mysqlnd/ In the "Databases" workspace I pressed the "New Connection" button and chose "MySQL Connection". In the dialog box I filled in all connection details -- it looks like the database was added. But then when I double-click on the database, I get a short error message saying it cannot connect. I tried everything, searched the web... I'm sure it's a very simple question, so whoever can help, I'll be grateful. I think the best solution for me would be if someone could just give me links to download Webmatrix, MySQL, and the Connector, plus instructions on how to install them and then how to connect. This would be the safest way to help me.

    Read the article

  • connecting clients to server with emulator on different computers

    - by prolink007
    I am writing an application that communicates using sockets. I have a server running on one android emulator on a computer, then i have 2 other clients running on android emulators on 2 other computers. I am trying to get the 2 clients to connect to the server. This works when i run the server and clients on the same computer, but when i attempt to do this on the same wifi network and on separate computers it gives me the following error. The client and server code is posted below. A lot is stripped out just to show the important stuff. Also, after the server starts i telnet into the server and run these commands redir add tcp:5000:6000 (i have also tried without doing the redir but it still says the same thing). Then i start the clients and get the error. Thanks for the help! Both the 5000 port and 6000 port are open on my router. And i have windows firewall disabled on the computer hosting the server. 11-27 18:54:02.274: W/ActivityManager(60): Activity idle timeout for HistoryRecord{44cf0a30 school.cpe434.ClassAidClient/school.cpe434.ClassAid.ClassAidClient4Activity} 11-27 18:57:02.424: W/System.err(205): java.net.SocketException: The operation timed out 11-27 18:57:02.454: W/System.err(205): at org.apache.harmony.luni.platform.OSNetworkSystem.connectSocketImpl(Native Method) 11-27 18:57:02.454: W/System.err(205): at org.apache.harmony.luni.platform.OSNetworkSystem.connect(OSNetworkSystem.java:114) 11-27 18:57:02.465: W/System.err(205): at org.apache.harmony.luni.net.PlainSocketImpl.connect(PlainSocketImpl.java:245) 11-27 18:57:02.465: W/System.err(205): at org.apache.harmony.luni.net.PlainSocketImpl.connect(PlainSocketImpl.java:220) 11-27 18:57:02.465: W/System.err(205): at java.net.Socket.startupSocket(Socket.java:780) 11-27 18:57:02.465: W/System.err(205): at java.net.Socket.<init>(Socket.java:314) 11-27 18:57:02.465: W/System.err(205): at school.cpe434.ClassAid.ClassAidClient4Activity.onCreate(ClassAidClient4Activity.java:102) 11-27 18:57:02.474: W/System.err(205): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 11-27 18:57:02.474: W/System.err(205): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2459) 11-27 18:57:02.474: W/System.err(205): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512) 11-27 18:57:02.474: W/System.err(205): at android.app.ActivityThread.access$2200(ActivityThread.java:119) 11-27 18:57:02.474: W/System.err(205): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863) 11-27 18:57:02.474: W/System.err(205): at android.os.Handler.dispatchMessage(Handler.java:99) 11-27 18:57:02.474: W/System.err(205): at android.os.Looper.loop(Looper.java:123) 11-27 18:57:02.486: W/System.err(205): at android.app.ActivityThread.main(ActivityThread.java:4363) 11-27 18:57:02.486: W/System.err(205): at java.lang.reflect.Method.invokeNative(Native Method) 11-27 18:57:02.486: W/System.err(205): at java.lang.reflect.Method.invoke(Method.java:521) 11-27 18:57:02.486: W/System.err(205): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 11-27 18:57:02.486: W/System.err(205): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 11-27 18:57:02.486: W/System.err(205): at dalvik.system.NativeStart.main(Native Method) The server code public class ClassAidServer4Activity extends Activity { ServerSocket ss = null; String mClientMsg = ""; String mClientExtraMsg = ""; Thread myCommsThread = null; public static final int SERVERPORT = 6000; @Override public 
void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView tv = (TextView) findViewById(R.id.textView1); tv.setText("Nothing from client yet"); this.myCommsThread = new Thread(new CommsThread()); this.myCommsThread.start(); } class CommsThread implements Runnable { public void run() { // Socket s = null; try { ss = new ServerSocket(SERVERPORT ); } catch (IOException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } while(true) { try { Socket socket = ss.accept(); connectedDeviceCount++; Thread lThread = new Thread(new ListeningThread(socket)); lThread.start(); } catch (IOException e) { e.printStackTrace(); } } } } class ListeningThread implements Runnable { private Socket s = null; public ListeningThread(Socket socket) { // TODO Auto-generated constructor stub this.s = socket; } @Override public void run() { // TODO Auto-generated method stub while (!Thread.currentThread().isInterrupted()) { Message m = new Message(); // m.what = QUESTION_ID; try { if (s == null) s = ss.accept(); BufferedReader input = new BufferedReader( new InputStreamReader(s.getInputStream())); String st = null; st = input.readLine(); String[] temp = parseReadMessage(st); mClientMsg = temp[1]; if(temp.length > 2) { mClientExtraMsg = temp[2]; } m.what = Integer.parseInt(temp[0]); myUpdateHandler.sendMessage(m); } catch (IOException e) { e.printStackTrace(); } } } } } The client code public class ClassAidClient4Activity extends Activity { //telnet localhost 5554 //redir add tcp:5000:6000 private Socket socket; private String serverIpAddress = "192.168.1.102"; private static final int REDIRECTED_SERVERPORT = 5000; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); try { InetAddress serverAddr = InetAddress.getByName(serverIpAddress); socket = new Socket(serverAddr, REDIRECTED_SERVERPORT); } catch (UnknownHostException e1) { mQuestionAdapter.add("UnknownHostException"); e1.printStackTrace(); } catch (IOException e1) { mQuestionAdapter.add("IOException"); e1.printStackTrace(); } } }

    Read the article

  • Google Chrome won't start after changing hostname

    - by user254473
    I tried to start google chrome in terminal several times, and I keep receiving the following message: ... :ERROR:process_singleton_linux.cc(309)] The profile appears to be in use by another Google Chrome process (8629) on another computer ("previous name of the computer"). Chrome has locked the profile so that it doesn't get corrupted. If you are sure no other processes are using this profile, you can unlock the profile and relaunch Chrome. ... :ERROR:simple_message_box_views.cc(208)] Unable to show a dialog outside the UI thread message loop: Google Chrome - The profile appears to be in use by another Google Chrome process (8629) on another computer ("previous name of the computer"). Chrome has locked the profile so that it doesn't get corrupted. If you are sure no other processes are using this profile, you can unlock the profile and relaunch Chrome. Any suggestions? Thanks in advance.

    Read the article

  • Startup value for Win7

    - by Mike
    Problem at a glance: on Windows 7, Ubuntu One changes its startup value to enabled whenever I run it. More details: if I change the startup value in Control Panel > System and Security > Administrative Tools > System Configuration > Startup to disabled for Ubuntu One (since I don't want it to start when I log on), it successfully stops Ubuntu One from starting at logon. However, if I start Ubuntu One manually, it changes the above startup value back to enabled. Is there a way to prevent Ubuntu One from changing its startup value? I want it to stay disabled. I haven't found anything relevant in Ubuntu One's settings dialog.

    Read the article

  • ComboBox Data Binding

    - by Geertjan
    Let's create a databound combobox, leveraging MVC in a desktop application. The result will be a combobox, provided by the NetBeans ChoiceView, that displays data retrieved from a database. What follows is not much different from the NetBeans Platform CRUD Application Tutorial and you're advised to consult that document if anything that follows isn't clear enough. One interesting thing about the instructions that follow is that they show you're able to create an application where each element of the MVC architecture can be located within a separate module. Start by creating a new NetBeans Platform application named "MyApplication".

    Model: We're going to start by generating JPA entity classes from a database connection. In the New Project wizard, choose "Java Class Library". Click Next. Name the Java Class Library "MyEntities". Click Finish. Right-click the MyEntities project, choose New, and then select "Entity Classes from Database". Work through the wizard, selecting the tables of interest from your database, and naming the package "entities". Click Finish. Now a JPA entity is created for each of the selected tables. In the Project Properties dialog of the project, choose "Copy Dependent Libraries" in the Packaging panel. Build the project. In your project's "dist" folder (visible in the Files window), you'll now see a JAR, together with a "lib" folder that contains the JARs you'll need. In your NetBeans Platform application, create a module named "MyModel", with code name base "org.my.model". Right-click the project, choose Properties, and in the "Libraries" panel, click the Add Dependency button in the Wrapped JARs subtab to add all the JARs from the previous step to the module. Also include "derby-client.jar" or the equivalent driver for your database connection to the module.

    Controler: In your NetBeans Platform application, create a module named "MyControler", with code name base "org.my.controler". Right-click the module's Libraries node, in the Projects window, and add a dependency on "Explorer & Property Sheet API". In the MyControler module, create a class with this content:

        package org.my.controler;

        import org.openide.explorer.ExplorerManager;

        public class MyUtils {

            static ExplorerManager controler;

            public static ExplorerManager getControler() {
                if (controler == null) {
                    controler = new ExplorerManager();
                }
                return controler;
            }

        }

    View: In your NetBeans Platform application, create a module named "MyView", with code name base "org.my.view". Create a new Window Component, in "explorer" view, for example, let it open on startup, with class name prefix "MyView". Add dependencies on the Nodes API and on the Explorer & Property Sheet API. Also add dependencies on the "MyModel" module and the "MyControler" module. Before doing so, in the "MyModel" module, make the "entities" package and the "javax.persistence" packages public (in the Libraries panel of the Project Properties dialog) and make the one package that you have in the "MyControler" module public too.

    Define the top part of the MyViewTopComponent as follows:

        public final class MyViewTopComponent extends TopComponent implements ExplorerManager.Provider {

            ExplorerManager controler = MyUtils.getControler();

            public MyViewTopComponent() {
                initComponents();
                setName(Bundle.CTL_MyViewTopComponent());
                setToolTipText(Bundle.HINT_MyViewTopComponent());
                setLayout(new BoxLayout(this, BoxLayout.PAGE_AXIS));
                controler.setRootContext(new AbstractNode(Children.create(new ChildFactory<Customer>() {
                    @Override
                    protected boolean createKeys(List list) {
                        EntityManager entityManager = Persistence.
                                createEntityManagerFactory("MyEntitiesPU").createEntityManager();
                        Query query = entityManager.createNamedQuery("Customer.findAll");
                        list.addAll(query.getResultList());
                        return true;
                    }
                    @Override
                    protected Node createNodeForKey(Customer key) {
                        Node customerNode = new AbstractNode(Children.LEAF, Lookups.singleton(key));
                        customerNode.setDisplayName(key.getName());
                        return customerNode;
                    }
                }, true)));
                controler.addPropertyChangeListener(new PropertyChangeListener() {
                    @Override
                    public void propertyChange(PropertyChangeEvent evt) {
                        Customer selectedCustomer = controler.getSelectedNodes()[0].getLookup().lookup(Customer.class);
                        StatusDisplayer.getDefault().setStatusText(selectedCustomer.getName());
                    }
                });
                JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING));
                row1.add(new JLabel("Customers: "));
                row1.add(new ChoiceView());
                add(row1);
            }

            @Override
            public ExplorerManager getExplorerManager() {
                return controler;
            }

            ...
            ...
            ...

    Now run the application and you'll see the same as the image with which this blog entry started.

    Read the article

  • AntClassLoader bug exposed by forgetful NetBeans

    - by vbkraemer
    Many users have run into ClassNotFoundExceptions and NoClassDefFoundErrors after working with web services that target GlassFish while developing their projects in NetBeans. The issue usually appears as a dialog similar to this. This can be pretty debilitating. The bug appears to be in the AntClassLoader, which is tickled by the wsimport Ant task that ships with GlassFish 3.1.2. The fix is pretty simple: upgrade the Metro bits that ship in 3.1.2 to bits that have had a patch applied. There are detailed instructions about installing the updated Metro bits onto GlassFish. This upgrade is probably useful for any install of GlassFish 3.1, but it is critically important for folks that develop web services from inside NetBeans and deploy them onto GlassFish 3.1.2.

    Read the article

  • GUI won't load, but the desktop background loads. Can't access the terminal

    - by Mickeysofine
    Having trouble with the Ubuntu GUI. I can log in just fine (see this)... everything looks normal there. But as soon as I put in my password, I get this. I tried a few different troubleshooting keyboard commands (Ctrl+Alt+T, Ctrl+Alt+Delete), and only the latter worked. I can interact with that window just fine, except I am unable to resize or move it. The first time I logged in, I got a dialog box that said, "Ubuntu has experienced an internal error. Send error report?" It doesn't say anything now. Yes, I tried restarting it. Thanks a bunch, Michael. EDIT: Trying to start a guest session leads to the same problem.

    Read the article

  • javax.servlet.ServletException: WriteText method cannot write null text

    - by Learner
    I have created a web application using JSF+Icefaces+Richfaces+Primefaces. It works great while I run it from Eclipse as a project, but when I created its WAR file and deployed it in GlassFish Server, then while rendering a page it throws this exception: javax.servlet.ServletException: WriteText method cannot write null text. I searched but didn't get any good solution. Quick help is highly appreciated.

    Edit 1: I think this would be the relevant part:

        <li class="page_item" id="liMasterSearch">
            <!-- this is for hide (<li class="page_item hide" id="liMasterSearch"> applied to every class) -->
            <h:commandLink value="Search" action="#{masterRenderBean.showSimpleSearch}"></h:commandLink>
        </li>
        <li class="page_item" id="liAdvanceSearch">
            <h:commandLink value="Advance Search" action="#{masterRenderBean.showADVS}"></h:commandLink>
        </li>

    Here you can see two links, (1) Search and (2) Advance Search. When I click on Search, it shows the search page (by rendering -- actually I have included all pages in the master page and render them via the commandLink actions):

        <h:panelGroup rendered="#{not masterRenderBean.simpleSearch}">
            <ui:include src="../../WebPages/SearchPages/MasterSearch.xhtml"></ui:include>
        </h:panelGroup>

    But when I click on the Advance Search link (on which this part should render), the browser shows the above exception:

        <h:panelGroup rendered="#{not masterRenderBean.advs}">
            <ui:include src="../../WebPages/SearchPages/PersonalAdvanceSearch.xhtml"/>
        </h:panelGroup>

    NOTE: Keep in mind that this problem occurs only when deployed; it does not happen when I run the application from Eclipse.

    Edit 2: I found in the server logs that this exception comes from the ace: (ICEfaces) component, and that this portion of code is messing up. Any idea why this is happening?

        <ace:autoCompleteEntry id="txtplaceofbirth" rows="10" autocomplete="false" minChars="2" width="150" value="#{inputPersonal.selectedplcofBirth}" filterMatchMode="none" valueChangeListener="#{inputPersonal.valueChangeEventCity}">
            <f:selectItems value="#{inputPersonal.cities}"/>
        </ace:autoCompleteEntry></h:outputFormat>
Edit #3: Here is the full tack trace of exception [#|2012-11-19T09:55:48.026+0500|SEVERE|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=53;_ThreadName=Thread-2;|java.lang.NullPointerException: WriteText method cannot write null text at org.icefaces.impl.context.DOMResponseWriter.writeText(DOMResponseWriter.java:314) at org.icefaces.impl.context.DOMResponseWriter.writeText(DOMResponseWriter.java:340) at com.sun.faces.renderkit.html_basic.OutputMessageRenderer.encodeEnd(OutputMessageRenderer.java:163) at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:875) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1764) at javax.faces.render.Renderer.encodeChildren(Renderer.java:168) at org.icefaces.impl.renderkit.RendererWrapper.encodeChildren(RendererWrapper.java:49) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:845) at com.sun.faces.renderkit.html_basic.HtmlBasicRenderer.encodeRecursive(HtmlBasicRenderer.java:304) at com.sun.faces.renderkit.html_basic.GroupRenderer.encodeChildren(GroupRenderer.java:105) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:845) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1757) at javax.faces.render.Renderer.encodeChildren(Renderer.java:168) at org.icefaces.impl.renderkit.RendererWrapper.encodeChildren(RendererWrapper.java:49) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:845) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1757) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1760) at org.icefaces.impl.context.DOMPartialViewContext.processPartial(DOMPartialViewContext.java:142) at javax.faces.component.UIViewRoot.encodeChildren(UIViewRoot.java:981) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1757) at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:391) at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:131) at javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:288) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:121) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:594) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1542) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:281) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:161) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:331) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231) at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:849) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:746) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045) at 
com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:722) |#]

    Read the article

  • Ubuntu Software Center is crashing while trying to install psychonauts

    - by GonzoDark
    I am having a problem installing Tim Schafer's epic platform video game Psychonauts through the Ubuntu Software Center. I bought the game on http://www.humblebundle.com/ and I used the new redeem option introduced in this bundle: "redeem your bundle on the Ubuntu Software Center." When I have downloaded approximately 2.1 GB of Psychonauts, Ubuntu Software Center starts to repeatedly crash and pop up with a new crash report dialog every few seconds (before previous ones are closed, which will crash the computer after a few minutes unless I stop the download). I also have a file-size bug, where Ubuntu Software Center tells me that I have downloaded 880,2 MB out of 133,3 MB. I use the new Ubuntu 12.04 LTS and Ubuntu Software Center version 5.2.2.2 (©2009-2011 Canonical <--- That is also a bug, should be 2012 I guess). I hope someone can help me.

    Read the article

  • Why does Evolution give me multiple reminders for the same event?

    - by Dylan McCall
    I have a bunch of calendars on my Google Calendar account (school & work, personal, etc.), so I have Evolution subscribed to each one of those calendars. When I get a notification for an upcoming (recurring) event, I actually get many notifications for the same event, all at the same time. For example, just now I got 21 notifications for one event. I am using Gnome Shell, where the message tray displays event notifications in its own way. I get the same behaviour on my other computer, also running Ubuntu 11.10 with Gnome Shell. (And it has happened running Unity where Evolution only shows its dialog window with event reminders, but I'm not sure if it has happened quite as intensely). Does someone know why this is happening, and maybe how to fix it? It would be cool to get slightly less panic-inducing event reminders :)

    Read the article

  • Adobe Reader (acroread) 64 bit looks bad

    - by andreas-1724
    I installed acroread on Ubuntu 14.04 64-bit with gdebi (also tried dpkg and via the repository "deb http://archive.canonical.com/ precise partner"). The result looks bad. For example, the Open dialog is out of place, the folder icons are green and the buttons are not rounded. When I start acroread from the command line, I get several messages and warnings:

        Gtk-Message: Failed to load module "overlay-scrollbar"
        Gtk-Message: Failed to load module "unity-gtk-module"
        Gtk-WARNING **: Unable to locate theme engine in module_path: "murrine"
        Gtk-Message: Failed to load module "canberra-gtk-module"

    I remember that I had this problem whenever I used a 64-bit Ubuntu (even Ubuntu 12.04), but not when I used a 32-bit Ubuntu.

    Read the article

  • Sneak Peek: Reports In Visual Studio 2010, WPF & Silverlight

    The DevExpress Reports team has been busy preparing for the DXperience v2010.1 release. I'm happy to report that their hard work has paid off and the new items for v2010.1 look fantastic. Complete Support for Visual Studio 2010: The XtraReports Suite now fully supports the new Microsoft IDE, providing the same feature set as for the previous Visual Studio versions: create new reports using templates available through the Add New Item dialog; use the fully-integrated report designer with native...

    Read the article

  • Upgrading Ubuntu to 12.04 stuck

    - by Chris Britt
    While upgrading from 11.10 to 12.04, I'm stalled at the step: Installing new version of config file /etc/init/udevtrigger.conf. The Terminal in the Distribution Upgrade window is expanded, but there is no prompt for user input. I've hit Enter a few times but it doesn't change anything except add a blank line after the last print statement. It's been sitting for ~6 hours at this stage. No other windows or dialog boxes are open. The upgrade has progressed through "Preparing to Upgrade", "Setting new software channels", and "Getting new packages". Under the progress bar (which estimates 1 day 2 hours remaining, and has for some time), it says Preparing to configure libpcap0.8. I'd take a screenshot, but it won't let me save the file anywhere. The progress bar is about 1/3 complete.

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part, I shared with you how a broad-based transformation makes cloud more than a technology initiative; in this section I will describe how it requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology.

    People: Most IT organizations have a fairly complex organizational structure. There are different groups, managing different pieces of the puzzle, and yet, they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrators, DBAs and middleware admin worlds – resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.

    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn't received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of cloud management software extremely critical. For cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

    Broad and Deep Solution: It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization.
    While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

    Run and Manage Your Mission Critical Applications Efficiently: It should not only be able to run your mission critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around.

    Automated, Integrated Set of Cloud Management Capabilities: At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.

    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private, hybrid, infrastructure, platform and application clouds. It is no coincidence therefore that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of Enterprise Cloud. Stay Tuned.

    Read the article

  • No Sound via HDMI

    - by Goony Hill
    I have Ubuntu installed on an Acer Aspire Revo 3700 (Intel Atom processor with Nvidia Ion graphics), which is plugged into a Celcus 32 inch TV via HDMI (1080p). The video driver shows as an Nvidia one which I can select. I have set the sound to play via HDMI and the output to HDMI but get no sound. I have tried a Sony 1080i TV with the box but get erratic results with the graphics; the sound, however, picks up straight away, that is, there is no need to select it. The graphics on the Celcus TV work but I get a dialog box showing loads of different resolutions and frequencies which I have to close manually; these appear to be attempts to set different resolutions for the TV. Am I missing some sort of screen/sound driver? If so, does anyone know what might support the Celcus 32 inch (1080p) TV?

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template; they were/are lovely to write, such fun ... not! It's not fun writing them by hand but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder: no more vi or notepad sessions, a friendly drag and drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create the dynamic SQL statements; it's just not so well documented right now, so in lieu of doc updates here's the skinny.

    If you check out the 10g process to create dynamic SQL in the docs, you need to create a data trigger function where you assign the dynamic SQL to a global variable that's matched in your report SQL. In 11g, the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple PL/SQL package with the 'beforedata' function trigger.

    Spec:

        create or replace PACKAGE BIREPORTS AS
          whereCols varchar2(2000);
          FUNCTION beforeReportTrig return boolean;
        end BIREPORTS;

    Body:

        create or replace PACKAGE BODY BIREPORTS AS
          FUNCTION beforeReportTrig return boolean AS
          BEGIN
            whereCols := ' and d.department_id = 100';
            RETURN true;
          END beforeReportTrig;
        END BIREPORTS;

    You'll notice the additional where clause (whereCols - declared as a public variable) is hard coded. I'll cover parameterizing that in my next post. If you can not wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BIP data model definition.

    1. Create a new data model and go ahead and create your query(s) as you would normally.

    2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon, e.g. &whereCols.

        select d.DEPARTMENT_NAME, ...
        from   "OE"."EMPLOYEES" e, "OE"."DEPARTMENTS" d
        where  d."DEPARTMENT_ID" = e."DEPARTMENT_ID" &whereCols

    Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code.

    3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example 'BIREPORTS'. Then hit the update button to get BIP to fetch the valid functions. Select the BEFOREREPORTTRIG function (or your name) and shuttle it across.

    4. Save your data model and now test it. For now, you can update the where clause via the PL/SQL package. Next time ... parameterizing the dynamic clause.
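    For readers who can't wait for that follow-up post, here is a minimal sketch of one way the hard-coded clause might be parameterized, based on the 10g data-template approach the post points to. This is not from the post itself: it assumes a report parameter (hypothetically named p_dept_id) and assumes that BI Publisher populates a matching public package variable before the before-data trigger fires, so verify against the 10g docs before relying on it.

        -- Hypothetical parameterized variant (sketch only, not from the original post).
        -- Assumes a report parameter named p_dept_id whose value BI Publisher copies
        -- into the matching public package variable before beforeReportTrig runs.
        create or replace PACKAGE BIREPORTS AS
          p_dept_id NUMBER;                -- assumed to mirror the report parameter
          whereCols varchar2(2000);
          FUNCTION beforeReportTrig return boolean;
        end BIREPORTS;
        /

        create or replace PACKAGE BODY BIREPORTS AS
          FUNCTION beforeReportTrig return boolean AS
          BEGIN
            IF p_dept_id IS NOT NULL THEN
              whereCols := ' and d.department_id = ' || p_dept_id;
            ELSE
              whereCols := ' and 1=1';     -- same default used in the data model
            END IF;
            RETURN true;
          END beforeReportTrig;
        END BIREPORTS;
        /

    The query in the data model stays exactly as above; only the value assigned to whereCols changes at runtime.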

    Read the article

  • MonoDevelop 2.4 released

    Today we are releasing MonoDevelop 2.4, which includes plenty of new features and improvements. The release highlights are: Workbench usability improvements, which include a new Navigate to Symbol dialog, Solution and Class pad zooming, better pad layout, and many other look & feel improvements. Improvements in project management, with support for standalone assembly projects, and external console support for Windows and Mac. Extensive text editor improvements, including in-line search,...

    Read the article

  • Greenfoot project is read-only

    - by AzharHafiz.com
    I received this message when starting Greenfoot on Ubuntu 11.10. I'm a newbie and not sure where the file (greenfoot) is located. How and where do I change the permission? The message reads: The project is read-only. You will not be able to create objects or execute methods. Either the access rights of the project directory are set as 'read only' for you, or the whole file system is not writable (is it a CD?). To fully use this project, you must ensure that it is on a writable file system (usually your hard disk), and that you have write permission in the project directory and each file within it. This can often be accomplished by choosing "save as" from the Project menu after closing this dialog.

    Read the article

  • SSMS Slow To Save

    - by Bunch
    When using SSMS 2008 I found it to be really slow when trying to save a file, even a small one with just some simple SELECT statements in it. The symptoms were the flashing save-disk icons in the lower right corner, and the Save File dialog would hang each time I picked a location (e.g. Libraries, a folder). This was the first time I had seen anything like this where it was really, really slow. It turned out that a server had died last week and I was still mapping a drive to it. SSMS was still trying to connect to it, causing the slowdown (further information here and here). Once I removed that mapping, the save functionality worked like it should. Tags: SQL

    Read the article

  • Release Notes for 10/15/2012

    Below are the release notes from this week's deployment. Improvements and Bug Fixes Added the Action Bar to the project home and documentation tabs. CodePlex now 100% ASP.NET MVC (bye bye Web Forms). Updated publish project banner to improve the project setup experience. Fixed float behavior of action bar pop up dialog. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84

    Read the article

  • Using a mounted NTFS share with nginx

    - by Hoff
    I have set up a local testing VM with Ubuntu Server 12.04 LTS and the LEMP stack. It's kind of an unconventional setup because instead of having all my PHP scripts on the local machine, I've mounted an NTFS share as the document root because I do my development on Windows. I had everything working perfectly up until this morning, now I keep getting a dreaded 'File not found.' error. I am almost certain this must be somehow permission related, because if I copy my site over to /var/www, nginx and php-fpm have no problems serving my PHP scripts. What I can't figure out is why all of a sudden (after a reboot of the server), no PHP files will be served but instead just the 'File not found.' error. Static files work fine, so I think it's PHP that is causing the headache. Both nginx and php-fpm are configured to run as the user www-data: root@ubuntu-server:~# ps aux | grep 'nginx\|php-fpm' root 1095 0.0 0.0 5816 792 ? Ss 11:11 0:00 nginx: master process /opt/nginx/sbin/nginx -c /etc/nginx/nginx.conf www-data 1096 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process www-data 1098 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process root 1130 0.0 0.4 175560 4212 ? Ss 11:11 0:00 php-fpm: master process (/etc/php5/php-fpm.conf) www-data 1131 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1132 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1133 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www root 1686 0.0 0.0 4368 816 pts/1 S+ 11:11 0:00 grep --color=auto nginx\|php-fpm I have mounted the NTFS share at /mnt/webfiles by editing /etc/fstab and adding the following line: //192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=mypasswordhere,gid=33,uid=33 0 0 Where gid 33 is the www-data group and uid 33 is the user www-data. If I list the contents of one of my sites you can in fact see that they belong to the user www-data: root@ubuntu-server:~# ls -l /mnt/webfiles/nTv5-2.0 total 8 drwxr-xr-x 0 www-data www-data 0 Jun 6 19:12 app drwxr-xr-x 0 www-data www-data 0 Aug 22 19:00 assets -rwxr-xr-x 0 www-data www-data 1150 Jan 4 2012 favicon.ico -rwxr-xr-x 0 www-data www-data 1412 Dec 28 2011 index.php drwxr-xr-x 0 www-data www-data 0 Jun 3 16:44 lib drwxr-xr-x 0 www-data www-data 0 Jan 3 2012 plugins drwxr-xr-x 0 www-data www-data 0 Jun 3 16:45 vendors If I switch to the www-data user, I have no problem creating a new file on the share: root@ubuntu-server:~# su www-data $ > /mnt/webfiles/test.txt $ ls -l /mnt/webfiles | grep test\.txt -rwxr-xr-x 0 www-data www-data 0 Sep 8 11:19 test.txt There should be no problem reading or writing to the share with php-fpm running as the user www-data. When I examine the error log of nginx, it's filled with a bunch of lines that look like the following: 2012/09/08 11:22:36 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" 2012/09/08 11:22:39 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET /apc.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" It's bizarre that this was working previously and now all of sudden PHP is complaining that it can't "find" the scripts on the share. Does anybody know why this is happening? 
EDIT I tried editing php-fpm.conf and changing chdir to the following: chdir = /mnt/webfiles When I try and restart the php-fpm service, I get the error: Starting php-fpm [08-Sep-2012 14:20:55] ERROR: [pool www] the chdir path '/mnt/webfiles' does not exist or is not a directory This is a total load of bullshit because this directory DOES exist and is mounted! Any ls commands to list that directory work perfectly. Why the hell can't PHP-FPM see this directory?! Here are my configuration files for reference: nginx.conf user www-data; worker_processes 2; error_log /var/log/nginx/nginx.log info; pid /var/run/nginx.pid; events { worker_connections 1024; multi_accept on; } http { include fastcgi.conf; include mime.types; default_type application/octet-stream; set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; ## Proxy proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 32m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 32 4k; ## Compression gzip on; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; ### TCP options tcp_nodelay on; tcp_nopush on; keepalive_timeout 65; sendfile on; include /etc/nginx/sites-enabled/*; } my site config server { listen 80; access_log /var/log/nginx/$host.access.log; error_log /var/log/nginx/error.log; root /mnt/webfiles/nTv5-2.0/app/webroot; index index.php; ## Block bad bots if ($http_user_agent ~* (HTTrack|HTMLParser|libcurl|discobot|Exabot|Casper|kmccrew|plaNETWORK|RPT-HTTPClient)) { return 444; } ## Block certain Referers (case insensitive) if ($http_referer ~* (sex|vigra|viagra) ) { return 444; } ## Deny dot files: location ~ /\. { deny all; } ## Favicon Not Found location = /favicon.ico { access_log off; log_not_found off; } ## Robots.txt Not Found location = /robots.txt { access_log off; log_not_found off; } if (-f $document_root/maintenance.html) { rewrite ^(.*)$ /maintenance.html last; } location ~* \.(?:ico|css|js|gif|jpe?g|png)$ { # Some basic cache-control for static files to be sent to the browser expires max; add_header Pragma public; add_header Cache-Control "max-age=2678400, public, must-revalidate"; } location / { try_files $uri $uri/ index.php; if (-f $request_filename) { break; } rewrite ^(.+)$ /index.php?url=$1 last; } location ~ \.php$ { include /etc/nginx/fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; } } php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix (/opt/php5). This prefix can be dynamicaly changed by using the ; '-p' argument from the command line. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. ; Relative path can also be used. 
They will be prefixed by: ; - the global prefix if it's been set (-p arguement) ; - /opt/php5 otherwise ;include=etc/fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Note: the default prefix is /opt/php5/var ; Default Value: none pid = /var/run/php-fpm.pid ; Error log file ; Note: the default prefix is /opt/php5/var ; Default Value: log/php-fpm.log error_log = /var/log/php5-fpm/php-fpm.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes ;daemonize = yes ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; Multiple pools of child processes may be started with different listening ; ports and different management options. The name of the pool will be ; used in logs and stats. There is no limitation on the number of pools which ; FPM can handle. Your system will tell you anyway :) ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /opt/php5) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. ;listen = 127.0.0.1:9000 listen = /var/run/php5-fpm.sock ; Set listen(2) backlog. A value of '-1' means unlimited. ; Default Value: 128 (-1 on FreeBSD and OpenBSD) ;listen.backlog = -1 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. 
; Default Values: user and group are set as the running user
;                 mode is set to 0666
;listen.owner = www-data
;listen.group = www-data
;listen.mode = 0666

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
user = www-data
group = www-data

; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives:
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;             pm.start_servers     - the number of children created on startup.
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
; Note: This value is mandatory.
pm = dynamic

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes to be created when pm is set to 'dynamic'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI.
; Note: Used when pm is set to either 'static' or 'dynamic'
; Note: This value is mandatory.
pm.max_children = 50

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 20

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 5

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 35

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
pm.max_requests = 500

; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. By default, the status page shows the following
; information:
;   accepted conn        - the number of request accepted by the pool;
;   pool                 - the name of the pool;
;   process manager      - static or dynamic;
;   idle processes       - the number of idle processes;
;   active processes     - the number of active processes;
;   total processes      - the number of idle + active processes.
;   max children reached - number of times, the process limit has been reached,
;                          when pm tries to start more children (works only for
;                          pm 'dynamic')
; The values of 'idle processes', 'active processes' and 'total processes' are
; updated each second. The value of 'accepted conn' is updated in real time.
; Example output:
;   accepted conn:        12073
;   pool:                 www
;   process manager:      static
;   idle processes:       35
;   active processes:     65
;   total processes:      100
;   max children reached: 1
; By default the status page output is formatted as text/plain. Passing either
; 'html' or 'json' as a query string will return the corresponding output
; syntax. Example:
;   http://www.foo.bar/status
;   http://www.foo.bar/status?json
;   http://www.foo.bar/status?html
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
pm.status_path = /status

; The ping URI to call the monitoring page of FPM. If this value is not set, no
; URI will be recognized as a ping page. This could be used to test from outside
; that FPM is alive and responding, or to
; - create a graph of FPM availability (rrd or such);
; - remove a server from a group if it is not responding (load balancing);
; - trigger alerts for the operating team (24/7).
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
ping.path = /ping

; This directive may be used to customize the response of a ping request. The
; response is formatted as text/plain with a 200 response code.
; Default Value: pong
ping.response = pong

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0

; The timeout for serving a single request after which a PHP backtrace will be
; dumped to the 'slowlog' file. A value of '0s' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_slowlog_timeout = 0

; The log file for slow requests
; Default Value: not set
; Note: slowlog is mandatory if request_slowlog_timeout is set
;slowlog = log/$pool.log.slow

; Set open file descriptor rlimit.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Chroot to this directory at the start. This value must be defined as an
; absolute path. When this value is not set, chroot is not used.
; Note: you can prefix with '$prefix' to chroot to the pool prefix or one
;       of its subdirectories. If the pool prefix is not set, the global prefix
;       will be used instead.
; Note: chrooting is a great security feature and should be used whenever
;       possible. However, all PHP paths will be relative to the chroot
;       (error_log, sessions.save_path, ...).
; Default Value: not set
;chroot =

; Chdir to this directory at the start.
; Note: relative path can be used.
; Default Value: current directory or / when chroot
;chdir = /var/www

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: on highloaded environement, this can cause some delay in the page
;       process time (several ms).
; Default Value: no
;catch_workers_output = yes

; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
; the current environment.
; Default Value: clean env
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp

; Additional php.ini defines, specific to this pool of workers. These settings
; overwrite the values previously defined in the php.ini. The directives are the
; same as the PHP SAPI:
;   php_value/php_flag             - you can set classic ini defines which can
;                                    be overwritten from PHP call 'ini_set'.
;   php_admin_value/php_admin_flag - these directives won't be overwritten by
;                                    PHP call 'ini_set'
; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.
; Defining 'extension' will load the corresponding shared extension from
; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
; overwrite previously defined php.ini values, but will append the new value
; instead.
; Note: path INI options can be relative and will be expanded with the prefix
;       (pool, global or /opt/php5)
; Default Value: nothing is defined by default except the values in php.ini and
;                specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
;php_flag[display_errors] = off
;php_admin_value[error_log] = /var/log/fpm-php.www.log
;php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 32M

php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i
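
Once the pool above is reloaded, a quick way to sanity-check the pm.* settings and the status page enabled by pm.status_path is to poll it and look at the process counts. Below is a minimal Python sketch; the URL is an assumption (it presumes the front-end web server passes /status through to this pool on localhost) and is not part of the pool configuration itself.

import json
import urllib.request

# Assumption: the web server maps /status (pm.status_path above) to this FPM pool.
STATUS_URL = "http://localhost/status?json"

with urllib.request.urlopen(STATUS_URL, timeout=5) as response:
    status = json.loads(response.read().decode("utf-8"))

# These keys match the fields documented in the pool configuration above.
for key in ("pool", "process manager", "active processes", "idle processes",
            "total processes", "max children reached"):
    print(key + ":", status.get(key))

The same idea works for ping.path: a plain GET of /ping should come back with a 200 response whose body is the ping.response value ("pong").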

    Read the article

  • Webcast Q&A: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter

    - by Kellsey Ruppel
    This week we had the fifth webcast in our WebCenter in Action webcast series, "Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter", where customers Giovani Dacumos and Minh Ong from the Los Angeles Department of Building & Safety (LADBS), and Sheetal Paranjpye and Rajiv Desai from Oracle Partner 3Di, shared how Oracle WebCenter is powering LADBS' externally facing website and providing a superior self-service experience for their customers. We asked the speakers to share some of the Q&A dialogue from the session below.

    Giovani Dacumos, Director of Systems, and Minh Ong, LADBS

    Q: Did you run into any issues when integrating all of the different applications together?
    A: Yes. We did have issues integrating secure sign-on between the portal and other legacy applications. We used portlets and iframes to overcome those. This is a new technology for us and we are also learning as we go, so there were a lot of challenges in developing and implementing our vision.

    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit for our end users is ease of use. We've given them a system that provides a new and improved source of information, as well as a very organized flow of transaction processing. It has made our online service very user friendly.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: There was no internal resistance during the implementation, only challenges. As mentioned earlier, this is a new technology for us. We came across issues that needed assistance from Oracle. Working with 3Di and Oracle has helped us tremendously to find solutions to our implementation issues.

    Q: Given the performance, what do you estimate to be the top end capacity of the system?
    A: With the current performance and architecture we have, we are able to support approximately 300-400 concurrent users. We would need more hardware to support additional user load.

    Q: What's the overview or summary of feedback from the users interacting with the site?
    A: LADBS has a wide spectrum of customers, from simple users like homeowners to large construction firms. Anything new that we offer could be a little bit challenging for some, but overall, the customers liked it. They saw a huge improvement in usability.

    Q: Can you describe the impressions about the site before and after the project within LADBS?
    A: The old site was using old technology and it was hard for us to keep building on it as we got more business requirements. It made our application seem a bit complicated. It was confusing for our new customers to use, and we've improved on this with the new site. It's now easier for them to complete their transactions and, at the same time, it allows us to provide more useful information.

    Sheetal Paranjpye and Rajiv Desai, 3Di

    Q: Did you run into any obstacles when implementing the solution?
    A: Yes, we did run into some obstacles. One of the key show stoppers was the issue with portlet-to-portal communication. The GIS viewer (portlet) needed information to be passed to and from Permit LA (the portal), but we were able to get everything configured and up and working quickly!

    Q: Was there a lot of custom work that needed to be done for this particular solution?
    A: We have done some customizations where workflows/task flows are involved.

    Q: What do you think were the keys to success for rolling out WebCenter?
    A: Having a service-oriented architecture and using portlets have been the key areas for rolling out Oracle WebCenter at LADBS. The Oracle WebCenter Content integration gives business users the flexibility to maintain the content themselves, which has really cut down on the reliance on IT, and employee productivity has increased as a result.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action! (Embedded presentation: "Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter" from Oracle WebCenter)

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn't support building Microsoft's own Setup and Deployment projects (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you'll get the following warning:

    The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.

    This is what the problem is all about. MSBuild, which is used to compile your projects, does not understand the proprietary vdproj format that Microsoft defined quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact, the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield, which promises to be more MSBuild friendly and to offer more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon.

    But how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You'll want to build your setup projects after you have successfully compiled the other projects.

    1. Install Visual Studio 2010 on the build server(s).
    2. Open your build process template (remember to branch or copy the xaml file before modifying it!).
    3. Locate the Try to Compile the Project activity.
    4. Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the Run MSBuild for Project activity.
    5. Drop an instance of the WriteBuildMessage activity inside the Handle Standard Output section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: this is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity). Set the Message property to stdOutput.
    6. Drop an instance of the WriteBuildError activity into the Handle Error Output section. Set the Message property to errOutput.
    7. Select the InvokeProcess activity and set the values of its parameters (the parameter values and the finished workflow are shown in screenshots in the original post).

    This will generate the MSI files, but they won't be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly:

    8. Drop a Sequence activity somewhere after the Copy to Drop location activity.
    9. Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers.
    10. Drop a FindMatchingFiles activity in the Sequence activity and set its properties (shown in a screenshot in the original post).
    11. Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers.
    12. Drop an InvokeProcess activity inside the ForEach activity. FileName: "xcopy.exe"; Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation). (A scripted sketch of this copy step follows below; the finished Sequence activity is also shown in a screenshot in the original post.)
    13. Save the build process template and check it in.
    14. Run the build and verify that the MSIs are built and copied to the drop location.
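    The FindMatchingFiles/ForEach/xcopy sequence above simply finds every .msi under the binaries folder and copies it to the drop location. If you want to verify that logic outside the workflow designer, here is a rough Python sketch of the same copy step; both paths are placeholders for illustration, not values taken from the build definition.

    import glob
    import os
    import shutil

    # Placeholder paths (assumptions): in the build these correspond to the
    # Binaries folder and BuildDetail.DropLocation respectively.
    binaries_dir = r"C:\Builds\1\MyProject\MySetupBuild\Binaries"
    drop_location = r"\\buildserver\drops\MySetupBuild"

    # Equivalent of the FindMatchingFiles activity: recursively match every *.msi.
    msi_files = glob.glob(os.path.join(binaries_dir, "**", "*.msi"), recursive=True)

    for msi in msi_files:
        # Equivalent of the per-file xcopy invocation inside the ForEach activity.
        shutil.copy2(msi, drop_location)
        print("Copied {0} to {1}".format(msi, drop_location))

    In the actual build, the recursive match is done by the FindMatchingFiles activity and the copy by the per-file xcopy invocation, so this script is only a stand-in for checking the expected inputs and outputs locally.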
    Note 1: One of the drawbacks of using devenv like this in a team build is that, since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to be rebuilt again. In TFS 2008 this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010 the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you run into this problem: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx

    Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.

    Read the article
