Search Results

Search found 12038 results on 482 pages for 'controller actions'.


  • SCCM - How to make new deployed applications appear in Software Center faster?

    - by icehac
    I am trying to make the process between SCCM deployments and the Software Center (configmgr) faster, if not seamless. Right now, applications generally take about 1-2 hours to populate properly. However, under "Configuration Manager" in the Windows Control Panel there is an "Actions" tab, and generally 5 minutes after running these "Actions" the software will populate inside the Software Center. The downside of this is the user interaction with the "Actions" pane... I can't have a user going through this process when they request a new application that needs to be deployed through SCCM. I have played around with using "net stop ccmexec" and "net start ccmexec" to run all of these "Actions" when the service restarts, but it feels a bit archaic. Does anyone have any suggestions on how to speed this process up? I feel there is something simple I am missing.
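
    If scripting is the route taken, a less heavy-handed variant than bouncing ccmexec is to trigger just the relevant client cycles. The sketch below is an assumption-laden illustration (it relies on the ConfigMgr client's SMS_Client WMI class and the well-known schedule GUIDs for machine policy and application deployment evaluation), not something from the original question:

        # Hypothetical sketch: fire the machine policy and application deployment
        # evaluation cycles on the client via the SMS_Client WMI class.
        $cycles = @(
            "{00000000-0000-0000-0000-000000000021}",  # Machine Policy Retrieval & Evaluation
            "{00000000-0000-0000-0000-000000000121}"   # Application Deployment Evaluation
        )
        foreach ($id in $cycles) {
            Invoke-WmiMethod -Namespace "root\ccm" -Class SMS_Client -Name TriggerSchedule -ArgumentList $id
        }

    Something like this could run as a scheduled task or be invoked remotely, so the user never has to open the Actions tab.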

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new Extended Events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Events objects (Events & Actions) and, at the end, provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL. Can you relate? In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine; for example, SQL:BatchCompleted fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it, and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and Actions that take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map. Event Mapping The table dbo.trace_xe_event_map exists in the master database with the following structure:

        Column_name      Type
        trace_event_id   smallint
        package_name     nvarchar
        xe_event_name    nvarchar

    By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.
        SELECT
            t.trace_event_id,
            t.name [event_class],
            e.package_name,
            e.xe_event_name
        FROM sys.trace_events t
        INNER JOIN dbo.trace_xe_event_map e
            ON t.trace_event_id = e.trace_event_id

    There are a couple of things you’ll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on. We’ve mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined them when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event: you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories: there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. This was a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.) Action Mapping The table dbo.trace_xe_action_map exists in the master database with the following structure:

        Column_name       Type
        trace_column_id   smallint
        package_name      nvarchar
        xe_action_name    nvarchar

    You can find more details by joining this to sys.trace_columns on the trace_column_id field.

        SELECT
            c.trace_column_id,
            c.name [column_name],
            a.package_name,
            a.xe_action_name
        FROM sys.trace_columns c
        INNER JOIN dbo.trace_xe_action_map a
            ON c.trace_column_id = a.trace_column_id

    If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions. Putting it all together If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.
        USE master;
        GO
        SELECT DISTINCT
            tb.trace_event_id,
            te.name AS 'Event Class',
            em.package_name AS 'Package',
            em.xe_event_name AS 'XEvent Name',
            tb.trace_column_id,
            tc.name AS 'SQL Trace Column',
            am.xe_action_name AS 'Extended Events action'
        FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em
            ON te.trace_event_id = em.trace_event_id)
        LEFT OUTER JOIN sys.trace_event_bindings tb
            ON em.trace_event_id = tb.trace_event_id
        LEFT OUTER JOIN sys.trace_columns tc
            ON tb.trace_column_id = tc.trace_column_id
        LEFT OUTER JOIN dbo.trace_xe_action_map am
            ON tc.trace_column_id = am.trace_column_id
        ORDER BY te.name, tc.name

    As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

        USE master;
        GO
        DECLARE @trace_id int
        SET @trace_id = 1
        SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'
            , el.columnid, ec.xe_action_name AS 'action'
        FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
            LEFT OUTER JOIN dbo.trace_xe_event_map AS em
                ON el.eventid = em.trace_event_id)
        LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
            ON el.columnid = ec.trace_column_id
        WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL

    You’ll notice in the output that the list doesn’t include any of the security audit Event Classes; as I wrote earlier, those were not migrated. But wait…there’s more! If this were an infomercial there’d be some obnoxious guy next to me blogging “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?” Needless to say, I’d blog back, in an overly excited way, “You bet I can, obnoxious blogger side-kick!” What I’ve got for you here is an Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
    Sample code: TraceToExtendedEventDDL Installing the procedure Just in case you’re not familiar with installing CLR procedures…once you’ve compiled the assembly you can load it using a script like this:

        -- Context to master
        USE master
        GO
        -- Create the assembly from a shared location.
        CREATE ASSEMBLY TraceToXESessionConverter
        FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
        WITH PERMISSION_SET = SAFE
        GO
        -- Create a stored procedure from the assembly.
        CREATE PROCEDURE CreateEventSessionFromTrace
            @trace_id int,
            @session_name nvarchar(max)
        AS
        EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
        GO

    Enjoy! -Mike
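
    To give a feel for the procedure in action once it is registered (the session name below is just an illustrative choice, not from the post), the call is a one-liner:

        -- Emit DDL for an event session equivalent to trace ID 1 (the default trace)
        EXEC CreateEventSessionFromTrace @trace_id = 1, @session_name = N'ConvertedDefaultTrace';

    The equivalent CREATE EVENT SESSION statement is printed to the Messages tab, ready to be copied into a new query window and executed.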

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of its long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for the transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
            FROM historic_weather
            SELECT TRANSFORM(...)
            USING '/path/to/hive/filters/ncdc_parser.py'
            AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
    Coordinator jobs can take all the same actions as Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling (a sketch appears at the end of this article). The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode; I have to include a hive-default.xml file; I have to include a script file; and the hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:

        workflow.xml
        hive-default.xml (make sure this file contains your metastore connection data)
        ncdc_parse.hql

    Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka Jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run.
    But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: validate our workflow.xml, copy our working directory to HDFS, submit our job to the Oozie server, and run our workflow. Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
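
    As a footnote on the coordinator option mentioned earlier: the workflow above can be scheduled by wrapping it in a coordinator XML. The sketch below is illustrative only -- the name, dates, and daily frequency are assumptions, not part of the original setup:

        <coordinator-app name="WeatherManCoord" frequency="${coord:days(1)}"
                         start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                         xmlns="uri:oozie:coordinator:0.1">
          <action>
            <workflow>
              <!-- points at the workflow application directory in HDFS -->
              <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
            </workflow>
          </action>
        </coordinator-app>

    Submitting with a job.properties that sets oozie.coord.application.path (instead of oozie.wf.application.path) would then run the workflow once a day.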

    Read the article

  • Organising levels / rooms in a MUD-style text based world

    - by Polynomial
    I'm thinking of writing a small text-based adventure game, but I'm not particularly sure how I should design the world from a technical standpoint. My first thought is to do it in XML, designed something like the following. Apologies for the huge pile of XML, but I felt it important to fully explain what I'm doing.

        <level>
          <start>
            <!-- start in kitchen with empty inventory -->
            <room>Kitchen</room>
            <inventory></inventory>
          </start>
          <rooms>
            <room>
              <name>Kitchen</name>
              <description>A small kitchen that looks like it hasn't been used in a while. It has a table in the middle, and there are some cupboards. There is a door to the north, which leads to the garden.</description>
              <!-- IDs of the objects the room contains -->
              <objects>
                <object>Cupboards</object>
                <object>Knife</object>
                <object>Batteries</object>
              </objects>
            </room>
            <room>
              <name>Garden</name>
              <description>The garden is wild and full of prickly bushes. To the north there is a path, which leads into the trees. To the south there is a house.</description>
              <objects>
              </objects>
            </room>
            <room>
              <name>Woods</name>
              <description>The woods are quite dark, with little light bleeding in from the garden. It is eerily quiet.</description>
              <objects>
                <object>Trees01</object>
              </objects>
            </room>
          </rooms>
          <doors>
            <!-- a door isn't necessarily a door. each door has a type, i.e. "There is a <type> leading to..."
                 from and to are references to the rooms that this door joins.
                 direction specifies the direction (N,S,E,W,Up,Down) from <from> to <to> -->
            <door>
              <type>door</type>
              <direction>N</direction>
              <from>Kitchen</from>
              <to>Garden</to>
            </door>
            <door>
              <type>path</type>
              <direction>N</direction>
              <from>Garden</from>
              <to>Woods</to>
            </door>
          </doors>
          <variables>
            <!-- variables set by actions -->
            <variable name="cupboard_open">0</variable>
          </variables>
          <objects>
            <!-- definitions for objects -->
            <object>
              <name>Trees01</name>
              <displayName>Trees</displayName>
              <actions>
                <!-- any actions not defined will show the default failure message -->
                <action>
                  <command>EXAMINE</command>
                  <message>The trees are tall and thick. There aren't any low branches, so it'd be difficult to climb them.</message>
                </action>
              </actions>
            </object>
            <object>
              <name>Cupboards</name>
              <displayName>Cupboards</displayName>
              <actions>
                <action>
                  <!-- requirements make the command only work when they are met -->
                  <requirements>
                    <!-- equivalent of "if(cupboard_open == 1)" -->
                    <require operation="equal" value="1">cupboard_open</require>
                  </requirements>
                  <command>EXAMINE</command>
                  <!-- fail message is the message displayed when the requirements aren't met -->
                  <failMessage>The cupboard is closed.</failMessage>
                  <message>The cupboard contains some batteries.</message>
                </action>
                <action>
                  <requirements>
                    <require operation="equal" value="0">cupboard_open</require>
                  </requirements>
                  <command>OPEN</command>
                  <failMessage>The cupboard is already open.</failMessage>
                  <message>You open the cupboard. It contains some batteries.</message>
                  <!-- assigns is a list of operations performed on variables when the action succeeds -->
                  <assigns>
                    <assign operation="set" value="1">cupboard_open</assign>
                  </assigns>
                </action>
                <action>
                  <requirements>
                    <require operation="equal" value="1">cupboard_open</require>
                  </requirements>
                  <command>CLOSE</command>
                  <failMessage>The cupboard is already closed.</failMessage>
                  <message>You closed the cupboard.</message>
                  <assigns>
                    <assign operation="set" value="0">cupboard_open</assign>
                  </assigns>
                </action>
              </actions>
            </object>
            <object>
              <name>Batteries</name>
              <displayName>Batteries</displayName>
              <!-- by setting inventory to non-zero, we can put it in our bag -->
              <inventory>1</inventory>
              <actions>
                <action>
                  <requirements>
                    <require operation="equal" value="1">cupboard_open</require>
                  </requirements>
                  <command>GET</command>
                  <!-- failMessage isn't required here, it'll just show the usual "You can't see any <blank>." message -->
                  <message>You picked up the batteries.</message>
                </action>
              </actions>
            </object>
          </objects>
        </level>

    Obviously there'd need to be more to it than this. Interaction with people and enemies, as well as death and completion, are necessary additions. Since the XML is quite difficult to work with, I'd probably create some sort of world editor. I'd like to know if this method has any downfalls, and if there's a "better" or more standard way of doing it.

    Read the article

  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent: The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use, while all that you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API: http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html Now, let's do some work. For each of the menu items we're interested in, we need to do the following: Provide enablement and invocation handling in an ActionProvider implementation. Provide appropriate OperationImplementation classes. Add the new classes to the Project Lookup. Make the Actions visible on the Project Node. Run the application and verify the Actions work as you'd like. Here we go: Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API: class CustomerActionProvider implements ActionProvider { @Override public String[] getSupportedActions() { return new String[]{ ActionProvider.COMMAND_RENAME, ActionProvider.COMMAND_MOVE, ActionProvider.COMMAND_COPY, ActionProvider.COMMAND_DELETE }; } @Override public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException { if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) { DefaultProjectOperations.performDefaultRenameOperation( CustomerProject.this, ""); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) { DefaultProjectOperations.performDefaultMoveOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) { DefaultProjectOperations.performDefaultCopyOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) { DefaultProjectOperations.performDefaultDeleteOperation( CustomerProject.this); } } @Override public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException { if ((command.equals(ActionProvider.COMMAND_RENAME))) { return true; } else if ((command.equals(ActionProvider.COMMAND_MOVE))) { return true; } else if ((command.equals(ActionProvider.COMMAND_COPY))) { return true; } else if ((command.equals(ActionProvider.COMMAND_DELETE))) { return true; } return false; } } Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. 
If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below) but when you invoke any of them you'd get an error message because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed. private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyRenaming() throws IOException { } @Override public void notifyRenamed(String nueName) throws IOException { } @Override public void notifyMoving() throws IOException { } @Override public void notifyMoved(Project original, File originalPath, String nueName) throws IOException { } } private final class CustomerProjectCopyOperation implements CopyOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyCopying() throws IOException { } @Override public void notifyCopied(Project prjct, File file, String string) throws IOException { } } private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Also make sure to put the above methods into the Project Lookup. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below: @Override public Lookup getLookup() { if (lkp == null) { lkp = Lookups.fixed(new Object[]{ this, new Info(), new CustomerProjectLogicalView(this), new CustomerCustomizerProvider(this), new CustomerActionProvider(), new CustomerProjectMoveOrRenameOperation(), new CustomerProjectCopyOperation(), new CustomerProjectDeleteOperation(), new ReportsSubprojectProvider(this), }); } return lkp; } Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including for the actions we're dealing with. 
    Make sure the items in bold below are in the "getActions" method of the project node: @Override public Action[] getActions(boolean arg0) { return new Action[]{ CommonProjectActions.newFileAction(), CommonProjectActions.copyProjectAction(), CommonProjectActions.moveProjectAction(), CommonProjectActions.renameProjectAction(), CommonProjectActions.deleteProjectAction(), CommonProjectActions.customizeProjectAction(), CommonProjectActions.closeProjectAction() }; } Run the Application. When you run the application, you should see this: Let's now try out the various actions: Copy. When you invoke the Copy action, you'll see the dialog below. Provide a new project name and location and then the copy action is performed when the Copy button is clicked below: The message you see above, in red, might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab, as shown below: However, note that the message will be shown in red, no matter what the text is, hence you can really only put something like a warning message there. If you have no text at all, it will also look odd. If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar, more complex scenarios. Move. When you invoke the Move action, the dialog below is shown: Rename. The Rename Project dialog below is shown when you invoke the Rename action: I tried it and both the display name and the folder on disk are changed. Delete. When you invoke the Delete action, you'll see this dialog: The checkbox is not checkable in the default scenario, and when the dialog above is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method: private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { private final CustomerProject project; private CustomerProjectDeleteOperation(CustomerProject project) { this.project = project; } @Override public List<FileObject> getDataFiles() { List<FileObject> files = new ArrayList<FileObject>(); FileObject[] projectChildren = project.getProjectDirectory().getChildren(); for (FileObject fileObject : projectChildren) { addFile(project.getProjectDirectory(), fileObject.getNameExt(), files); } return files; } private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) { FileObject file = projectDirectory.getFileObject(fileName); if (file != null) { result.add(file); } } @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation: Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario.
Useful implementations to look at: http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java

    Read the article

  • Heavily customized split view controller in iPad app -- how?

    - by Macatomy
    I was going through some of the early screenshots of the first iPad apps and I came upon this: I'm just wondering, how is this done? Mainly, how has the detail view section of the split view been given a drop shadow and rounded corners, and for lack of better phrasing, how has it been "separated" from the master view (the default split view template has the master and detail view joined together with nothing but a vertical line separating the two)?

    Read the article

  • How to dismiss the table view created in the root view controller in iphone?

    - by Warrior
    I am new to iPhone development. I have created a navigation based application. In the root view controller class I have written the code for parsing a URL and displaying the contents in the table. In the didSelectRowAtIndexPath method I have written code for playing the video.

        NSURL *movieURL = [NSURL URLWithString:@"http://url of movie"];
        if (movieURL) {
            if ([movieURL scheme]) {
                MovietryAppDelegate *appDelegate = (MovietryAppDelegate *)[[UIApplication sharedApplication] delegate];
                [appDelegate initAndPlayMovie:movieURL];
            }
        }

    In the delegate class I have defined the initAndPlayMovie method. I used the Apple "movie player" sample for reference. I want to dismiss the table view before the video is loaded, and have the table view appear back (presented modally) on clicking the "Done" button on the video player. I want my app to work like the YouTube application. Please help me out. Thanks.

    Read the article

  • Does COM interop respect .NET AppDomain boundaries for assembly loading?

    - by Xiaofu
    Here's the core problem: I have a .NET application that is using COM interop in a separate AppDomain. The COM stuff seems to be loading assemblies back into the default domain, rather than the AppDomain from which the COM stuff is being called. What I want to know is: is this expected behaviour, or am I doing something wrong to cause these COM related assemblies to be loaded in the wrong AppDomain? Please see a more detailed description of the situation below... The application consists of 3 assemblies:

        - the main EXE, the entry point of the application.
        - common.dll, containing just an interface IController (in the IPlugin style)
        - controller.dll, containing a Controller class that implements IController and MarshalByRefObject. This class does all the work and uses COM interop to interact with another application.

    The relevant part of the main EXE looks like this:

        AppDomain controller_domain = AppDomain.CreateDomain("Controller Domain");
        IController c = (IController)controller_domain.CreateInstanceFromAndUnwrap("controller.dll", "MyNamespace.Controller");
        result = c.Run();
        AppDomain.Unload(controller_domain);

    The common.dll only contains these 2 things:

        public enum ControllerRunResult{FatalError, Finished, NonFatalError, NotRun}

        public interface IController
        {
            ControllerRunResult Run();
        }

    And the controller.dll contains this class (which also calls the COM interop stuff):

        public class Controller: IController, MarshalByRefObject

    When first running the application, Assembly.GetAssemblies() looks as expected, with common.dll being loaded in both AppDomains, and controller.dll only being loaded into the controller domain. After calling c.Run(), however, I see that assemblies related to the COM interop stuff have been loaded into the default AppDomain, and NOT in the AppDomain from which the COM interop is taking place. Why might this be occurring? And if you're interested, here's a bit of background: Originally this was a 1 AppDomain application. The COM stuff it interfaces with is a server API which is not stable over long periods of use. When a COMException (with no useful diagnostic information as to its cause) occurs from the COM stuff, the entire application has to be restarted before the COM connection will work again. Simply reconnecting to the COM app server results in immediate COM exceptions again. To cope with this I have tried to move the COM interop stuff into a separate AppDomain so that when the mystery COMExceptions occur I can unload the AppDomain in which they occur, create a new one and start again, all without having to manually restart the application. That was the theory anyway...

    Read the article

  • Invoke an AsyncController Action from within another Controller Action?

    - by Luis
    Hi, I'd like to accomplish the following:

        class SearchController : AsyncController
        {
            public ActionResult Index(string query)
            {
                if (!isCached(query))
                {
                    // here I want to asynchronously invoke the Search action
                }
                else
                {
                    ViewData["results"] = Cache.Get("results");
                }
                return View();
            }

            public void SearchAsync()
            {
                // some work
                Cache.Add("results", result);
            }
        }

    I'm planning to make an AJAX 'ping' from the client in order to know when the results are available, and then display them. But I don't know how to invoke the asynchronous Action in an asynchronous way! Thank you very much. Luis
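
    For context, MVC 2's AsyncController convention pairs a void XxxAsync method with a XxxCompleted method, coordinated through AsyncManager. The sketch below is one way the search above could be restructured to fit that pattern; DoSearch is a hypothetical helper, not part of the original code:

        public class SearchController : AsyncController
        {
            // "Search" splits into SearchAsync (starts the work) and
            // SearchCompleted (renders once OutstandingOperations hits zero).
            public void SearchAsync(string query)
            {
                AsyncManager.OutstandingOperations.Increment();
                System.Threading.ThreadPool.QueueUserWorkItem(_ =>
                {
                    var result = DoSearch(query);              // hypothetical helper
                    AsyncManager.Parameters["results"] = result;
                    AsyncManager.OutstandingOperations.Decrement();
                });
            }

            public ActionResult SearchCompleted(object results)
            {
                ViewData["results"] = results;
                return View();
            }

            private object DoSearch(string query)
            {
                return null; // placeholder for the real search/caching work
            }
        }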

    Read the article

  • MVC with cocoa/objective-c

    - by Leonardo
    Hi all, I have a strong J2EE background, and I am trying to move to Objective-C for some desktop/iPhone programming. I used many Java web frameworks with MVC in mind, Spring and Struts etc., so I am used to having a servlet or controller which passes attributes to JSP pages, which form the view. In JSP pages with JSTL you can access these attributes and render them to the screen. In this way controller and view are (in theory) clearly separated. With Xcode, I can easily recognize the controller and the view built with Interface Builder. All the tutorials I found show the controller going and changing labels or text fields directly. So my two questions: it seems to me that there's no separation between the two (controller and view); where am I wrong in that? And is there a way for a controller to pack all objects in a kind of context, in the J2EE way, and have the view read that context? Thanks, Leonardo

    Read the article

  • How to change number of tabs in tabbar controller application ?

    - by hib
    Hi, I am developing an iPhone tab bar application with 5 tabs. I want to show only two tabs at launch time, one of which is "locate me". When the user taps on the "locate me" tab, the other 3 tabs will be shown and can use the current location. I want to do something like Urban Spoon. I am using Interface Builder for all the UI. If anyone has any ideas, suggestions, or links, please share them. Thanks.
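
    One approach that fits what's described, sketched from the standard UITabBarController API (the tabBarController and allTabs names are assumptions for illustration), is to keep the full set of view controllers around and swap the visible subset at runtime:

        // show two tabs at launch (assumes tabBarController is wired up in the
        // nib and allTabs is an NSArray of all 5 view controllers)
        NSArray *initialTabs = [allTabs subarrayWithRange:NSMakeRange(0, 2)];
        [tabBarController setViewControllers:initialTabs animated:NO];

        // later, e.g. when the "locate me" tab has done its work:
        [tabBarController setViewControllers:allTabs animated:YES];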

    Read the article

  • How do you get Client IP Address in Grails controller?

    - by Andrew
    I had code like this in Ruby:

        @clientipaddress = request.env["HTTP_CLIENT_IP"]
        if (@clientipaddress == nil)
          @clientipaddress = request.env["HTTP_X_FORWARDED_FOR"]
        end
        if (@clientipaddress == nil)
          @clientipaddress = request.env["REMOTE_ADDR"]
        end
        if (@clientipaddress != nil)
          comma = @clientipaddress.index(",")
          if (comma != nil && comma >= 0)
            @clientipaddress = @clientipaddress[0, comma]
          end
        end

    It took care of all the possible ways that the IP might show up. For instance, on my local development machine there is no proxy. But in QA and Production the proxies are there and sometimes they provide more than one address. I don't need the exact Groovy syntax, just which methods get me the equivalent of the three different ways I ask for the IP above.
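
    For what it's worth, a Grails controller exposes the underlying HttpServletRequest as request, so a rough Groovy equivalent of the Ruby above (an untested sketch) would be:

        // HTTP_CLIENT_IP -> Client-IP header, HTTP_X_FORWARDED_FOR -> X-Forwarded-For,
        // REMOTE_ADDR -> request.remoteAddr
        def clientIp = request.getHeader('Client-IP') ?:
                       request.getHeader('X-Forwarded-For') ?:
                       request.remoteAddr
        // keep only the first entry if a proxy chain supplied a comma-separated list
        if (clientIp?.contains(',')) {
            clientIp = clientIp.substring(0, clientIp.indexOf(','))
        }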

    Read the article

  • Rails routes direct index action to show action

    - by jspooner
    So I created some rspec_scaffold for an Exercise model and added "map.resource :exercises" to my routes file, and I was surprised when the "/exercises" URL rendered the show action. What the heck? Why doesn't that render the index action?

        rake routes
         new_exercises GET    /exercises/new(.:format)  {:controller=>"exercises", :action=>"new"}
        edit_exercises GET    /exercises/edit(.:format) {:controller=>"exercises", :action=>"edit"}
             exercises GET    /exercises(.:format)      {:controller=>"exercises", :action=>"show"}
                       PUT    /exercises(.:format)      {:controller=>"exercises", :action=>"update"}
                       DELETE /exercises(.:format)      {:controller=>"exercises", :action=>"destroy"}
                       POST   /exercises(.:format)      {:controller=>"exercises", :action=>"create"}
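
    For comparison: singular resource routes (map.resource) deliberately have no index action, since a singular resource means there is only one, so GET maps to show. A plural declaration, if a collection is what's intended, does generate the index route:

        # plural form generates the collection routes, including:
        # exercises GET /exercises(.:format) {:controller=>"exercises", :action=>"index"}
        map.resources :exercises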

    Read the article

  • Get HDD (and NOT Volume) Serial Number on Vista Ultimate 64 bit

    - by TheAgent
    Hi all. I was looking for a way to get the HDD serial number without using WMI, and I found it. The code I found and posted on StackOverflow.com works very well on 32 bit Windows, both XP and Vista. The trouble only begins when I try to get the serial number on 64 bit OSs (Vista Ultimate 64, specifically). The code returns String.Empty, or a space, all the time. Anyone got an idea how to fix this? EDIT: I used the tools Dave Cluderay suggested, with interesting results. Here is the output from DiskId32, on Windows XP SP2 32-bit:

        To get all details use "diskid32 /d"

        Trying to read the drive IDs using physical access with admin rights

        Drive 0 - Primary Controller - - Master drive
        Drive Model Number________________: [MAXTOR STM3160215AS]
        Drive Serial Number_______________: [ 6RA26XK3]
        Drive Controller Revision Number__: [3.AAD]
        Controller Buffer Size on Drive___: 2097152 bytes
        Drive Type________________________: Fixed
        Drive Size________________________: 160041885696 bytes

        Trying to read the drive IDs using the SCSI back door

        Drive 4 - Tertiary Controller - - Master drive
        Drive Model Number________________: [MAXTOR STM3160215AS]
        Drive Serial Number_______________: [ 6RA26XK3]
        Drive Controller Revision Number__: [3.AAD]
        Controller Buffer Size on Drive___: 2097152 bytes
        Drive Type________________________: Fixed
        Drive Size________________________: 160041885696 bytes

        Trying to read the drive IDs using physical access with zero rights

        **** STORAGE_DEVICE_DESCRIPTOR for drive 0 ****
        Vendor Id = []
        Product Id = [MAXTOR STM3160215AS]
        Product Revision = [3.AAD]
        Serial Number = []

        **** DISK_GEOMETRY_EX for drive 0 ****
        Disk is fixed
        DiskSize = 160041885696

        Trying to read the drive IDs using Smart

        Drive 0 - Primary Controller - - Master drive
        Drive Model Number________________: [MAXTOR STM3160215AS]
        Drive Serial Number_______________: [ 6RA26XK3]
        Drive Controller Revision Number__: [3.AAD]
        Controller Buffer Size on Drive___: 2097152 bytes
        Drive Type________________________: Fixed
        Drive Size________________________: 160041885696 bytes

        Hard Drive Serial Number__________: 6RA26XK3
        Hard Drive Model Number___________: MAXTOR STM3160215AS

    And DiskId32 run on Windows Vista Ultimate 64-bit:

        To get all details use "diskid32 /d"

        Trying to read the drive IDs using physical access with admin rights

        Trying to read the drive IDs using the SCSI back door

        Trying to read the drive IDs using physical access with zero rights

        **** STORAGE_DEVICE_DESCRIPTOR for drive 0 ****
        Vendor Id = [MAXTOR S]
        Product Id = [TM3160215AS]
        Product Revision = [3.AA]
        Serial Number = []

        **** DISK_GEOMETRY_EX for drive 0 ****
        Disk is fixed
        DiskSize = 160041885696

        Trying to read the drive IDs using Smart

        Hard Drive Serial Number__________:
        Hard Drive Model Number___________:

    Notice how much less information is returned on Vista, and how the Serial Number is not returned. Also, the other tool, EnumDisk, refers to my hard disks on Vista as "SCSI" as opposed to "ATA" on Windows XP. Any ideas? EDIT 2: I'm posting the results from EnumDisks on Windows XP SP2 32-bit:

        Properties for Device 1
        Device ID: IDE\DiskMAXTOR_STM3160215AS_____________________3.AAD___

        Adapter Properties
        ------------------
        Bus Type       : ATA
        Max. Tr. Length: 0x20000
        Max. Phy. Pages: 0xffffffff
        Alignment Mask : 0x1

        Device Properties
        -----------------
        Device Type     : Direct Access Device (0x0)
        Removable Media : No
        Product ID      : MAXTOR STM3160215AS
        Product Revision: 3.AAD

        Inquiry Data from Pass Through
        ------------------------------
        Device Type: Direct Access Device (0x0)
        Vendor ID  : MAXTOR S
        Product ID : TM3160215AS
        Product Rev: 3.AA
        Vendor Str :

        *** End of Device List

    Read the article

  • collect2: ld returned 1 exit status error in Xcode

    - by user573949
    Hello, I'm getting the error Command /Developer/usr/bin/gcc-4.2 failed with exit code 1, and when the full log is opened, the error is more accurately listed as collect2: ld returned 1 exit status, from this simple Cocoa code:

        #import "Controller.h"

        @implementation Controller

        int skillcheck (int level, int modifer, int difficulty) {
            if (level + modifer >= difficulty) {
                return 1;
            }
            if (level + modifer <= difficulty) {
                return 0;
            }
        }

        int main () {
            skillcheck(10, 2, 10);
        }

        @end

    The .h file is this:

        //
        //  Controller.h
        //
        //  Created by Duo Oratar on 15/01/2011.
        //  Copyright 2011 __MyCompanyName__. All rights reserved.
        //

        #import <Cocoa/Cocoa.h>

        @interface Controller : NSObject {
            int skillcheck;
            int contestcheck;
        }

        @end

    No line was specified that the error came from. Does anyone know what the source of this error is, and more importantly, how to fix it? EDIT: I removed the class, so now I have this:

        //
        //  Controller.m
        //
        //  Created by Duo Oratar on 15/01/2011.
        //  Copyright 2011 __MyCompanyName__. All rights reserved.
        //

        #import "Controller.h"

        int skillcheck (int level, int modifer, int difficulty) {
            if (level + modifer >= difficulty) {
                return 1;
            }
            if (level + modifer <= difficulty) {
                return 0;
            }
        }

        int main () {
            skillcheck(10, 2, 10);
        }

    and for the .h file:

        //
        //  Controller.h
        //
        //  Created by Duo Oratar on 15/01/2011.
        //  Copyright 2011 __MyCompanyName__. All rights reserved.
        //

        #import <Cocoa/Cocoa.h>

    and the log says (thanks to the guy who said how to open it):

        Ld build/Debug/Calculator.app/Contents/MacOS/Calculator normal x86_64
        cd /Users/kids/Desktop/Calculator
        setenv MACOSX_DEPLOYMENT_TARGET 10.6
        /Developer/usr/bin/gcc-4.2 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk -L/Users/kids/Desktop/Calculator/build/Debug -F/Users/kids/Desktop/Calculator/build/Debug -filelist /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/Calculator.LinkFileList -mmacosx-version-min=10.6 -framework Cocoa -o /Users/kids/Desktop/Calculator/build/Debug/Calculator.app/Contents/MacOS/Calculator
        ld: duplicate symbol _main in /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/Controller.o and /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/main.o
        collect2: ld returned 1 exit status
        Command /Developer/usr/bin/gcc-4.2 failed with exit code 1
        ld: duplicate symbol _main in /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/Controller.o and /Users/kids/Desktop/Calculator/build/Calculator.build/Debug/Calculator.build/Objects-normal/x86_64/main.o
        Command /Developer/usr/bin/gcc-4.2 failed with exit code 1

    Read the article

  • Avoiding EXC_BAD_ACCESS when using the delegate pattern

    - by Kenny Winker
    I have a view controller, and it creates a "downloader" object, which has a reference to the view controller (as a delegate). The downloader calls back the view controller if it successfully downloads the item. This works fine as long as you stay on the view, but if you navigate away before the download is complete I get EXC_BAD_ACCESS. I understand why this is happening, but is there any way to check if an object is still allocated? I tried to test using delegate != nil and [delegate respondsToSelector:], but it chokes.

        if (!self.delegate || ![self.delegate respondsToSelector:@selector(downloadComplete:)]) {
            // delegate is gone, go away quietly
            [self autorelease];
            return;
        } else {
            // delegate is still around
            [self.delegate downloadComplete:result];
        }

    I know I could: a) have the downloader objects retain the view controller, or b) keep an array of downloaders in the view controller and set their delegate values to nil when I deallocate the view controller. But I wonder if there is an easier way, where I just test if the delegate address contains a valid object?
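
    For reference, option (b) is commonly implemented by clearing the back-references in dealloc. A minimal sketch, assuming manual retain/release, a hypothetical Downloader class, and a downloaders array holding the active downloader objects:

        - (void)dealloc {
            // stop in-flight downloaders from calling back into a dead controller
            for (Downloader *d in downloaders) {
                d.delegate = nil;
            }
            [downloaders release];
            [super dealloc];
        }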

    Read the article

  • Can a View Controller manage more than 1 nib based view?

    - by Hugo Brynjar
    I have a VC controlling a screen of content that has 2 modes: a normal mode and an edit mode. Can I create a single VC with 2 views, each from separate nibs? In many situations on the iPhone, you have a VC which controls an associated view. Then on a button press or other event, a new VC is loaded and its view becomes the top level view etc. But in this situation, I have 2 modes that I want to use the same VC for, because they are closely related. So I want a VC which can swap in/out 2 views. As per here: http://stackoverflow.com/questions/863321/iphone-how-to-load-a-view-using-a-nib-file-created-with-interface-builder/2683153#2683153 I have found that I can load a VC with an associated view from a nib and then later on load a different view from another nib and make that new view the active view.

        NSArray *nibObjects = [[NSBundle mainBundle] loadNibNamed:@"EditMode" owner:self options:nil];
        UIView *theEditView = [nibObjects objectAtIndex:0];
        self.editView = theEditView;
        [self.view addSubview:theEditView];

    The secondary nib has outlets wired up to the VC like the primary nib. When the new nib is loaded, the outlets are all connected up fine and everything works nicely. Unfortunately, when this edit view is then removed, there doesn't seem to be any elegant way of getting the outlets hooked up again to the (normal mode) view from the original nib. Nib loading and outlet setting seems to be a once-only thing. So, if you want to have a VC that swaps in/out 2 views without creating a new VC, what are the options? 1) You can do everything in code, but I want to use nibs because it makes creating the UI simpler. 2) You have 1 nib for your VC and just hide/show elements using the hidden property of UIView and its subclasses. 3) You load a new nib as described above. This is fine for the new nib, but how do you sort out the outlets when you go back to the original nib? 4) Give up and accept having a 1:1 relationship between VCs and nibs. There is a nib for normal mode, a nib for edit mode, and each mode has a VC that subclasses a common superclass. In the end, I went with 4) and it works, but requires a fair amount of extra work, because I have a model class that I instantiate in normal mode and then have to pass to the edit mode VC, because both modes need access to the model. I'm also using NSTimer and have to start and stop the timer in each mode. It is because of all this shared functionality that I wanted a single VC with 2 nibs in the first place.

    Read the article

  • ASP.NET MVC 2 - do UrlParameter.Optional entries have to be at the end of the route?

    - by David M
    I am migrating a site from ASP.NET MVC 1 to ASP.NET MVC 2. At the moment, the site supports the following routes:

        /{country}/{language}/{controller}/{action}
        /{country}/{controller}/{action}
        /{language}/{controller}/{action}
        /{controller}/{action}

    The formats for country and language are distinguishable by regex and have suitable constraints. In MVC 1, I registered each of these as a separate route - for each of around 20 combinations. In MVC 2, I have been trying to get the same thing working with one route to cover all four cases, using UrlParameter.Optional, but I can't seem to get it working - if I define country and language as both optional, then a URL like /Home/Index doesn't successfully match the route. This is what I am trying to do:

        routes.MapRoute("Default",
            "{country}/{language}/{controller}/{action}",
            new { country = UrlParameter.Optional, language = UrlParameter.Optional,
                  controller = "Home", action = "Index" },
            new { country = COUNTRY_REGEX, language = LANGUAGE_REGEX });

    Is this just impossible because my optionals are at the beginning of the route, or am I just missing something? I can't find any documentation to either tell me what I am doing is impossible or to point me in the right direction.

    Read the article

  • How switch UIViewController by flip from 3 controllers ?

    - by Sergey
    Hello, all. I've got the following problem: I've got 3 UIViewControllers and need to switch between them using UIModalTransitionStyleFlipHorizontal. The following code works fine:

        testController.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
        [self presentModalViewController:testController animated:YES];

    where testController is the next controller to switch to and self is the current controller. Switching to the 3rd controller works fine, but I have a problem going back to the first controller: once I'm back at the first controller, I can't go to the second controller. 1 - 2 - 3 - 1 works fine, but then - 2 - doesn't show. What's the mistake?

    Read the article

  • How do I set a default page in Pylons?

    - by Evgeny
    I've created a new Pylons application and added a controller ("main.py") with a template ("index.mako"). Now the URL http://myserver/main/index works. How do I make this the default page, i.e. the one returned when I browse to http://myserver/ ? I've already added a default route in routing.py:

        def make_map():
            """Create, configure and return the routes Mapper"""
            map = Mapper(directory=config['pylons.paths']['controllers'],
                         always_scan=config['debug'])
            map.minimization = False

            # The ErrorController route (handles 404/500 error pages); it should
            # likely stay at the top, ensuring it can always be resolved
            map.connect('/error/{action}', controller='error')
            map.connect('/error/{action}/{id}', controller='error')

            # CUSTOM ROUTES HERE
            map.connect('', controller='main', action='index')

            map.connect('/{controller}/{action}')
            map.connect('/{controller}/{action}/{id}')

            return map

    I've also deleted the contents of the public directory (except for favicon.ico), following the answer to http://stackoverflow.com/questions/1279403/default-route-doesnt-work Now I just get error 404. What else do I need to do to get such a basic thing to work?
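
    One thing worth checking against the routes above: with map.minimization = False, Routes matches paths literally, so the empty-string route may not match a request for "/". An explicit root route, sketched under that assumption, would look like:

        # match the bare root URL explicitly when minimization is disabled
        map.connect('/', controller='main', action='index')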

    Read the article
