Search Results

Search found 39751 results on 1591 pages for 'add'.


  • Microsoft Ajax Control Toolkit vs. jQuery

    - by Juri
    Hi, we are currently developing a couple of custom ASP.NET server controls and would like to add Ajax support to some of them. Basically there are two options: Microsoft Ajax with the Microsoft Ajax Control Toolkit, or jQuery. I have already worked with the Control Toolkit, writing a complete extender, and it was quite intuitive once you understand the story behind it. But I also like the simplicity of jQuery. So I'd like to hear which one you would go for (advantages/disadvantages of each), considering also that we're mainly dealing with Microsoft technologies. Would you go for the Toolkit, for jQuery, or for both?

    Edit: I just ran some tests and have to admit that at the moment I find the Toolkit better due to the integration. My purpose is mainly to use it on the server controls, so with the Toolkit I have corresponding classes on the server side where I can do something like:

        CalendarExtender toolkitCalendarExtender = new CalendarExtender();
        toolkitCalendarExtender.TargetControlID....
        ...
        this.Controls.Add(toolkitCalendarExtender);

    This is really nice because this way I don't have to deal with rendering predefined JavaScript that I construct somehow as a string inside my custom server control. With jQuery I would have to do exactly that (except for the toolkit Nicolas mentioned, but its support is too weak for use in a professional environment). Thanks a lot.
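
    For illustration, a minimal sketch of the server-side composition pattern described above, assuming a composite control that wires an AjaxControlToolkit CalendarExtender to a child TextBox (the control name, child IDs and Format value are illustrative, not from the question):

        using System.Web.UI;
        using System.Web.UI.WebControls;
        using AjaxControlToolkit;

        // Sketch of a custom server control that attaches a CalendarExtender
        // to one of its child controls entirely from the server side.
        public class DatePickerField : CompositeControl
        {
            protected override void CreateChildControls()
            {
                var textBox = new TextBox { ID = "DateText" };
                Controls.Add(textBox);

                // The extender is just another child control; the page's
                // ScriptManager emits the client-side script for it.
                var extender = new CalendarExtender
                {
                    ID = "DateTextCalendar",
                    TargetControlID = textBox.ID,
                    Format = "yyyy-MM-dd"
                };
                Controls.Add(extender);
            }
        }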

    Read the article

  • Looking for a good world map generation algorithm

    - by FalconNL
    I'm working on a Civilization-like game and I'm looking for a good algorithm for generating Earth-like world maps. I've experimented with a few alternatives, but haven't hit on a real winner yet.

    One option is to generate a heightmap using Perlin noise and add water at a level such that about 30% of the world is land. While Perlin noise (or similar fractal-based techniques) is frequently used for terrain and is reasonably realistic, it doesn't offer much control over the number, size and position of the resulting continents, which I'd like to have from a gameplay perspective. See http://farm3.static.flickr.com/2792/4462870263_ff26c40365_o.jpg for an example (sorry, can't post pictures yet).

    A second option is to start with a randomly positioned one-tile seed (I'm working on a grid of tiles), determine the desired size for the continent, and each turn add a tile that is horizontally or vertically adjacent to the existing continent until you've reached the desired size. Repeat for the other continents. This technique is part of the algorithm used in Civilization 4. The problem is that after placing the first few continents, it's possible to pick a starting location that's surrounded by other continents and thus won't fit the new one. Also, it has a tendency to spawn continents too close together, resulting in something that looks more like a river than continents. See http://farm5.static.flickr.com/4069/4462870383_46e86b155c_o.jpg for an example.

    Does anyone happen to know a good algorithm for generating realistic continents on a grid-based map while keeping control over their number and relative sizes?
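
    For reference, a minimal sketch of the seeded-growth approach described in the second option, written in C# over a boolean land grid (grid size and continent sizes are illustrative; it does not solve the crowding problem the question raises):

        using System;
        using System.Collections.Generic;

        // Grows each continent from a random seed tile by repeatedly adding a
        // tile that is orthogonally adjacent to the continent built so far.
        // Assumes the grid has enough free water tiles for the requested sizes.
        class ContinentGrower
        {
            static readonly Random Rng = new Random();

            public static bool[,] Generate(int width, int height, int[] continentSizes)
            {
                var land = new bool[width, height];
                foreach (int size in continentSizes)
                {
                    var frontier = new List<(int x, int y)>();

                    // Pick a random water tile as the seed.
                    int sx, sy;
                    do { sx = Rng.Next(width); sy = Rng.Next(height); } while (land[sx, sy]);
                    land[sx, sy] = true;
                    frontier.Add((sx, sy));

                    for (int added = 1; added < size && frontier.Count > 0; added++)
                    {
                        // Take a random tile of the continent and try to expand from it.
                        var (x, y) = frontier[Rng.Next(frontier.Count)];
                        var candidates = new List<(int, int)>();
                        foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                        {
                            int nx = x + dx, ny = y + dy;
                            if (nx >= 0 && nx < width && ny >= 0 && ny < height && !land[nx, ny])
                                candidates.Add((nx, ny));
                        }

                        if (candidates.Count == 0)
                        {
                            // Dead end: forget this tile and try again without counting it.
                            frontier.Remove((x, y));
                            added--;
                            continue;
                        }

                        var next = candidates[Rng.Next(candidates.Count)];
                        land[next.Item1, next.Item2] = true;
                        frontier.Add(next);
                    }
                }
                return land;
            }
        }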

    Read the article

  • Send Data Using the WebRequest Class to DotNetOpenAuth website

    - by Denis
    I am trying to send data to a DotNetOpenAuth website as described here: http://msdn.microsoft.com/en-us/library/debx8sh9.aspx The sender receives a (500) Internal Server Error. The same code works fine against a blank website without DotNetOpenAuth. Should I tweak something? Here is the exception:

        System.ArgumentNullException was unhandled by user code
        Message="Value cannot be null.\r\nParameter name: key"
        Source="mscorlib"
        ParamName="key"
        StackTrace:
            at System.ThrowHelper.ThrowArgumentNullException(ExceptionArgument argument)
            at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
            at System.Collections.Generic.Dictionary`2.Add(TKey key, TValue value)
            at DotNetOpenAuth.OAuth.ChannelElements.OAuthChannel.ReadFromRequestCore(HttpRequestInfo request) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\OAuth\ChannelElements\OAuthChannel.cs:line 145
            at DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestInfo httpRequest) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\Messaging\Channel.cs:line 372
            at DotNetOpenAuth.OAuth.ServiceProvider.ReadRequest(HttpRequestInfo request) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\OAuth\ServiceProvider.cs:line 222

    The exception occurs on the last line of this code (the snippet is cut off as posted):

        private void context_AuthenticateRequest(object sender, EventArgs e)
        {
            // Don't read OAuth messages directed at the OAuth controller or else we'll fail nonce checks.
            if (this.IsOAuthControllerRequest())
            {
                return;
            }

            if (HttpContext.Current.Request.HttpMethod != "HEAD")
            {
                // workaround: avoid involving OAuth for HEAD requests.
                IDirectedProtocolMessage incomingMessage = OAuthServiceProvider.ServiceProvider.ReadRequest(new HttpRequestInfo(this.application.Context.Request));
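
    For context, a minimal sketch of the sending side being described (the WebRequest POST pattern from the linked MSDN article); the URL and form field names are placeholders:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class PostExample
        {
            static void Main()
            {
                // Placeholder URL for the DotNetOpenAuth-hosted page being posted to.
                WebRequest request = WebRequest.Create("http://localhost/receiver.aspx");
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";

                byte[] body = Encoding.UTF8.GetBytes("field1=value1&field2=value2");
                request.ContentLength = body.Length;
                using (Stream stream = request.GetRequestStream())
                {
                    stream.Write(body, 0, body.Length);
                }

                using (WebResponse response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }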

    Read the article

  • Dynamics CRM Customer Portal Accelerator Installation

    - by saturdayplace
    (I've posted this question on the codeplex forums too, but have yet to get a response.) I've got an on-premise installation of CRM and I'm trying to hook the portal to it. My connection string in web.config:

        <connectionStrings>
          <add name="Xrm" connectionString="Authentication Type=AD; Server=http://myserver:myport/MyOrgName; User ID=mydomain\crmwebuser; Password=thepassword" />
        </connectionStrings>

    And my membership provider:

        <membership defaultProvider="CustomCRMProvider">
          <providers>
            <add connectionStringName="Xrm" applicationName="/" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="true" passwordFormat="Hashed" minRequiredPasswordLength="1" minRequiredNonalphanumericCharacters="0" name="CustomCRMProvider" type="System.Web.Security.SqlMembershipProvider" />
          </providers>
        </membership>

    Now, I'm super new to MS style web development, so please help me if I'm missing something. In Visual Studio 2010, when I go to Project ASP.NET Configuration it launches the Web Site Administration Tool. When I click the Security tab there, I get the following error:

        There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store.
        The following message may help in diagnosing the problem: An error occurred while attempting to initialize a System.Data.SqlClient.SqlConnection object. The value that was provided for the connection string may be wrong, or it may contain an invalid syntax.
        Parameter name: connectionString

    I can't see what I'm doing wrong here. Does the user mydomain\crmwebuser need certain permissions in the SQL database, or somewhere else?

    edit: On the home page of the Web Site Administration Tool, I have the following:

        Application: /
        Current User Name: MACHINENAME\USERACCOUNT

    which is obviously a different set of credentials than mydomain\crmwebuser. Is this part of the problem?

    Read the article

  • WPF DataGrid HeaderTemplate Mysterious Padding

    - by Jake Wharton
    I am placing a single button with an image in the header of a column of a DataGrid. The cell template is also just a simple button with an image.

        <my:DataGridTemplateColumn>
          <my:DataGridTemplateColumn.HeaderTemplate>
            <DataTemplate>
              <Button ToolTip="Add New Template" Name="AddNewTemplate" Click="AddNewTemplate_Click">
                <Image Source="../Resources/add.png"/>
              </Button>
            </DataTemplate>
          </my:DataGridTemplateColumn.HeaderTemplate>
          <my:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
              <Button ToolTip="Edit Template" Name="EditTemplate" Click="EditTemplate_Click" Tag="{Binding}">
                <Image Source="../Resources/pencil.png"/>
              </Button>
            </DataTemplate>
          </my:DataGridTemplateColumn.CellTemplate>
        </my:DataGridTemplateColumn>

    When rendered, the header has approximately 10-15px of padding on the right side only, which obviously causes the cells to render at that width, leaving the cell button with empty space on both sides. Being a pixel-perfectionist, this annoys the hell out of me. I had initially thought that it was space for the arrows displayed when sorted, but I have sorting disabled both on the entire DataGrid and explicitly on the column. Here's an image:

    I assume this is padding from whatever is the parent element of the button. Does anyone know a way to eliminate it?

    Read the article

  • iPhone TableView alternative options for Check Mark Accessory

    - by jimsis
    I'm looking for the best approach to this problem. In my table view I have a list of options from which you can select one and only one. The problem is that the selection to choose is not obvious without displaying more details about the option. If I use the disclosure indicator or button for the extra detail, I lose the checkmark functionality.

    In searching around I see some have used the cell image as a workaround, and others have created a custom disclosure button that looks like a checkmark instead of using the standard one. I haven't seen this done, but is it viable (HIG-wise) to add a 'more info' button in the cell to launch the next table view? My own thought was to use a disclosure indicator and, on the second view, add a 'selectMe' button in the navigation bar (where the edit button usually is). I could probably manage to code any of the above; I'm just asking for information on what is the best (HIG) way.

    Example:

        Option 1
        Option 2
        Option 3 (x)
        Option 4

    where x is the checked one. But in order to know which is the best choice you need to see:

        Option 3 (Header)
        Option 3-a
        Option 3-b
        Option 3-c
        Option 3-d

    where even at this level option 3-c might have additional information. Any guidance you can provide would be appreciated.

    Read the article

  • autoresizingMask changes the size of the UILabel being drawn, just by being set

    - by Kojiro
    I have some custom UITableViewCells that are made programmatically as needed, and I want these to resize. However, when I add autoresizingMasks to the UILabels in the cells, they all seem to stretch wider while anchoring to the left side.

        // This works fine
        UILabel *aField = [[UILabel alloc] initWithFrame:CGRectMake(60, 2, tableView.frame.size.width - 83, 21)];
        UILabel *bField = [[UILabel alloc] initWithFrame:CGRectMake(60, 20, tableView.frame.size.width - 154, 21)];
        UILabel *cField = [[UILabel alloc] initWithFrame:CGRectMake(0, 2, tableView.frame.size.width, 21)];
        UILabel *dField = [[UILabel alloc] initWithFrame:CGRectMake(tableView.frame.size.width - 116, 11, 93, 21)];
        UILabel *eField = [[UILabel alloc] initWithFrame:CGRectMake(tableView.frame.size.width - 116, 11, 93, 21)];

        // But when I add this, it draws like the tableview is actually much wider than it really is
        aField.autoresizingMask = UIViewAutoresizingFlexibleWidth;
        bField.autoresizingMask = carrierField.autoresizingMask;
        cField.autoresizingMask = UIViewAutoresizingFlexibleWidth;
        dField.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin;
        eField.autoresizingMask = priceField.autoresizingMask;

    So when the bottom section of code doesn't exist, everything works as expected, but when it does, a lot of the labels start falling off the right side or being stretched to where their centers are far to the right. Am I overlooking something simple?

    Read the article

  • Help/Questions About New Team Foundation Server 2010 Installation

    - by user579218
    Hello. Before starting down the TFS2010 installation process, I have a few questions I'm hoping the community can help me with. We're planning on a single-server installation of TFS2010. Initially, we want version/source control and build services, but not reporting or SharePoint. We may add reporting and SharePoint capabilities later. Our environment will be Windows Server 2008 R2 (x64), SQL Server 2008 R2 (x64), Office 2010 (x86), Visual Studio 6 and 2010, and, of course, Team Foundation Server 2010.

    Can I install TFS2010 on a server that is on our domain? It's not a domain controller, it's just a member server on the domain. Should I install TFS2010 before or after putting the server on the domain?

    We have six developers that will be logging into their local development computers (which are also on the same domain) using their domain user accounts. Do I add each domain user to the TFS2010 server's security groups? If so, which one(s)?

    Can I or should I use a domain user account as the TFS2010 service account? Or should I just use Network Service? The TFS2010 install guide notes that none of the service accounts should belong to the Administrators security group, so which security group(s) are recommended for the service account(s)?

    We're planning on using a local instance of SQL Server 2008 R2 Standard with TFS2010. What service account should we use? Should we use the same domain account as TFS2010, or Local System, or something else? The TFS2010 install guide isn't very specific on this.

    Since we're planning on this server being both the version/source control and build server, should we install our development environments (VS6, VS2010, Access 2010) before installing TFS2010? Or does it matter?

    Thanks in advance for answering these questions.

    Read the article

  • asp.net mvc custom membership provider - define application

    - by ile
    I created a custom membership provider for ASP.NET MVC applications and it all works fine except for one thing: when logged in to my application, I am also logged in to all other ASP.NET MVC applications that I run using Visual Studio. I suppose this data is being pulled from cache, because when I log out and try to log in again in the other application, I'm rejected. In web.config I added applicationName in order to solve this, but it didn't work:

        <membership defaultProvider="SAMembershipProvider" userIsOnlineTimeWindow="15">
          <providers>
            <clear/>
            <add name="SAMembershipProvider"
                 type="ShinyAnt.Membership.SAMembershipProvider, ShinyAnt"
                 connectionStringName="ShinyAntConnectionString"
                 applicationName="MyApp" />
          </providers>
        </membership>

        <roleManager defaultProvider="SARoleProvider" enabled="true" cacheRolesInCookie="true">
          <providers>
            <clear/>
            <add name="SARoleProvider"
                 type="ShinyAnt.Membership.SARoleProvider"
                 connectionStringName="ShinyAntConnectionString"
                 applicationName="MyApp" />
          </providers>
        </roleManager>

    Is there a method that I forgot to implement that deals with this problem, or is it something else?

    Read the article

  • Help me understand these generic method warnings

    - by Raj
    Folks, I have a base class, say:

        public class BaseType {
            private String id;
            ...
        }

    and then three subclasses:

        public class TypeA extends BaseType { ... }
        public class TypeB extends BaseType { ... }
        public class TypeC extends BaseType { ... }

    I have a container class that maintains lists of objects of these types:

        public class Container {
            private List<TypeA> aList;
            private List<TypeB> bList;
            private List<TypeC> cList;
            // finder method goes here
        }

    And now I want to add a finder method to Container that will find an object from one of the lists. The finder method is written as follows:

        public <T extends BaseType> T find( String id, Class<T> clazz ) {
            final List<T> collection;
            if( clazz == TypeA.class ) {
                collection = (List<T>)aList;
            } else if( clazz == TypeB.class ) {
                collection = (List<T>)bList;
            } else if( clazz == TypeC.class ) {
                collection = (List<T>)cList;
            } else
                return null;

            for( final BaseType value : collection ) {
                if( value.getId().equals( id ) ) {
                    return (T)value;
                }
            }
            return null;
        }

    My question is this: if I don't add all the casts to T in my finder above, I get compile errors. I think the compiler should be able to infer the types based on the parametrization of the generic method. Can anyone explain this? Thanks. -Raj

    Read the article

  • Database with "Open Schema" - Good or Bad Idea?

    - by Claudiu
    The co-founder of Reddit gave a presentation on issues they had while scaling to millions of users. A summary is available here. What surprised me is point 3:

        Instead, they keep a Thing Table and a Data Table. Everything in Reddit is a Thing: users, links, comments, subreddits, awards, etc. Things keep common attribute like up/down votes, a type, and creation date. The Data table has three columns: thing id, key, value. There’s a row for every attribute. There’s a row for title, url, author, spam votes, etc. When they add new features they didn’t have to worry about the database anymore. They didn’t have to add new tables for new things or worry about upgrades.

    This seems like a terrible idea to me, but it seems to have worked out for Reddit. Is it a good idea in general, though? Or is it a peculiarity of Reddit that happened to work out for them?
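
    For illustration, a minimal sketch of what this key/value ("entity-attribute-value") layout looks like when mapped to code; the class names and columns below are assumptions for the example, not Reddit's actual schema:

        using System.Collections.Generic;

        // "Thing" carries only the attributes shared by every entity type.
        class Thing
        {
            public long Id;
            public string Type;          // e.g. "link", "comment", "user"
            public int UpVotes;
            public int DownVotes;
            public System.DateTime CreatedAt;
        }

        // Every other attribute becomes one row in the Data table.
        class DataRow
        {
            public long ThingId;
            public string Key;           // e.g. "title", "url", "author"
            public string Value;
        }

        static class EavExample
        {
            // Collapses the per-attribute rows of one Thing back into a dictionary,
            // which is roughly what the application layer has to do on every read.
            public static Dictionary<string, string> Attributes(long thingId, IEnumerable<DataRow> rows)
            {
                var result = new Dictionary<string, string>();
                foreach (var row in rows)
                {
                    if (row.ThingId == thingId)
                        result[row.Key] = row.Value;
                }
                return result;
            }
        }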

    Read the article

  • Multiple overlay items in android

    - by Bostjan
    I seem to be having a problem using ItemizedOverlay and the OverlayItems in it. I can get the first OverlayItem to appear on the map, but not any items after that. A code sample is at http://www.anddev.org/multiple_overlay_items-t12171.html and a quick overview is here:

        public class Markers extends ItemizedOverlay {

            private Context ctx;
            private ArrayList<OverlayItem> mOverlays = new ArrayList<OverlayItem>();

            public Markers(Drawable defaultMarker, Context cont) {
                super(boundCenterBottom(defaultMarker));
                this.ctx = cont;
                // TODO Auto-generated constructor stub
            }

            @Override
            protected OverlayItem createItem(int i) {
                // TODO Auto-generated method stub
                return mOverlays.get(i);
            }

            @Override
            public boolean onTap(GeoPoint p, MapView mapView) {
                // TODO Auto-generated method stub
                return super.onTap(p, mapView);
            }

            @Override
            protected boolean onTap(int index) {
                // TODO Auto-generated method stub
                Toast.makeText(this.ctx, mOverlays.get(index).getTitle().toString()+", Latitude: "+mOverlays.get(index).getPoint().getLatitudeE6(), Toast.LENGTH_SHORT).show();
                return super.onTap(index);
            }

            @Override
            public int size() {
                // TODO Auto-generated method stub
                return mOverlays.size();
            }

            public void addOverlay(OverlayItem item) {
                mOverlays.add(item);
                setLastFocusedIndex(-1);
                populate();
            }

            public void clear() {
                mOverlays.clear();
                setLastFocusedIndex(-1);
                populate();
            }
        }

        Markers usersMarker = new Markers(user, overview.this);
        GeoPoint p = new GeoPoint((int) (lat * 1E6), (int) (lon * 1E6));
        OverlayItem item = new OverlayItem(p, userData[0], userData[3]);
        item.setMarker(this.user);
        usersMarker.addOverlay(item);

    The lines after the class are just samples of how it's used. The first marker shows up on the map, but if I add any more they don't show up. Is there a problem with the populate() method? I tried calling it manually after adding all markers but it still didn't help. Please, if you have any idea what could be wrong, say so.

    Read the article

  • Customizing Syntax Highlighting in Vim

    - by sixtyfootersdude
    Hey, I have defined a few custom file types for Vim. I have done it like this, in vimrc:

        au BufWinEnter,BufRead,BufNewFile *.jak set filetype=jak

    and then in jak.vim:

        syn match arrows /<-/
        syn match arrows /->/
        syn match arrows /=>/
        syn match arrows /<=/
        highlight arrows ctermfg=brown
        ...

    This works (the formatting applies to any file opened with Vim with the file extension .jak). My question is how I can keep all the current formatting for a file type but add functionality. For example, I would like to add this functionality for .vim files:

        syn keyword yellow yellow
        highlight yellow ctermfg=yellow
        ...

    (so that I can see how my terminal interprets different colors before choosing them). I have created ~/.vim/syntax/vim.vim (the file contains only the above) and put this into my vimrc:

        au BufWinEnter,BufRead,BufNewFile *.vim set filetype=vim

    This has no effect: the word yellow is not colored yellow. I have also tried putting my vim.vim file into ~/.vim/after/syntax/vim.vim, as suggested here. This is the approach that I would like to take; it seems clean and easily maintainable.

    Read the article

  • How do I send automated e-mails from Drupal using Messaging and Notifications?

    - by Adrian
    I am working on a Notifications plugin, and after starting to write my notes down about how to do this, I decided to just post them here. Please feel free to come make modifications and changes. Eventually I hope to post this on the Drupal handbook as well. Thanks. --Adrian

    Sending automated e-mails from Drupal using Messaging and Notifications

    To implement a notifications plugin, you must implement the following functions:

    - Use hook_messaging, hook_token_list and hook_token_values to create the messages that will be sent.
    - Use hook_notifications to create the subscription types.
    - Add code to fire events (eg in hook_nodeapi).
    - Add all UI elements to allow users to subscribe/unsubscribe.

    Understanding Messaging

    The Messaging module is used to compose messages that can be delivered using various formats, such as simple mail, HTML mail, Twitter updates, etc. These formats are called "send methods." The backend details do not concern us here; what is important are the following concepts:

    TOKENS: Tokens are provided by the "tokens" module. They allow you to write keywords in square brackets, [like-this], that can be replaced by any arbitrary value. Note: the token groups you create must match the keys you add to the $events->objects[$key] array.

    MESSAGE KEYS: A key is a part of a message, such as the greetings line. Keys can be different for each send method. For example, a plaintext mail's greeting might be "Hi, [user]," while an HTML greeting might be "Hi, [user]," and Twitter's might just be "[user-firstname]: ". Keys can have any arbitrary name. Keys are very simple and only have a machine-readable name and a user-readable description, the latter of which is only seen by admins.

    MESSAGE GROUPS: A group is a bunch of keys that often, but not always, might be used together to make up a complete message. For example, a generic group might include keys for a greeting, body, closing and footer. Groups can also be "subclassed" by selecting a "fallback" group that will supply any keys that are missing. Groups are also associated with modules; I'm not sure what these are used for.

    Understanding Notifications

    The Notifications module revolves around the following concepts:

    SUBSCRIPTIONS: Notifications plugins may define one or more types of subscriptions. For example, notifications_content defines subscriptions for:

    - Threads (users are notified whenever a node or its comments change)
    - Content types (users are notified whenever a node of a certain type is created or is changed)
    - Users (users are notified whenever another user is changed)

    Subscriptions refer to both the user who's subscribed, how often they wish to be notified, the send method (for Messaging) and what's being subscribed to. This last part is defined in two steps. Firstly, a plugin defines several "subscription fields" (through a hook_notifications op of the same name), and secondly, "subscription types" (also an op) defines which fields apply to each type of subscription. For example, notifications_content defines the fields "nid," "author" and "type," and the subscriptions "thread" (nid), "nodetype" (type), "author" (author) and "typeauthor" (type and author), the latter referring to something like "any STORY by JOE." Fields are used to link events to subscriptions; an event must match all fields of a subscription (for all normal subscriptions) to be delivered to the recipient. The $subscriptions object is defined in subsequent sections.

    Notifications prefers that you don't create these objects yourself, preferring you to call the notifications_get_link() function to create a link that users may click on, but you can also use notifications_save_subscription and notifications_delete_subscription to do it yourself.

    EVENTS: An event is something that users may be notified about. Plugins create the $event object then call notifications_event($event). This either sends out notifications immediately, queues them to send out later, or both. Events include the type of thing that's changed (eg 'node', 'user'), the ID of the thing that's changed (eg $node->nid, $user->uid) and what's happened to it (eg 'create'). These are, respectively, $event->type, $event->oid (object ID) and $event->action. Warning: notifications_content_nodeapi also adds a $event->node field, referring to the node itself and not just $event->oid = $node->nid. This is not used anywhere in the core notifications module; however, when the $event is passed back to the 'query' op (see below), we assume the node is still present. Events do not refer to the user they will be delivered to; instead, Notifications makes the connection between subscriptions and events, using the subscriptions' fields.

    MATCHING EVENTS TO SUBSCRIPTIONS: An event matches a subscription if it has the same type as the event (eg "node") and if the event matches all the correct fields. This second step is determined by the "query" hook op, which is called with the $event object as a parameter. The query op is responsible for giving Notifications a value for all the fields defined by the plugin. For example, notifications_content defines the 'nid', 'type' and 'author' fields, so its query op looks like this (ignore the case where $event_or_user = 'user' for now):

        $event_or_user = $arg0;
        $event_type = $arg1;
        $event_or_object = $arg2;

        if ($event_or_user == 'event' && $event_type == 'node' && ($node = $event_or_object->node)
            || $event_or_user == 'user' && $event_type == 'node' && ($node = $event_or_object)) {
          $query[]['fields'] = array(
            'nid' => $node->nid,
            'type' => $node->type,
            'author' => $node->uid,
          );
          return $query;

    After extracting the $node from the $event, we set $query[]['fields'] to a dictionary defining, for this event, all the fields defined by the module. As you can tell from the presence of the $query object, there's way more you can do with this op, but they are not covered here.

    DIGESTING AND DEDUPING:

    Understanding the relationship between Messaging and Notifications

    Usually, the name of a message group doesn't matter, but when being used with Notifications, the names must follow very strict patterns. Firstly, they must start with the name "notifications," and then are followed by either "event" or "digest," depending on whether the message group is being used to represent either a single event or a group of events. For 'events,' the third part of the name is the "type," which we get from Notification's $event->type (eg: notifications_content uses 'node'). The last part of the name is the operation being performed, which comes from Notification's $event->action. For example:

    - notifications-event-node-comment might refer to the message group used when someone comments on a node
    - notifications-event-user-update to a user who's updated their profile

    Hyphens cannot appear anywhere other than to separate the parts of these words.

    For 'digest' messages, the third and fourth part of the name come from hook_notification's "event types" callback, specifically this line:

        $types[] = array(
          'type' => 'node',
          'action' => 'insert',
          ...
          'digest' => array('node', 'type'),
        );
        $types[] = array(
          'type' => 'node',
          'action' => 'update',
          ...
          'digest' => array('node', 'nid'),
        );

    In this case, the first event type (node insertion) will be digested with the notifications-digest-node-type message template providing the header and footer, likely saying something like "the following [type] was created." The second event type (node update) will be digested with the notifications-digest-node-nid message template.

    Data Structure and Callback Reference

    $event

    The $event object has the following members:

    - $event->type: The type of event. Must match the type in hook_notification::"event types". {notifications_event}
    - $event->action: The action the event describes. Most events are sorted by [$event->type][$event->action]. {notifications_event}
    - $event->object[$object_type]: All objects relevant to the event. For example, $event->object['node'] might be the node that the event describes. $object_type can come from the 'event types' hook (see below). The main purpose appears to be to be passed to token_replace_multiple as the second parameter. $event->object[$event->type] is assumed to exist in the short digest processing functions, but this doesn't appear to be used anywhere. Not saved in the database; loaded by hook_notifications::"event load"
    - $event->oid: apparently unused. The id of the primary object relevant to this event (eg the node's nid).
    - $event->module: apparently unused
    - $event->params[$key]: Mainly a place for plugins to save random data. The main module will serialize the contents of this array but does not use it in any way. However, notifications_ui appears to do something weird with it, possibly by using subscriptions' fields as keys into this array. I'm not sure why though.

    hook_notifications

    op 'subscription types': returns an array of subscription types provided by the plugin, in the form $key = array(...) with the following members:

    - event_type: this subscription can only match events whose $event->type has this value. Stored in the database as notifications.event_type for every individual subscription. Apparently, this can be overridden in code but I wouldn't try it (see notifications_save_subscription).
    - fields: an unkeyed array of fields that must be matched by an event (in addition to the event_type) for it to match this subscription. Each element of this array must be a key of the array returned by op 'subscription fields', which in turn must be used by op 'query' to actually perform the matching.
    - title: user-readable title for their subscriptions page (eg the 'type' column in user/%uid/notifications/subscriptions)
    - description: a user-readable description.
    - page callback: used to add a supplementary page at user/%uid/notifications/blah. This and the following are used by notifications_ui as a part of hook_menu_alter. Appears to be partially deprecated.
    - user page: user/%uid/notifications/blah.

    op 'event types': returns an array of event types, with each event type being an array with the following members:

    - type: this will match $event->type
    - action: this will match $event->action
    - digest: an array with two ordered (non-keyed) elements, "type" and "field." 'type' is used as an index into $event->objects. 'field' is also used to group events like so: $event->objects[$type]->$field. For example, 'field' might be 'nid': if the object is a node, the digest lines will be grouped by node ID. Finally, both are used to find the correct Messaging template; see discussion above.
    - description: used on the admin "Notifications-Events" page
    - name: unused, use Messaging instead
    - line: deprecated, use Messaging instead

    Other Stuff

    This is an example of the main query that inserts an event into the queue:

        INSERT INTO {notifications_queue} (uid, destination, sid, module, eid, send_interval, send_method, cron, created, conditions)
        SELECT DISTINCT s.uid, s.destination, s.sid, s.module,
            %d, // event ID
            s.send_interval, s.send_method, s.cron,
            %d, // time of the event
            s.conditions
        FROM {notifications} s
        INNER JOIN {notifications_fields} f ON s.sid = f.sid
        WHERE (s.status = 1)
          AND (s.event_type = '%s') // subscription type
          AND (s.send_interval >= 0)
          AND (s.uid <> %d)
          AND (
               (f.field = '%s' AND f.intval IN (%d)) // everything from 'query' op
            OR (f.field = '%s' AND f.intval = %d)
            OR (f.field = '%s' AND f.value = '%s')
            OR (f.field = '%s' AND f.intval = %d))
        GROUP BY s.uid, s.destination, s.sid, s.module, s.send_interval, s.send_method, s.cron, s.conditions
        HAVING s.conditions = count(f.sid)

    Read the article

  • django-mptt fields showing up twice, breaking SQL

    - by Dominic Rodger
    I'm using django-mptt to manage a simple CMS, with a model called Page, which looks like this (most presumably irrelevant fields removed):

        class Page(mptt.Model, BaseModel):
            title = models.CharField(max_length = 20)
            slug = AutoSlugField(populate_from = 'title')
            contents = models.TextField()
            parent = models.ForeignKey('self', null=True, blank=True,
                                       related_name='children',
                                       help_text = u'The page this page lives under.')

    (The removed fields are called attachments, headline_image, nav_override, and published.)

    All works fine using SQLite, but when I use MySQL and try to add a Page using the admin (or using ModelForms and the save() method), I get this:

        ProgrammingError at /admin/mycms/page/add/
        (1110, "Column 'level' specified twice")

    where the SQL generated is:

        'INSERT INTO `kaleo_page` (`title`, `slug`, `contents`, `nav_override`, `parent_id`, `published`, `headline_image_id`, `lft`, `rght`, `tree_id`, `level`, `lft`, `rght`, `tree_id`, `level`) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'

    For some reason I'm getting the django-mptt fields (lft, rght, tree_id and level) twice. It works in SQLite presumably because SQLite is more forgiving about what it accepts than MySQL. get_all_field_names() also shows them twice:

        >>> Page._meta.get_all_field_names()
        ['attachments', 'children', 'contents', 'headline_image', 'id', 'level', 'lft', 'nav_override', 'parent', 'published', 'rght', 'slug', 'title', 'tree_id']

    which is presumably why the SQL is bad. What could I have done that would result in those fields appearing twice in get_all_field_names()?

    Read the article

  • Sharp architecture; Accessing Validation Results

    - by nabeelfarid
    I am exploring Sharp Architecture and I would like to know how to access the validation results after calling Entity.IsValid(). I have two scenarios:

    1) If entity.IsValid() returns false, I would like to add the errors to the ModelState.AddModelError() collection in my controller. E.g. in the Northwind sample we have an EmployeesController.Create() action; when we do employee.IsValid(), how can I get access to the errors?

        public ActionResult Create(Employee employee)
        {
            if (ViewData.ModelState.IsValid && employee.IsValid())
            {
                employeeRepository.SaveOrUpdate(employee);
            }
            // ....
        }

    [I already know that when an action method is called, the model binder enforces validation rules (NHibernate Validator attributes) as it parses incoming values and tries to assign them to the model object, and if it can't parse the incoming values then it registers those as errors in ModelState for each model object property. But what if I have some custom validation? That's why we do ModelState.IsValid first.]

    2) In my test methods I would like to test the NHibernate validation rules as well. I can do entity.IsValid(), but that only returns true/false. I would like to assert against the actual error, not just true/false.

    In my previous projects, I normally use a wrapper service layer for the repositories; instead of calling repository methods directly from the controller, controllers call service layer methods, which in turn call repository methods. All my custom validation rules reside in the service layer, and its methods throw a custom exception with a NameValueCollection of errors which I can easily add to ModelState in my controller. This way I can also easily implement sophisticated business rules in the service layer. I know Sharp Architecture also provides a Service Layer project. But what I am interested in, and my next question, is: how can I use NHibernate Validators to implement sophisticated custom business rules (not just null, empty, range, etc.) and make Entity.IsValid() verify those rules too?
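
    A rough sketch of the kind of bridging being asked about in scenario 1, assuming the validation layer can hand back the broken rules as property-name/message pairs; the ValidationError type and the way the errors are obtained are hypothetical placeholders, not Sharp Architecture or NHibernate Validator APIs:

        using System.Collections.Generic;
        using System.Web.Mvc;

        // Hypothetical shape of a single broken rule.
        public class ValidationError
        {
            public string PropertyName { get; set; }
            public string Message { get; set; }
        }

        public static class ModelStateExtensions
        {
            // Copies broken rules into ModelState so the standard
            // ValidationSummary/ValidationMessage helpers can display them.
            public static void AddValidationErrors(this ModelStateDictionary modelState,
                                                   IEnumerable<ValidationError> errors)
            {
                foreach (var error in errors)
                {
                    modelState.AddModelError(error.PropertyName ?? string.Empty, error.Message);
                }
            }
        }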

    Read the article

  • ServiceRoute + WebServiceHostFactory kills WSDL generation? How to create extensionless WCF service

    - by Ethan J. Brown
    I'm trying to use extensionless / .svc-less WCF services. Can anyone else confirm or deny the issue I'm experiencing? I use routing in code, and do this in Application_Start of global.asax.cs:

        RouteTable.Routes.Add(new ServiceRoute("Data", new WebServiceHostFactory(), typeof(DataDips)));

    I have tested in both IIS 6 and IIS 7.5 and I can use the service just fine (i.e. my extensionless handler is correctly configured for ASP.NET). However, metadata generation is totally screwed up. I can hit my /mex endpoint with the WCF Test Client (and, I presume, svcutil.exe), but the ?wsdl generation you typically get with .svc is toast. I can't hit it with a browser (I get 400 Bad Request), I can't hit it with wsdl.exe, etc. Metadata generation is configured correctly in web.config. This is a problem, of course, because the service is exposed as basicHttpBinding so that an old-style ASMX client can get to it, and that client can't generate the proxy without a WSDL description.

    If I instead use service activation routing in config like this, rather than registering a route in code:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
          <serviceActivations>
            <add relativeAddress="Data.svc" service="DataDips" />
          </serviceActivations>
        </serviceHostingEnvironment>

    then voila... it works. But then I don't have a clean extensionless URL. If I change relativeAddress from Data.svc to Data, I get a configuration exception, as this is not supported by config (you must use an extension registered to WCF). I've also attempted to use this code in conjunction with the above config:

        RouteTable.Routes.MapPageRoute("", "Data/{*data}", "~/Data.svc/{*data}", false);

    My thinking is that I can just point the extensionless URL at the configured .svc URL. This doesn't work: /Data.svc continues to work, but /Data returns a 404. Anyone with any bright ideas?
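
    Not a confirmed fix, but one avenue worth sketching: a custom factory derived from WebServiceHostFactory that explicitly turns on HTTP GET metadata for the routed host. Whether this actually restores ?wsdl under ServiceRoute is exactly the open question above.

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.ServiceModel.Description;

        // Factory that ensures the routed service host publishes metadata over HTTP GET.
        public class MetadataWebServiceHostFactory : WebServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                ServiceHost host = base.CreateServiceHost(serviceType, baseAddresses);

                var metadata = host.Description.Behaviors.Find<ServiceMetadataBehavior>();
                if (metadata == null)
                {
                    metadata = new ServiceMetadataBehavior();
                    host.Description.Behaviors.Add(metadata);
                }
                metadata.HttpGetEnabled = true;   // aims to expose .../Data?wsdl on the base address
                return host;
            }
        }

        // Registered the same way as before, e.g. in Application_Start:
        // RouteTable.Routes.Add(new ServiceRoute("Data", new MetadataWebServiceHostFactory(), typeof(DataDips)));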

    Read the article

  • Disable Razor's default .cshtml handler in an ASP.NET Web Application

    - by mythz
    Does anyone know how to disable the .cshtml extension completely in an ASP.NET Web Application? In essence I want to hijack the .cshtml extension and provide my own implementation based on a RazorEngine host, but when I try to access page.cshtml directly it appears to be running under the existing WebPages Razor host that I'm trying to disable.

    Note: it looks like the .cshtml pages are being executed under the System.Web.WebPages.Razor context, as the Microsoft.Data database is initialized. I don't even have any MVC or WebPages DLLs referenced, just System.Web.dll and a local copy of System.Web.Razor with RazorEngine.dll.

    I've created a new ASP.NET Web .NET 4.0 Application and have tried to clear all buildProviders and handlers as seen below:

        <system.web>
          <httpModules>
            <clear/>
          </httpModules>
          <compilation debug="true" targetFramework="4.0">
            <buildProviders>
              <clear/>
            </buildProviders>
          </compilation>
          <httpHandlers>
            <clear/>
            <add path="*" type="MyHandler" verb="*"/>
          </httpHandlers>
        </system.web>
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <clear/>
          </modules>
          <handlers>
            <clear/>
            <add path="*" name="MyHandler" type="MyHandler" verb="*" preCondition="integratedMode" resourceType="Unspecified" allowPathInfo="true" />
          </handlers>
        </system.webServer>

    Even with this, when I visit any page.cshtml page it still bypasses my wildcard handler and tries to execute the page itself. Basically I want to remove all traces of .cshtml handlers/buildProviders/preprocessing so I can serve the .cshtml pages myself. Anyone know how I can do this?
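
    For reference, a bare-bones sketch of the kind of wildcard handler the configuration above is trying to route everything to; the RenderWithRazorEngine call is a hypothetical placeholder for whatever the custom RazorEngine host exposes:

        using System.IO;
        using System.Web;

        // Catch-all handler meant to take over *.cshtml requests once the
        // built-in WebPages handlers and build providers are out of the way.
        public class MyHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string path = context.Request.PhysicalPath;
                string template = File.ReadAllText(path);

                // Placeholder: hand the raw template to the custom RazorEngine-based host.
                string html = RenderWithRazorEngine(template);

                context.Response.ContentType = "text/html";
                context.Response.Write(html);
            }

            private static string RenderWithRazorEngine(string template)
            {
                // Hypothetical: wire this up to the RazorEngine host being built.
                return template;
            }
        }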

    Read the article

  • Ubuntu 10.04 (Lucid) OpenLDAP invalid credentials issue

    - by gmuller
    This won't be a question, but a solution to an infuriating problem on Ubuntu 10.04. If you tried to deploy an LDAP server using this distro following the tutorials below, you'll be in serious trouble.

    Tutorials:
        https://help.ubuntu.com/9.10/serverguide/C/openldap-server.html
        https://help.ubuntu.com/9.10/serverguide/C/samba-ldap.html

    The error first appears on the line:

        ldapsearch -xLLL -b cn=config -D cn=admin,cn=config -W olcDatabase=hdb olcAccess

    It simply won't allow admin to access "cn=config", so you won't be able to deploy the LDAP server correctly. After almost a week searching for a solution, I found this page: https://bugs.launchpad.net/ubuntu-docs/+bug/333733 On comment #5, the solution is presented. Quoting the author:

        when you get to the setting up ACL part you all of a sudden need to use a cn=admin,cn=config, that doesn't exist

        creating a config.ldif with

            dn: olcDatabase={0}config,cn=config
            changetype: modify
            add: olcRootDN
            olcRootDN: cn=admin,cn=config

            dn: olcDatabase={0}config,cn=config
            changetype: modify
            add: olcRootPW
            olcRootPW: secret

            dn: olcDatabase={0}config,cn=config
            changetype: modify
            delete: olcAccess

        and adding it with

            ldapadd -Y EXTERNAL -H ldapi:/// -f config.ldif

    It's unacceptable that a Linux distribution as popular as Ubuntu has such a ridiculous bug. Hope it helps everyone!

    Read the article

  • Friend Assemblies in C#

    - by Tim Long
    I'm trying to create some 'friend assemblies' using the [InternalsVisibleTo()] attribute, but I can't seem to get it working. I've followed Microsoft's instructions for creating signed friend assemblies and I can't see where I'm going wrong. So I'll detail my steps here and hopefully someone can spot my deliberate mistake...?

    Create a strong name key and extract the public key, thus:

        sn -k StrongNameKey
        sn -p public.pk
        sn -tp public.pk

    Add the strong name key to each project and enable signing. Create a project called Internals and a class with an internal property:

        namespace Internals
        {
            internal class ClassWithInternals
            {
                internal string Message { get; set; }

                public ClassWithInternals(string m)
                {
                    Message = m;
                }
            }
        }

    Create another project called TestInternalsVisibleTo:

        namespace TestInternalsVisibleTo
        {
            static class Program
            {
                /// <summary>
                /// The main entry point for the application.
                /// </summary>
                [STAThread]
                static void Main()
                {
                    var c = new Internals.ClassWithInternals("Test");
                    Console.WriteLine(c.Message);
                }
            }
        }

    Edit the AssemblyInfo.cs file for the Internals project, and add the necessary attribute:

        [assembly: AssemblyTitle("AssemblyWithInternals")]
        [assembly: AssemblyDescription("")]
        [assembly: AssemblyConfiguration("")]
        [assembly: AssemblyCompany("Microsoft")]
        [assembly: AssemblyProduct("Internals")]
        [assembly: AssemblyCopyright("Copyright © Microsoft 2010")]
        [assembly: AssemblyTrademark("")]
        [assembly: AssemblyCulture("")]
        [assembly: ComVisible(false)]
        [assembly: Guid("41c590dc-f555-48bc-8a94-10c0e7adfd9b")]
        [assembly: AssemblyVersion("1.0.0.0")]
        [assembly: AssemblyFileVersion("1.0.0.0")]
        [assembly: InternalsVisibleTo("TestInternalsVisibleTo PublicKey=002400000480000094000000060200000024000052534131000400000100010087953126637ab27cb375fa917c35b23502c2994bb860cc2582d39912b73740d6b56912c169e4a702bedb471a859a33acbc8b79e1f103667e5075ad17dffda58988ceaf764613bd56fc8f909f43a1b177172bc4143c96cf987274873626abb650550977dcad1bb9bfa255056bb8d0a2ec5d35d6f8cb0a6065ec0639550c2334b9")]

    And finally... build! I get the following errors:

        error CS0122: 'Internals.ClassWithInternals' is inaccessible due to its protection level
        error CS1729: 'Internals.ClassWithInternals' does not contain a constructor that takes 1 arguments
        error CS1061: 'Internals.ClassWithInternals' does not contain a definition for 'Message' and no extension method 'Message' accepting a first argument of type 'Internals.ClassWithInternals' could be found (are you missing a using directive or an assembly reference?)

    Basically, it's as if I had not used the InternalsVisibleTo attribute at all. Now, I'm not going to fall into the trap of blaming the tools, so what's up here? Anyone?
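
    For comparison, the documented form of the attribute for a signed friend assembly separates the assembly name and the public key with a comma (shown here with a shortened placeholder key, not the key from the question):

        [assembly: InternalsVisibleTo("TestInternalsVisibleTo, PublicKey=0024000004800000...")]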

    Read the article

  • ASP.NET MVC - Parent-Child Table Relation - how to create Children in MVC (example request)

    - by adudley
    Hi all. In a standard setup of a parent-child relation, let's say Project and Task, where a Project is made up of lots of Tasks. So in a standard relational database we have:

        Project (ID, Name, Deadline)
        Task (ID, FK_To_Project, Name, Description, isCompleted)

    This is all very straightforward. We have an MVC view that lists Projects, so we get a nice list of all the project names next to each deadline.

    Now we want to CREATE a new PROJECT. The edit view opens, we type a name, say 'Make a cup of tea', with tomorrow as the deadline! Still in this view/web page, I would like a list of all the child Tasks, in a standard list, with Edit, Delete, and a Create/Add Task button too, just below the 'parent table' details. The simplest way to describe this is the parent table's Create/Edit view with the children's list view below it.

    1) The ideal solution will also allow my child table (Tasks) to have children as well (for more complex scenarios), and so on, and on, and on.
    2) If I navigate away from my created Project, I don't want all sorts of random stuff lying around; they went away, it's gone!
    3) I'd expect all the same functionality when editing an existing Project.

    I'm struggling with the 'Add New Child' part. I had a modal dialog (jQuery) and all was well, but now, when editing an existing child/task, I need to populate the child edit form, which is a pain and will need loads of JavaScript I think :( How can this be achieved in MVC? Does anybody have any examples?
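
    One common shape for this, sketched with illustrative names only (it does not cover the arbitrarily nested children in point 1): bind the parent and its children as one view model, and let the child section post back to child-specific actions that redisplay the same page.

        using System;
        using System.Collections.Generic;
        using System.Web.Mvc;

        // View model carrying the parent form plus the current child rows.
        public class ProjectEditViewModel
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public DateTime Deadline { get; set; }
            public List<TaskRow> Tasks { get; set; }
        }

        public class TaskRow
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public bool IsCompleted { get; set; }
        }

        public class ProjectsController : Controller
        {
            // GET: renders the parent edit form with the child list underneath.
            public ActionResult Edit(int id)
            {
                ProjectEditViewModel model = LoadProject(id);   // placeholder data access
                return View(model);
            }

            // POST from the child section: add a task, then redisplay the same page.
            [HttpPost]
            public ActionResult AddTask(int projectId, TaskRow newTask)
            {
                SaveTask(projectId, newTask);                   // placeholder data access
                return RedirectToAction("Edit", new { id = projectId });
            }

            private ProjectEditViewModel LoadProject(int id) { throw new NotImplementedException(); }
            private void SaveTask(int projectId, TaskRow task) { throw new NotImplementedException(); }
        }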

    Read the article

  • C# Lack of Static Inheritance - What Should I Do?

    - by yellowblood
    Alright, so as you probably know, static inheritance is impossible in C#. I understand that; however, I'm stuck with the design of my program. I will try to make it as simple as possible. Let's say our code needs to manage objects that represent aircraft in some airport. The requirements are as follows:

    - There are members and methods that are shared by all aircraft.
    - There are many types of aircraft, and each type may have its own extra methods and members.
    - There can be many instances of each aircraft type.
    - Every aircraft type must have a friendly name for the type, and more details about the type. For example, a class named F16 would have a static member FriendlyName with the value "Lockheed Martin F-16 Fighting Falcon".
    - Other programmers should be able to add more aircraft, although they must be forced to provide the same static details about their aircraft types.
    - In some GUI, there should be a way to let the user see the list of available types (with details such as FriendlyName) and add or remove instances of the aircraft, saved, let's say, to some XML file.

    So, basically, if I could force inherited classes to implement static members and methods, I would force the aircraft types to have static members such as FriendlyName. Sadly I cannot do that. So, what would be the best design for this scenario?
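
    A sketch of one commonly used workaround, assuming attribute-based metadata plus reflection is acceptable: each aircraft class declares its details in a required attribute, and the catalog discovers the types at runtime. All names here are illustrative.

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        // Metadata every aircraft type must carry; the compiler cannot force this,
        // but the catalog below throws when it is missing.
        [AttributeUsage(AttributeTargets.Class, Inherited = false)]
        public sealed class AircraftInfoAttribute : Attribute
        {
            public string FriendlyName { get; private set; }
            public AircraftInfoAttribute(string friendlyName) { FriendlyName = friendlyName; }
        }

        public abstract class Aircraft { /* shared members and methods */ }

        [AircraftInfo("Lockheed Martin F-16 Fighting Falcon")]
        public class F16 : Aircraft { /* type-specific members */ }

        public static class AircraftCatalog
        {
            // Lists every Aircraft subclass in the assembly with its friendly name,
            // so a GUI can offer them and create instances via Activator.CreateInstance.
            public static IEnumerable<KeyValuePair<string, Type>> AvailableTypes(Assembly assembly)
            {
                foreach (Type type in assembly.GetTypes())
                {
                    if (!type.IsSubclassOf(typeof(Aircraft)) || type.IsAbstract) continue;

                    var info = (AircraftInfoAttribute)Attribute.GetCustomAttribute(type, typeof(AircraftInfoAttribute));
                    if (info == null)
                        throw new InvalidOperationException(type.Name + " is missing [AircraftInfo].");

                    yield return new KeyValuePair<string, Type>(info.FriendlyName, type);
                }
            }
        }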

    Read the article

  • Invalid Cast Exception ASP.NET C#

    - by Shadow Scorpion
    I have a problem with this code:

        public static T[] GetExtras<T>(Type[] Types)
        {
            List<T> Res = new List<T>();
            foreach (object Current in GetExtras(typeof(T), Types))
            {
                Res.Add((T)Current); // this is the error
            }
            return Res.ToArray();
        }

        public static object[] GetExtras(Type ExtraType, Type[] Types)
        {
            lock (ExtraType)
            {
                if (!ExtraType.IsInterface)
                    return new object[] { };

                List<object> Res = new List<object>();
                bool found = false;
                found = (ExtraType == typeof(IExtra));
                foreach (Type CurInterFace in ExtraType.GetInterfaces())
                {
                    if (found = (CurInterFace == typeof(IExtra)))
                        break;
                }
                if (!found)
                    return new object[] { };

                foreach (Type CurType in Types)
                {
                    found = false;
                    if (!CurType.IsClass)
                        continue;
                    foreach (Type CurInterface in CurType.GetInterfaces())
                    {
                        try
                        {
                            if (found = (CurInterface.FullName == ExtraType.FullName))
                                break;
                        }
                        catch { }
                    }
                    try
                    {
                        if (found)
                            Res.Add(Activator.CreateInstance(CurType));
                    }
                    catch { }
                }
                return Res.ToArray();
            }
        }

    When I'm using this code in a Windows application it works! But I can't use it on an ASP page. Why?

    Read the article

  • entity framework POCO template in a n-tiers design question

    - by bryan
    Hi all. I was trying to follow the POCO Template walkthrough, and now I am having problems using it in an n-tier design. Following the article, I put my .edmx model and the template-generated Context.tt in my DAL project, and moved the generated Model.tt entity classes to my business logic layer (BLL) project. By doing this, I could use those entities inside my BLL without referencing the DAL, and I guess that is the idea of PI (persistence ignorance): not knowing anything about the data source.

    Now I want to extend the entities (inside the Model.tt) to perform some CUD actions in the BLL project, so I added a new partial class with the same name as the one generated from the template:

        public partial class Company
        {
            public static IEnumerable AllCompanies()
            {
                using (var context = new Entities())
                {
                    var q = from p in context.Companies
                            select p;
                    return q.ToList();
                }
            }
        }

    However, Visual Studio won't let me do that, and I think it is because the Context.tt is in the DAL project, and the BLL project cannot add a reference to the DAL because the DAL already references the BLL. So I tried adding this class to the DAL and it compiled, but IntelliSense won't show BLL.Company.AllCompanies() in my web service methods, even though the web services project has a reference to my BLL project.

    What should I do now? I want to add CUD methods to the template-generated entities in my BLL project and call them from web services in another project. I have been looking for an answer for a few days already, and I really need some guidance here please. Bryan
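
    A sketch of one way this dependency direction is often untangled, assuming a repository interface owned by the BLL and implemented in the DAL; the names are illustrative and this is not part of the POCO template itself:

        using System.Collections.Generic;

        // --- BLL project: entities plus the abstraction they are allowed to see ---
        namespace Bll
        {
            public partial class Company
            {
                public int Id { get; set; }
                public string Name { get; set; }
            }

            public interface ICompanyRepository
            {
                IEnumerable<Company> AllCompanies();
            }

            public class CompanyService
            {
                private readonly ICompanyRepository repository;
                public CompanyService(ICompanyRepository repository) { this.repository = repository; }

                public IEnumerable<Company> AllCompanies() { return repository.AllCompanies(); }
            }
        }

        // --- DAL project: references the BLL and supplies the EF-backed implementation ---
        namespace Dal
        {
            public class CompanyRepository : Bll.ICompanyRepository
            {
                public IEnumerable<Bll.Company> AllCompanies()
                {
                    // Placeholder for the ObjectContext query, e.g.:
                    // using (var context = new Entities()) { return context.Companies.ToList(); }
                    return new List<Bll.Company>();
                }
            }
        }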

    Read the article

  • how to show a large jpg image to the right in jqGrid's edit form?

    - by cLee
    Is it possible to show a large (i.e. bigger than a thumbnail) jpeg image in the right-hand side of jqGrid's edit form? Users want to look at a photo while entering data into fields... they are describing things in the photo. I'm sure all things are possible with jQuery, but I don't know where to begin. thanks ...

    html:

        function afterSubmit(r, data, action) {
            // if session timeout returned:
            if (r.responseText == "logout") {
                window.location = '../scripts/logout.php';
            }
            // if an error message is returned:
            if (r.responseText != "") {
                $('#submit_errors').html('Alert:' + r.responseText + '');
                // show div with error message
                $('#submit_errors').slideDown();
                // hide error div after 10 seconds
                window.setTimeout(function() {
                    $('#submit_errors').slideUp();
                }, 10000);
                return false; // don't remove this!
            }
            return true; // don't remove this!
        }

        var lastsel;

        jQuery(document).ready(function() {
            var mygrid = jQuery("#mobile_incidents").jqGrid({
                url: 'list.php?q=e',
                editurl: 'edit.php',
                datatype: "json",
                // note: all column names are required even though some columns are hidden
                colNames: ['Rec#', 'Date', 'Line', 'Photo'],
                colModel: [
                    { name: 'id', index: 'id', editable: true, editoptions: { readonly: 'readonly' } },
                    { name: 'mobile_discoveryDate', index: 'mobile_discoveryDate', sortable: false, editable: true,
                      edittype: 'text', formatter: 'date',
                      formatoptions: { srcformat: 'Y/m/d', newformat: 'm/d/Y' },
                      editoptions: { size: 12, maxlength: 10,
                          dataInit: function(element) {
                              $(element).blur();
                              $(element).datepicker({ dateFormat: 'mm/dd/yyyy' })
                          }
                      }
                    },
                    { name: 'mobile_lineName', index: 'mobile_lineName', editable: true, sortable: false },
                    { name: 'mobile_photo_name', index: 'mobile_photo_name', editable: false, sortable: false }
                ],
                pager: '#mobile_incidents_pager',
                altRows: false,
                rowNum: 10,
                rowList: [10, 20],
                imgpath: '../include/images/jqgrid',
                viewrecords: true,
                emptyrecords: 'No submissions found!',
                height: 260,
                sortname: 'id',
                sortorder: 'desc',
                gridview: true,
                scrollrows: true,
                autowidth: true,
                rownumbers: false,
                multiselect: false,
                subGrid: false,
                caption: ''
            })
            .navGrid('#mobile_incidents_pager',
                // params:
                { add: false, edit: true, del: false, search: false, view: false, refresh: true,
                  alertcap: ' to edit:', alerttext: ' . . . click on a row to highlight' },
                // edit params:
                { top: 50, left: 5, editCaption: 'Edit Submission', bSubmit: 'Approve/Save', closeAfterEdit: true,
                  afterSubmit: function(r, data) { return afterSubmit(r, data, 'edit'); } },
                {}, // add params
                {}, // delete params
                // search params:
                { multipleSearch: false },
                // view params:
                { top: 150, left: 5, caption: 'View Mobile Rail Submission' }
            );
        });

    Read the article
