Search Results

Search found 10556 results on 423 pages for 'practical approach'.


  • Cyrus: In practical terms, how do end users administer their shared mailboxes?

    - by Nick
    Let's say we have four customer service reps: Billy, Bob, Joe, and Tom. Tom is the department manager. There's a shared Customer Service mailbox on the Cyrus server that they all have access to. Tom, as the manager, also has administrative privileges for the shared mailbox.

    They decide they want to create sub-folders a certain way, and Tom creates them. They're all running Thunderbird, so Tom right-clicks the main folder and chooses "New Subfolder". Now Tom has the subfolders he needs and the other reps have... nothing! Because Cyrus created the subfolders giving Tom "Full Access" permissions, and everyone else gets no access.

    So how does Tom give the other reps in his department access to the new folders? As far as Cyrus is concerned, Tom has permission to grant others access to his new mailboxes - but as far as I can tell, there's no option in Thunderbird for granting mailbox permissions. An IT staff member should not have to receive a support request every time someone wants to add a subfolder to a shared mailbox. That's why we make certain users into mailbox admins in the first place! But asking (non-technical) users to SSH into an IMAP server to run cyradm seems like a bad idea too.

    Certainly someone has found a solution for this dilemma. Perhaps a Thunderbird extension for setting Cyrus permissions? Or something like umask that forces subfolders to have identical permissions to their parents on creation? And related: what about Sieve configuration? Is there any way that can be done from the client machine too? Thanks, Nick
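
    For reference, the fix an admin applies by hand today is an IMAP ACL change. A minimal cyradm session granting the other reps rights on the new subfolder might look like this (host name, mailbox path, and rights string are illustrative - the exact mailbox name depends on your hierarchy separator and namespace settings):

        $ cyradm --user cyrus mail.example.com
        mail.example.com> lam "Customer Service/New Subfolder"                # list the current ACL
        tom lrswipkxte
        mail.example.com> sam "Customer Service/New Subfolder" billy lrswip   # lookup/read/seen/write/insert/post
        mail.example.com> sam "Customer Service/New Subfolder" bob lrswip
        mail.example.com> sam "Customer Service/New Subfolder" joe lrswip

    The question is precisely how to get this step done without the SSH session.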

    Read the article

  • Running KVM/XEN/Hyper-V VMs from a RAM disk, is this possible? Practical?

    - by Ausmith1
    Currently I'm using ESX (v3 and v4) to test a scripted OS (Windows 2003) and application install DVD. The DVD ISO (8GB) is mounted on a 1Gbps NFS datastore, and the VMDKs (20GB) are on an SSD mounted via NFS over a 10Gbps link. It still takes a lot longer than I'd really like to run through a test iteration, and I'm wondering if mounting the virtual disks and ISO on a RAM disk on the same server the hypervisor is running on would be worth my while.

    I can dedicate a server to this VM, and 32GB of RAM in the system should be adequate to do the trick, I'd guess (1GB hypervisor OS + 28GB RAM disk + 2GB for the VM is < the 32GB available to me). Since hosting a RAM disk within ESX does not seem possible, I'm open to trying KVM/Xen/Hyper-V; KVM would probably be my first choice of the three.

    Anyone out there tried this? Bear in mind this is purely for a test run of the installer; the VM will be discarded as soon as the test is completed, so I'm not worried about losing data from the remote possibility of a power failure.
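
    For the KVM route, the RAM disk itself is straightforward - it's just a tmpfs mount on the host, with the guest's disk image and ISO staged inside it. A sketch of the host-side setup (paths and the raw-image choice are assumptions; sizes match the figures above):

        # create a 28GB RAM disk on the hypervisor host
        mount -t tmpfs -o size=28g tmpfs /mnt/ramdisk

        # stage the install ISO and create a 20GB guest disk in RAM
        cp /mnt/nfs/install.iso /mnt/ramdisk/
        qemu-img create -f raw /mnt/ramdisk/test-vm.img 20G

    Everything under /mnt/ramdisk vanishes on reboot, which is fine here since the VM is discarded after each test anyway.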

    Read the article

  • What are the practical limits on file extension name lengths?

    - by GorillaSandwich
    I started using DOS back before Windows, and ever since have taken it for granted that:

    - Every file has a file extension, like .txt, .jpg, etc.
    - That extension is always short (usually 3 letters)

    I learned early that the extension is basically just a hint to the OS as to what the content type is. Eventually I got exposed to Mac and Linux, files with no extensions, etc. And of course I've seen shorter extensions, like .rb and .py. I just noticed that markdown-formatted files can have the extension .markdown, and it made me wonder: how long can that extension be? If I make it .mycrazylongextensiontypewoohoo, will certain operating systems or programs choke on the file? Are extension names generally short just for convenience, or is this based on some limitation, legacy or current?

    Read the article

  • What are the practical differences between an IP address and a server?

    - by JMC Creative
    My understanding of IPs and other DNS-type server-related issues really falls short (read: extreme noob). I know a dedicated server would increase speed. What, if any, difference in speed would a dedicated IP make? Am I correct in understanding the Best Practices from Yahoo that I could use the second IP to serve up some content, which would increase the number of parallel downloads for the user? Or are both IPs (purchased from the same hosting account) going to point to the same server? Or how does it work? Are there other optimization things I should be aware of when thinking of purchasing a dedicated IP?

    Clarification: I am talking about the speed of serving the webpages, i.e. the speed of my website. Yes, I know that an IP and a server are completely different things, not even opposites, just different. But this, indeed, is my question!

    The Question Reformulated: Will having a second (dedicated) IP on my website speed up the time that it will load and display for the user? Or does that have nothing at all to do with IP, and is only a server issue?

    I'm sorry if this is still unclear. This is a real question though, I may just not be wording it well.

    Read the article

  • What's the best approach when it comes to updating a production (on EC2) machine that can't go down?

    - by Ryan Detzel
    We have three main servers on EC2: web, database, and search. I logged in today to find:

        77 packages can be updated.
        45 updates are security updates.

    which scares the crap out of me, so I want to update these machines ASAP. But I'm scared to just run the updates on a live running system. Is this safe to do? What's the best approach when it comes to doing security updates on production machines?
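
    On Ubuntu, one cautious sequence is to preview the changes first and then apply only the security updates; a sketch, assuming the unattended-upgrades package is installed (snapshotting the instance beforehand is EC2-tooling-specific and omitted here):

        # see what would be upgraded, without changing anything
        sudo apt-get update
        sudo apt-get -s upgrade

        # dry-run, then apply, security updates only
        sudo unattended-upgrade --dry-run -d
        sudo unattended-upgrade -d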

    Read the article

  • Wrapping unmanaged C++ with C++/CLI - a proper approach.

    - by Jamie
    Hi there, as stated in the title, I want to have my old C++ library working in managed .NET. I think of two possibilities:

    1) I might try to compile the library with /clr and try the "It Just Works" approach.
    2) I might write a managed wrapper for the unmanaged library.

    First of all, I want to have my library working FAST, as it was in the unmanaged environment. Thus, I am not sure whether the first approach will cause a large decrease in performance. However, it seems to be faster to implement (not the right word :-)) (assuming it will work for me). On the other hand, I think of some problems that might appear while writing a wrapper (e.g. how to wrap an STL collection, vector for instance?).

    I think of writing a wrapper residing in the same project as the unmanaged C++ - is that a reasonable approach (e.g. MyUnmanagedClass and MyManagedClass in the same project, the second wrapping the other)? What would you suggest for that problem? Which solution is going to give me better performance in the resulting code?

    Thank you in advance for any suggestions and clues! Cheers
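
    For the wrapper option, the usual shape is a ref class that owns a pointer to the native object and marshals STL types at the boundary. A minimal sketch (class and method names are hypothetical; both classes can sit in the same project if this file is compiled with /clr):

        // native class, plain C++
        #include <vector>
        class MyUnmanagedClass {
        public:
            std::vector<int> GetValues() const { return std::vector<int>(3, 42); }
        };

        // managed wrapper, C++/CLI
        public ref class MyManagedClass {
        public:
            MyManagedClass() : m_impl(new MyUnmanagedClass()) {}
            ~MyManagedClass() { this->!MyManagedClass(); }          // destructor maps to Dispose()
            !MyManagedClass() { delete m_impl; m_impl = nullptr; }  // finalizer frees the native heap

            // marshal std::vector<int> into a managed List<int> at the boundary
            System::Collections::Generic::List<int>^ GetValues() {
                std::vector<int> values = m_impl->GetValues();
                System::Collections::Generic::List<int>^ list =
                    gcnew System::Collections::Generic::List<int>((int)values.size());
                for (size_t i = 0; i < values.size(); ++i) list->Add(values[i]);
                return list;
            }
        private:
            MyUnmanagedClass* m_impl;   // native object lives on the unmanaged heap
        };

    The per-call marshalling is where a wrapper can lose to /clr's interop, so keeping chatty APIs coarse-grained matters more than which option you pick.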

    Read the article

  • Beginner MVC question - Correct approach to render out a List and details?

    - by fizzer
    I'm trying to set up a page where I display a list of items and the details of the selected item. I have it working, but wonder whether I have followed the correct approach. I'll use customers as an example.

    I have set the aspx page to inherit from an IEnumerable of Customers. This seems to be the standard approach to display the list of items. For the details I have added a Customer user control which inherits from Customer.

    I think I'm on the right track so far, but I was a bit confused as to where I should store the id of the customer whose details I intend to display. I wanted to make the id optional in the controller action so that the page could be hit using "/customers" or "customers/1", so I made the argument optional and stored the id in the ViewData like this:

        public ActionResult Customers(string id = "0")
        {
            Models.DBContext db = new Models.DBContext();
            var cList = db.Customers.OrderByDescending(c => c.CustomerNumber);
            if (id == "0")
            {
                ViewData["CustomerNumber"] = cList.First().CustomerNumber.ToString();
            }
            else
            {
                ViewData["CustomerNumber"] = id;
            }
            return View("Customers", cList);
        }

    I then rendered the user control using RenderPartial in the front end:

        <% var CustomerList = from x in Model
                              where x.CustomerNumber == Convert.ToInt32(ViewData["CustomerNumber"])
                              select x;
           Customer c = (Customer)CustomerList.First(); %>
        <% Html.RenderPartial("Customer", c); %>

    Then I just have an ActionLink on each listed item:

        <%: Html.ActionLink("Select", "Customers", new { id = item.CustomerNumber }) %>

    It all seems to work, but as MVC is new to me I would be interested in others' thoughts on whether this is a good approach.

    Read the article

  • How to get Augmented Reality: A Practical Guide examples working?

    - by Glen
    I recently bought the book Augmented Reality: A Practical Guide (http://pragprog.com/titles/cfar/augmented-reality). It has example code that it says runs on Windows, MacOS and Linux, but I can't get the binaries to run. Has anyone got this book and got the binaries to run on Ubuntu? I also can't figure out how to compile the examples in Ubuntu. How would I do this? Here is what the book says to do:

    Compiling for Linux: Refreshingly, there are no changes required to get the programs in this chapter to compile for Linux, but as with Windows, you'll first have to find your GL and GLUT files. This may mean you'll have to download the correct version of GLUT for your machine. You need to link in the GL, GLU, and GLUT libraries and provide a path to the GLUT header file and the files it includes. See whether there is a glut.h file in the /usr/include/GL directory; otherwise, look elsewhere for it - you could use the command find / -name "glut.h" to search your entire machine, or you could use the locate command (locate glut.h). You may need to customize the paths, but here is an example of the compile command:

        gcc -o opengl_template opengl_template.cpp -I /usr/include/GL -I /usr/include -lGL -lGLU -lglut

    gcc is a C/C++ compiler that should be present on your Linux or Unix machine. The -I /usr/include/GL command-line argument tells gcc to look in /usr/include/GL for the include files. In this case, you'll find glut.h and what it includes. When linking in libraries with gcc, you use the -lX switch - where X is the name of your library and there is a corresponding libX.a file somewhere in your path. For this example, you want to link in the library files libGL.a, libGLU.a, and libglut.a, so you will use the gcc arguments -lGL -lGLU -lglut. These three files are found in the default directory /usr/lib/, so you don't need to specify their location as you did with glut.h. If you did need to specify the library path, you would add -L to the path. To run your compiled program, type ./opengl_template or, if the current directory is in your shell's paths, just opengl_template. When working in Linux, it's important to know that you may need to keep your texture files to a maximum of 256 by 256 pixels or find the settings in your system to raise this limit. Often an OpenGL program will work in Windows but produce a blank white texture in Linux until the texture size is reduced.

    The above instructions make no sense to me. Do I have to use gcc to compile, or can I use Eclipse? If I use either Eclipse or gcc, what do I need to do to compile and run the program?

    Read the article

  • What approach should I take to export my iPhone contacts to Gmail?

    - by codeLes
    I'm on the iPhone OS3, I want to get my contacts on my phone in sync with my Google contacts, but I don't want to lose what is currently in my phone. So far I have been under the impression that by just turning the Google Sync on that it will overwrite the info on my phone. I don't currently have all the info on my phone in Google contacts so this would not be desired. Other than manually inputting all the info from my phone into my Google Contacts, what approach could I take to get that info on the Google cloud so that I can turn on sync without fear of losing any information?

    Read the article

  • Big smart ViewModels, dumb Views, and any model, the best MVVM approach?

    - by Edward Tanguay
    The following code is a refactoring of my previous MVVM approach (Fat Models, skinny ViewModels and dumb Views, the best MVVM approach?) in which I moved the logic and INotifyPropertyChanged implementation from the model back up into the ViewModel. This makes more sense since, as was pointed out, you often have to use models that you either can't change or don't want to change, and so your MVVM approach should be able to work with any model class as it happens to exist.

    This example still allows you to view the live data from your model in design mode in Visual Studio and Expression Blend, which I think is significant, since you could have a mock data store that the designer connects to which has e.g. the smallest and largest strings that the UI can possibly encounter, so that he can adjust the design based on those extremes.

    Questions: I'm a bit surprised that I even have to put a timer in my ViewModel, since it seems like that is a function of INotifyPropertyChanged - it seems redundant, but it was the only way I could get the XAML UI to constantly (once per second) reflect the state of my model. So it would be interesting to hear from anyone who has taken this approach whether you encountered any disadvantages down the road, e.g. with threading or performance.

    The following code will work if you just copy the XAML and code-behind into a new WPF project.

    XAML:

        <Window x:Class="TestMvvm73892.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:TestMvvm73892"
                Title="Window1" Height="300" Width="300">
            <Window.Resources>
                <ObjectDataProvider x:Key="DataSourceCustomer"
                                    ObjectType="{x:Type local:CustomerViewModel}"
                                    MethodName="GetCustomerViewModel"/>
            </Window.Resources>
            <DockPanel DataContext="{StaticResource DataSourceCustomer}">
                <StackPanel DockPanel.Dock="Top" Orientation="Horizontal">
                    <TextBlock Text="{Binding Path=FirstName}"/>
                    <TextBlock Text=" "/>
                    <TextBlock Text="{Binding Path=LastName}"/>
                </StackPanel>
                <StackPanel DockPanel.Dock="Top" Orientation="Horizontal">
                    <TextBlock Text="{Binding Path=TimeOfMostRecentActivity}"/>
                </StackPanel>
            </DockPanel>
        </Window>

    Code behind:

        using System;
        using System.Windows;
        using System.ComponentModel;
        using System.Threading;

        namespace TestMvvm73892
        {
            public partial class Window1 : Window
            {
                public Window1()
                {
                    InitializeComponent();
                }
            }

            //view model
            public class CustomerViewModel : INotifyPropertyChanged
            {
                private string _firstName;
                private string _lastName;
                private DateTime _timeOfMostRecentActivity;
                private Timer _timer;

                public string FirstName
                {
                    get { return _firstName; }
                    set { _firstName = value; this.RaisePropertyChanged("FirstName"); }
                }

                public string LastName
                {
                    get { return _lastName; }
                    set { _lastName = value; this.RaisePropertyChanged("LastName"); }
                }

                public DateTime TimeOfMostRecentActivity
                {
                    get { return _timeOfMostRecentActivity; }
                    set { _timeOfMostRecentActivity = value; this.RaisePropertyChanged("TimeOfMostRecentActivity"); }
                }

                public CustomerViewModel()
                {
                    _timer = new Timer(CheckForChangesInModel, null, 0, 1000);
                }

                private void CheckForChangesInModel(object state)
                {
                    Customer currentCustomer = CustomerViewModel.GetCurrentCustomer();
                    MapFieldsFromModeltoViewModel(currentCustomer, this);
                }

                public static CustomerViewModel GetCustomerViewModel()
                {
                    CustomerViewModel customerViewModel = new CustomerViewModel();
                    Customer currentCustomer = CustomerViewModel.GetCurrentCustomer();
                    MapFieldsFromModeltoViewModel(currentCustomer, customerViewModel);
                    return customerViewModel;
                }

                public static void MapFieldsFromModeltoViewModel(Customer model, CustomerViewModel viewModel)
                {
                    viewModel.FirstName = model.FirstName;
                    viewModel.LastName = model.LastName;
                    viewModel.TimeOfMostRecentActivity = model.TimeOfMostRecentActivity;
                }

                public static Customer GetCurrentCustomer()
                {
                    return Customer.GetCurrentCustomer();
                }

                //INotifyPropertyChanged implementation
                public event PropertyChangedEventHandler PropertyChanged;

                private void RaisePropertyChanged(string property)
                {
                    if (PropertyChanged != null)
                    {
                        PropertyChanged(this, new PropertyChangedEventArgs(property));
                    }
                }
            }

            //model
            public class Customer
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
                public DateTime TimeOfMostRecentActivity { get; set; }

                public static Customer GetCurrentCustomer()
                {
                    return new Customer { FirstName = "Jim", LastName = "Smith", TimeOfMostRecentActivity = DateTime.Now };
                }
            }
        }

    Read the article

  • If I prefer semantic naming, then shouldn't I avoid CSS frameworks and the grid approach?

    - by metal-gear-solid
    If I prefer semantic naming, should I avoid CSS frameworks and the grid approach? Which approach is better, grid or freehand? Can any CSS framework really save time and still produce semantic code, even for an experienced CSS developer? Many CSS frameworks are popular for simple PSD-to-XHTML/CSS conversion and in WordPress/Drupal/Joomla theme development. Can we make XHTML/CSS development as fast as it is with CSS grid frameworks, but without using them? What makes development with CSS frameworks faster that we can't do without frameworks?

    Read the article

  • Visual Studio: What approach do you use to 'template' plumbing for similar projects?

    - by Sosh
    When building ASP.NET projects there is a certain amount of boilerplate, or plumbing, that needs to be done, and it is often identical across projects. This is especially the case with MVC and ALT.NET approaches. [I'm thinking of things such as: IoC, ORM, solution structure (projects), session management, user management, i18n, etc.]

    I would like to know what approach you find best for 'reusing' this plumbing across projects:

    - Have a 'master solution' which you duplicate and rename somehow? (I'm using this to a degree at the moment, but it's fairly messy. I'd be interested in how people do this 'better'.)
    - Mainly rely on shared library projects? (I find this appropriate for some things, but too restrictive for things that have to be customised.)
    - Code generation tools, such as T4? (Similar to the approach used by SharpArchitecture - I have not tried this myself.)
    - Something else?

    Thanks for sharing your experiences!

    Read the article

  • What is the better approach to find if a given set is a perfect subset of a set - If given subset is

    - by Microkernel
    Hi guys, what is the best approach to find if a given (unsorted) set is a perfect subset of a main set? I have to do some validation in my program where I have to compare the client's request set with the registered internal capability set. I thought of having the internal capability set sorted (it will not change once registered) and doing a binary search for each element in the client's request set. Is that the best I can get? I suspect there might be a better approach. Any ideas? Regards, Microkernel
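
    For comparison, a hash-based lookup does the same check in expected O(n) rather than the O(n log n) of sort-plus-binary-search; a small C++ sketch (names and element type are illustrative):

        #include <iostream>
        #include <unordered_set>
        #include <vector>

        // true if every element of request is present in capability
        bool isSubset(const std::vector<int>& capability, const std::vector<int>& request) {
            std::unordered_set<int> lookup(capability.begin(), capability.end());
            for (int item : request)
                if (lookup.count(item) == 0) return false;
            return true;
        }

        int main() {
            std::vector<int> capability = {1, 3, 5, 7, 9};
            std::vector<int> request = {3, 7};
            std::cout << (isSubset(capability, request) ? "subset" : "not a subset") << "\n";
            return 0;
        }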

    Read the article

  • Working effectively with unit tests / Anyone tried the in-assembly approach?

    - by CodingCrapper
    I'm trying to re-introduce unit testing into my team, as our current coverage is very poor. Our system is quite large: 40+ projects/assemblies. We currently use a project named [SystemName].Test.csproj where all the test code is dumped and organised to represent the namespaces using folders. This approach is not very scalable and makes it difficult to find tests.

    I've been thinking about adding a Tests folder to each project; this would put the unit tests "in the developers' faces" and make them easy to find. The downside is that the production release code would contain references to NUnit and NMock, as well as the test code and test data...

    Has anyone tried this approach? How is everyone else working with unit tests on large projects? Having a Tests project per "real" project/assembly would introduce too many new projects.

    Thanks in advance

    Read the article

  • Flushing stdin after every input - which approach is not buggy?

    - by Aseem Bansal
    While using scanf() in C there is always the problem of extra input lying in the input buffer. So I was looking for a function to call after every scanf call to remedy this problem. I used this, this, this and this to get these answers:

        // First approach
        scanf("%*[^\n]\n");

        // 2nd approach
        scanf("%*[^\n]%*c");

        // 3rd approach
        int c;
        while ((c = getchar()) != '\n' && c != EOF)
            ; /* discard */

    All three are working as far as I could find by hit-and-trial and going by the references. But before using any of these in all of my code, I wanted to know whether any of them have bugs.
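
    Of the three, only the getchar() loop also behaves sensibly at end-of-file; wrapped as a reusable helper it might look like this (a sketch, not a drop-in from any standard library):

        #include <stdio.h>

        /* discard whatever is left on the current input line */
        static void flush_line(void) {
            int c;
            while ((c = getchar()) != '\n' && c != EOF)
                ; /* discard */
        }

        int main(void) {
            int n;
            if (scanf("%d", &n) == 1)
                printf("read %d\n", n);
            flush_line(); /* e.g. after typing "42abc", the "abc" and newline are dropped */
            return 0;
        }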

    Read the article

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications, I am often asked to store users' passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten-password link, walk them through it over the phone, etc.). When I can, I fight bitterly against this practice, and I do a lot of 'extra' programming to make password resets and administrative assistance possible without storing the actual password.

    When I can't fight it (or can't win), I always encode the password in some way so that it at least isn't stored as plaintext in the database - though I am aware that if my DB gets hacked it won't take much for the culprit to crack the passwords as well, so that makes me uncomfortable.

    In a perfect world folks would update passwords frequently and not duplicate them across many different sites - unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don't want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood, even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single 'best practice' when you have to store them? In almost all cases I am using PHP and MySQL, if that makes any difference in the way I should handle the specifics.

    Additional Information for Bounty: I want to clarify that I know this is not something you want to have to do, and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take this approach. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.

    Thanks to Everyone: This has been a fun question with lots of debate, and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords), but also makes it possible for the user base I specified to log into a system without the major drawbacks I have found in normal password recovery. As always, there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one - all the rest got a +1. Thanks everyone!

    Read the article

  • Is a Multi-DAL Approach the way to go here?

    - by Krisc
    Working on the data access / model layer in this little MVC2 project and trying to think things out for future projects.

    I have a database with some basic tables, and I have classes in the model layer that represent them. I obviously need something to connect the two. The easiest is to provide some sort of 'provider' that can run operations on the database and return objects. But this is for a website that would potentially be used "a lot" (I know, very general), so I want to cache results from the data layer and keep the cache updated as new data is generated.

    This question deals with how best to approach this problem of dual DALs: one that returns cached data when possible and goes to the data layer when there is a cache miss. But more importantly, how to integrate the core provider (the thing that goes to the database) with the caching layer, so that it too can rely on cached objects rather than creating new ones.

    Right now I have the following interfaces. IDataProvider is used to reach the database; it doesn't concern itself with the meaning of the objects it produces, but simply the way to produce them:

        interface IDataProvider {
            // Select, Update, Create, et cetera access
            IEnumerable<Entry> GetEntries();
            Entry GetEntryById(int id);
        }

    IDataManager is a layer that sits on top of the IDataProvider layer and manages the cache:

        interface IDataManager : IDataProvider {
            void ClearCache();
        }

    Note that in practice the IDataManager implementation will have useful helper functions to add objects to their related cache stores. (In the future I may define other functions on the interface.)

    I guess what I am looking for is the best way to approach a loop back from the IDataProvider implementations so that they can access the cache. Or is a different approach entirely in order? I am not very interested in 3rd-party products at the moment, as I am interested in the design of these things much more than this specific implementation.

    Edit: I realize the title may be a bit misleading. I apologize for that... not sure what to call this question.
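
    One way to avoid the loop-back entirely is to make the cache a decorator: the IDataManager implementation wraps any IDataProvider, so the database-facing provider stays cache-unaware. A sketch against the interfaces above (Entry's Id property and the eviction policy are stand-ins; thread safety is omitted):

        using System.Collections.Generic;

        public class CachingDataManager : IDataManager
        {
            private readonly IDataProvider _inner;   // the database-facing provider
            private readonly Dictionary<int, Entry> _byId = new Dictionary<int, Entry>();
            private List<Entry> _all;

            public CachingDataManager(IDataProvider inner) { _inner = inner; }

            public IEnumerable<Entry> GetEntries()
            {
                if (_all == null)   // cache miss: hit the database once, remember everything
                {
                    _all = new List<Entry>(_inner.GetEntries());
                    foreach (Entry e in _all) _byId[e.Id] = e;
                }
                return _all;
            }

            public Entry GetEntryById(int id)
            {
                Entry e;
                if (!_byId.TryGetValue(id, out e))
                {
                    e = _inner.GetEntryById(id);   // miss: fetch and remember
                    _byId[id] = e;
                }
                return e;
            }

            public void ClearCache()
            {
                _all = null;
                _byId.Clear();
            }
        }

    Consumers only ever see IDataManager, so the cache policy can change without the core provider ever knowing a cache exists.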

    Read the article

  • Forced to use too many hidden fields; looking for an alternative approach

    - by harisri786
    I am looking for a better approach to do this. I have around 70 to 80 hidden fields in my page. These hidden fields are initialized on the server side and then used on the client side for validations, calculations, etc., using JavaScript. I wanted to know if there is any alternative approach to using hidden fields in ASP.NET. I suspect this many hidden fields increase the page size and hence hurt the performance of my web page, and I want to do away with them. FYI: I am working on an ASP.NET web application.

    Read the article

  • Is this a good approach to execute a list of operations on a data structure in Python?

    - by Sridhar Iyer
    I have a dictionary of data; the key is the file name and the value is another dictionary of its attribute values. Now I'd like to pass this data structure to various functions, each of which runs some test on the attributes and returns True/False. One approach would be to call each function one by one explicitly from the main code. However, I can do something like this:

        # MYmodule.py
        class Mymodule:
            def MYfunc1(self):
                ...
            def MYfunc2(self):
                ...

        # main.py
        import MYmodule
        ...
        # fill the data structure
        ...
        # Now call all the functions in MYmodule one by one
        for funcs in dir(MYmodule):
            if funcs[:2] == 'MY':
                result = MYmodule.__dict__.get(funcs)(dataStructure)

    The advantage of this approach is that the implementation of the main code needn't change when I add more logic/tests to MYmodule. Is this a good way to solve the problem at hand? Are there better alternatives to this solution?

    Read the article

  • Microsoft SQL Server 2008 R2 Administration Cookbook - Book and eBook expected June 2011. Pre-order now!

    - by ssqa.net
    Over 85 practical recipes for administering a high-performance SQL Server 2008 R2 system. Book and eBook expected June 2011. Pre-order now! Multi-format orders get free access on PacktLib. This practical cookbook will show you the advanced administration techniques for managing and administering a scalable and high-performance SQL Server 2008 R2 system. It contains over 85 practical, task-based, and immediately useable recipes covering a wide range of advanced administration techniques for administering...(read more)

    Read the article

  • jqGrid: "All in One" approach with jqGridEdit class - how to set a composite primary key?

    - by Qualliarys
    Hello, how do you set a composite primary key for the "All in One" approach (grid defined in a JS file, and data using the jqGridEdit class in a PHP file)? To be clear: by a composite primary key of a table T, I mean a primary key defined over several fields belonging to T. Here is my test, but I get no data and cannot use the CRUD operations.

    In my JS file I have these lines of code:

        ...
        "colModel": [
            {"name":"index","index":"index","label":"index"},             // <= that's just the index of my table
            {"name":"user","index":"user","label":"user","key":true},    // <= a part of my composite primary key
            {"name":"pwd","index":"pwd","label":"pwd","key":true},       // <= a part of my composite primary key
            {"name":"state","index":"state","label":"state","key":true}, // <= a part of my composite primary key
            ...                                                           // <= and so on
        ],
        "url": "mygrid_crud.php",
        "datatype": "json",
        "jsonReader": {repeatitems:false},
        "editurl": "mygrid_crud.php",
        "prmNames": {"id":"index"}   // <= what do I need to write here???
        ...

    In my PHP file (mygrid_crud.php):

        ...
        $grid = new jqGridEdit($conn);
        $query = "SELECT * FROM mytable WHERE user='$user' AND pwd='$pwd' AND state='$state' ...";
        // <= is SELECT * OK, or do I need to specify all the fields I need?
        $grid->SelectCommand = $query;
        $grid->dataType = "json";
        $grid->table = 'mytable';
        $grid->setPrimaryKeyId('index');   // <= what do I need to write here???
        ...
        $grid->editGrid();

    Please tell me what is wrong, and what I need to do to set a composite primary key with this approach. Thank you so much for your responses.

    Read the article

  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to create a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model - as far as I can tell, they are all referring to the same concept.

    I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating both "ivory tower" architects and developers - the work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets.

    I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much on best practices in terms of CIM development. This is something I'm quite interested in getting some feedback on, since having a strong CIM is a prerequisite to SOA. Thanks for your help! KA

    Update: I've heard another strategy goes along with overall SOA implementation: get the business involved, and seek executive sponsorship. This would be part of the "top-down" effort.

    Read the article

  • c# - what approach can I use to extend a group of classes that include implemented methods? (see des

    - by Greg
    Hi, I want to create an extendible package that has Topology, Node & Relationship classes. The idea is that these base classes would have the various methods necessary for basic graph traversal etc. I would then like to be able to reuse this by extending the package. For example, the base requirements might see Relationship with a parentNode & childNode, and Topology with a List of Nodes, a List of Relationships, and methods like FindChildren(int depth). The usage would then be to extend these such that additional attributes for Node and Relationship could be added, etc.

    QUESTION - What would be the best approach to package & expose the base-level classes/methods? (It's kind of like a custom collection, but with multiple facets.) Would the following concepts come into play (a sketch follows below)?

    - Interfaces: would it be a good idea to have ITopology, INode, etc., or is this not required since the user would extend these classes anyway?
    - Abstract classes: would the base classes be abstract classes?
    - Custom generic collection: would some approach using this concept assist (but how would this work if there are the 3 different classes)?

    thanks
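
    As one reading of the "abstract + generic" combination: the base classes can be abstract and parameterised on the concrete node/relationship types, so traversal logic is written once and extensions keep their own types through the collections. A sketch with all names hypothetical:

        using System.Collections.Generic;

        public abstract class NodeBase { public int Id; }

        public abstract class RelationshipBase<TNode> where TNode : NodeBase
        {
            public TNode ParentNode;
            public TNode ChildNode;
        }

        public abstract class TopologyBase<TNode, TRelationship>
            where TNode : NodeBase
            where TRelationship : RelationshipBase<TNode>
        {
            public readonly List<TNode> Nodes = new List<TNode>();
            public readonly List<TRelationship> Relationships = new List<TRelationship>();

            // traversal written once against the type parameters
            public IEnumerable<TNode> FindChildren(TNode node, int depth)
            {
                if (depth == 0) yield break;
                foreach (TRelationship r in Relationships)
                {
                    if (r.ParentNode == node)
                    {
                        yield return r.ChildNode;
                        foreach (TNode child in FindChildren(r.ChildNode, depth - 1))
                            yield return child;
                    }
                }
            }
        }

        // an extension adds attributes without touching the traversal code
        public class PipeNode : NodeBase { public double Diameter; }
        public class PipeLink : RelationshipBase<PipeNode> { public double Length; }
        public class PipeNetwork : TopologyBase<PipeNode, PipeLink> { }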

    Read the article
