Search Results

Search found 1076 results on 44 pages for 'simplify'.

Page 29 of 44

  • weak or strong for IBOutlet and other properties

    - by Piero
    I have switched my project to ARC, and I don't understand whether I have to use strong or weak for IBOutlets. Xcode does this: in Interface Builder, if I create a UILabel, for example, and connect it with the assistant editor to my ViewController, it creates this:

        @property (nonatomic, strong) UILabel *aLabel;

    It uses strong. However, I read a tutorial on the RayWenderlich website that says this: "But for these two particular properties I have other plans. Instead of strong, we will declare them as weak.

        @property (nonatomic, weak) IBOutlet UITableView *tableView;
        @property (nonatomic, weak) IBOutlet UISearchBar *searchBar;

    Weak is the recommended relationship for all outlet properties. These view objects are already part of the view controller's view hierarchy and don't need to be retained elsewhere. The big advantage of declaring your outlets weak is that it saves you time writing the viewDidUnload method. Currently our viewDidUnload looks like this:

        - (void)viewDidUnload {
            [super viewDidUnload];
            self.tableView = nil;
            self.searchBar = nil;
            soundEffect = nil;
        }

    You can now simplify it to the following:

        - (void)viewDidUnload {
            [super viewDidUnload];
            soundEffect = nil;
        }

    So the tutorial says to use weak instead of strong and to remove the set-to-nil calls in viewDidUnload, while Xcode uses strong and uses self... = nil in viewDidUnload. My question is: when do I have to use strong, and when weak? I also want to use iOS 4 as a deployment target, so when do I have to use unsafe_unretained? Can anyone explain, with a small example, when to use strong, weak, and unsafe_unretained under ARC?

  • REST API Best practice: How to accept as input a list of parameter values

    - by whatupwilly
    Hi all, we are launching a new REST API and I wanted some community input on best practices for how we should format input parameters. Right now, our API is very JSON-centric (it only returns JSON). Whether we want/need to return XML is a separate issue. Because our output is JSON-centric, we have been going down a path where our inputs are also a bit JSON-centric, and I've been thinking that may be convenient for some but weird in general. For example, to get a few product details where multiple products can be pulled at once, we currently have:

        http://our.api.com/Product?id=["101404","7267261"]

    Should we simplify this to:

        http://our.api.com/Product?id=101404,7267261

    Or is having JSON input handy? More of a pain? We may want to accept both styles, but does that flexibility actually cause more confusion and headaches (maintainability, documentation, etc.)? A more complex case is when we want to offer richer inputs. For example, if we want to allow multiple filters on search:

        http://our.api.com/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]}

    We don't necessarily want to put the filter types (e.g. productType and color) in as request parameter names, like this:

        http://our.api.com/Search?term=pumas&productType=["Clothing","Bags"]&color=["Black","Red"]

    because we wanted to group all filter input together. In the end, does this really matter? There are so many JSON utils out there that the input format may just not matter that much. I know our JavaScript clients making AJAX calls to the API may appreciate JSON inputs to make their lives easier. Thanks, Will
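    Accepting both styles is cheap to prototype. A minimal sketch in C#, assuming a helper behind an ASP.NET-style endpoint (all names hypothetical; a real service would parse the bracketed form with a proper JSON library rather than string-stripping):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class IdParser
        {
            // Accepts either id=101404,7267261 or id=["101404","7267261"].
            public static List<string> ParseIds(string raw)
            {
                if (string.IsNullOrEmpty(raw))
                    return new List<string>();

                raw = raw.Trim();

                // JSON-array style: strip brackets and quotes, leaving a comma list.
                if (raw.StartsWith("[") && raw.EndsWith("]"))
                    raw = raw.Substring(1, raw.Length - 2).Replace("\"", "");

                // Comma-separated style (also handles the stripped JSON form).
                return raw.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                          .Select(id => id.Trim())
                          .ToList();
            }
        }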

  • Extracting bool from istream in a templated function

    - by Thomas Matthews
    I'm converting my field-class read functions into one template function. I have field classes for int, unsigned int, long, and unsigned long. These all use the same method for extracting a value from an istringstream (only the types change):

        template <typename Value_Type>
        Value_Type Extract_Value(const std::string& input_string)
        {
            std::istringstream m_string_stream;
            m_string_stream.str(input_string);
            m_string_stream.clear();
            Value_Type value;
            m_string_stream >> value;
            return value;
        }

    The tricky part is the bool (Boolean) type. There are many textual representations for a Boolean: 0, 1, T, F, TRUE, FALSE, and all the case-insensitive combinations. Here are the questions: What does the C++ standard say is valid data for extracting a bool using the stream extraction operator? Since a Boolean can be represented by text, does this involve locales? Is this platform dependent? I would like to simplify my code by not writing my own handler for bool input. I am using MS Visual Studio 2008 (version 9), C++, and Windows XP and Vista.

  • Some general C questions.

    - by b-gen-jack-o-neill
    Hello. I am trying to fully understand the process from writing code in some language to execution by the OS. In my case, the language would be C and the OS would be Windows. So far I have read many different articles, but I am not sure whether I understand the process correctly, and I would like to ask if you know some good articles on the subjects I couldn't find. So, what I think I know about C (and basically other languages):

    The C compiler itself handles only data types, basic math operations, pointer operations, and work with functions. By work with functions I mean how to pass arguments to them and how to get output from them. During compilation, a function call is replaced by pushing the arguments onto the stack, and then, if the function is not inline, its call is replaced by a symbol for the linker. The linker then finds the function definition and replaces the symbol with a jump address to that function (and of course a jump back to the program afterwards).

    If the above is generally true and I've got it right: where in the final .exe file does the linker actually save the functions? After the main() function? And what creates the .exe header, the compiler or the linker?

    Now, the additional capabilities of C, today known as the C standard library, are a set of functions, and declarations of them, that other programmers wrote to extend and simplify the use of the C language. But these functions, like printf(), were (or could be?) written in a different language, or assembler. And here comes my next question: can, for example, the printf() function be written in pure C without the use of assembler?

    I know this is quite a big question, but I mostly want to know whether I am right or not. And trust me, I have read a lot of articles on the web, and I would not ask you if I could find this information together in one place, in one article. Instead I must gather it piece by piece, so I am not sure if I am right. Thanks.

  • Can Django admin handle a one-to-many relationship via related_name?

    - by Mat
    The Django admin happily supports many-to-one and many-to-many relationships through an HTML <SELECT> form field, allowing selection of one or many options respectively. There's even a nice JavaScript filter_horizontal widget to help. I'm trying to do the same from the one-to-many side through related_name. I don't see how it's much different from many-to-many as far as displaying it in the form is concerned; I just need a multi-select SELECT list. But I cannot simply add the related_name value to my ModelAdmin-derived field list. Does Django support one-to-many fields in this way? My Django model looks something like this (contrived to simplify the example):

        class Person(models.Model):
            ...
            manager = models.ForeignKey('self',
                related_name='staff',
                null=True,
                blank=True,
            )

    From the Person admin page, I can easily get a <SELECT> list showing all possible staff from which to choose this person's manager. I also want to display a multiple-selection <SELECT> list of all the manager's staff. I don't want to use inlines, as I don't want to edit the subordinates' details; I do want to be able to add/remove people from the list. (I'm trying to use django-ajax-selects to replace the SELECT widget, but that's by-the-by.)

  • Parsing a complicated array with jQuery getJSON

    - by Ozaki
    TLDR: I started with an earlier question, simplified it after I got some of it working, and am continuing it here. I need to (1) GET the JSON array, (2) format it correctly, and (3) place each item within the array in the corresponding DIV.

    This is a follow-up from that question, to simplify and continue. I need to read some complicated JSON data from an array. From it I need each title and its value. The reason is that I will know what the array is called, but not the data being generated inside it. Let's say the new array looks as follows:

        {"Days": ["Sunday":"10.00",
                  "Monday":"12.00",
                  "Tuesday":"09.00",
                  "Wednesday":"10.00",
                  "Thursday":"02.00",
                  "Friday":"05.00",
                  "Saturday":"08.00"]
        }

    I would need it to get Sunday & 10.00, and the same for all of the others. My code to parse this at the moment is as follows:

        $.getJSON(url, function(data) {
            $.each(data, function(i, item) {
                $('.testfield').append('<p>' + item + '</p>');
            });
        });

    Without the added times on each date, it will parse the data as follows: Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday. With the times added to the days in the array, Firebug no longer recognizes it as a JSON string, so I am guessing I have formatted something wrong here. Also, I need each day and time to be on a new line. I thought that

        $('.testfield').append('<p>' + item + '</p>' + '<br />');

    would have put each one on a new line, but that did not work. How do I get each day or item onto a new line? How do I get getJSON to parse the data correctly, day and its value, into a DIV?

  • Fulltext search on many tables

    - by Rob
    I have three tables, all of which have a column with a fulltext index. The user will enter search terms into a single text box, and then all three tables will be searched. This is better explained with an example:

        documents
            doc_id
            name FULLTEXT

        table2
            id
            doc_id
            a_field FULLTEXT

        table3
            id
            doc_id
            another_field FULLTEXT

    (I realise this looks stupid, but that's because I've removed all the other fields and tables to simplify it.) So basically I want to do a fulltext search on name, a_field, and another_field, and then show the results as a list of documents, preferably with what caused each document to be found; e.g. if another_field matched, I would display what another_field is. I began working on a system whereby three fulltext search queries are performed and the results inserted into a table with a structure like:

        search_results
            table_name
            row_id
            score

    (This could later be made to cache results for a few days, with e.g. a hash of the search terms.) This idea has two problems. The first is that the same document can appear in the search results up to three times with different scores. Instead, if the search term is matched in two tables, the document should have one result, but a higher score. The second is that parsing the results is difficult. I want to display a list of documents, but I don't immediately know the doc_id without a join of some kind; however, the table to join to depends on the table_name column, and I'm not sure how to accomplish that. Wanting to search multiple related tables like this must be a common thing, so I guess what I'm asking is: am I approaching this in the right way? Can someone tell me the best way of doing it, please?

  • Java - Save video stream from Socket to File

    - by Alex
    I use my Android application for streaming video from the phone camera to my PC server, and I need to save it into a file on the HDD. The file is created and the stream is successfully saved, but the resulting file cannot be played with any video player (GOM, KMP, Windows Media Player, VLC, etc.): no picture, no sound, only playback errors. I tested my Android application on the phone, and in that case the captured video is successfully stored on the phone's SD card and plays without errors after transferring it to the PC, so my code is correct. In the end, I realized that the problem is the video container: the data is streamed from the phone in MP4 format and stored in *.mp4 files on the PC, and in this case the file may be incorrect for playback with video players. Can anyone suggest how to correctly save streaming video to a file? Here is my code that processes and stores the stream data (error handling omitted to simplify):

        // getOutputMediaFile() returns a new File object
        DataInputStream in = new DataInputStream(server.getInputStream());
        FileOutputStream videoFile = new FileOutputStream(getOutputMediaFile());
        int len;
        byte buffer[] = new byte[8192];
        while ((len = in.read(buffer)) != -1) {
            videoFile.write(buffer, 0, len);
        }
        videoFile.close();
        server.close();

    Also, I would appreciate it if someone could talk about the possible "pitfalls" in dealing with saving media streams. Thank you, I hope for your help! Alex.

  • Implementing the ‘defer’ statement from Go in Objective-C?

    - by zoul
    Hello! Today I read about the defer statement in the Go language: "A defer statement pushes a function call onto a list. The list of saved calls is executed after the surrounding function returns. Defer is commonly used to simplify functions that perform various clean-up actions." I thought it would be fun to implement something like this in Objective-C. Do you have some idea how to do it? I thought about dispatch finalizers, autoreleased objects, and C++ destructors.

    Autoreleased objects:

        @interface Defer : NSObject {
            dispatch_block_t block;
        }
        + (id) withCode: (dispatch_block_t) block;
        @end

        @implementation Defer
        - (void) dealloc {
            block();
            [super dealloc];
        }
        @end

        #define defer(__x) [Defer withCode:^{__x}]

        - (void) function {
            defer(NSLog(@"Done"));
            …
        }

    Autoreleased objects seem like the only solution that would last at least to the end of the function, as the other solutions would trigger when the current scope ends. On the other hand, they could stay in memory much longer, which would be asking for trouble.

    Dispatch finalizers were my first thought, because blocks live on the stack and therefore I could easily make something execute when the stack unrolls. But after a peek in the documentation it doesn't look like I can attach a simple "destructor" function to a block, can I?

    C++ destructors are about the same thing: I would create a stack-based object with a block to be executed when the destructor runs. This would have the ugly disadvantage of turning plain .m files into Objective-C++.

    I don't really intend to use this stuff in production; I'm just interested in various solutions. Can you come up with something that works, without obvious disadvantages? Both scope-based and function-based solutions would be interesting.

  • Write binary data as a response in an ASP.NET MVC web control

    - by Lou Franco
    I am trying to get a control that I wrote for ASP.NET to work in an ASP.NET MVC environment. Normally, the control does the usual thing, and that works fine. Sometimes, though, it needs to respond by clearing the response, writing an image to the response stream, and changing the content type. When you do this, you get the exception "OutputStream is not available when a custom TextWriter is used". If I were a page or a controller, I see how I could create custom responses with binary data, but I can't see how to do this from inside a control's render functions. To simplify it, imagine I want to make a web control that renders to:

        <img src="pageThatControlIsOn?controlImage">

    and I know how to look at incoming requests and recognize query strings that should be routed to the control. Now the control is supposed to respond with a generated image (content type image/png, plus the encoded image). In ASP.NET, we do:

        Response.Clear();
        Response.OutputStream.Write(thePngData, 0, thePngData.Length); // this throws in MVC
        // set the content type, etc.
        Response.End();

    How am I supposed to do that in an ASP.NET MVC control?
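    One route worth noting: in MVC the generated image can be served by a small controller action rather than from inside the control's render path, and the control's <img> tag can point at that action. A minimal sketch, assuming standard System.Web.Mvc types (the action and helper names are hypothetical):

        using System.Web.Mvc;

        public class ControlImageController : Controller
        {
            // Responds with the PNG bytes and the image/png content type.
            public ActionResult Render(int id)
            {
                byte[] thePngData = GenerateImage(id); // hypothetical image generator
                return File(thePngData, "image/png");  // FileContentResult sets headers and body
            }

            private byte[] GenerateImage(int id)
            {
                // ... produce the encoded PNG here ...
                return new byte[0];
            }
        }

    The control would then render something like <img src="/ControlImage/Render/42"> instead of pointing back at its own page.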

  • Design to distribute work when generating task-oriented input for a legacy DOS application?

    - by TheDeeno
    I'm attempting to automate a really old DOS application. I've decided the best way to do this is via input redirection. The legacy app (menu driven) has many tasks within tasks, with branching logic. In order to easily understand and reuse the input for these tasks, I'd like to break them into bite-size pieces. Since I'll need to start a fresh app on each run, repeating a context to consume a bit might be messy. I'd like to create an object model that:

    - allows me to concentrate on the task at hand
    - allows me to reuse common tasks from different start points
    - prevents me from calling a task from the wrong start point

    To be more explicit, given the following task hierarchy:

        START
          A
            A1
              A1a
              A1b
            A2
              A2a
          B
            B1
              B1a

    I'd like an object model that lets me generate an input file for task "A1b" by using building blocks like:

        START -> do_A, do_A1, do_A1b

    but prevents me from:

        START -> do_A1   // because I'm assuming a different call chain from above

    This will help me write "do_A1b" because I can always assume the same starting context, and it will simplify writing "do_A1a" because it has THE SAME starting context. What patterns will help me out here? I'm using Ruby at the moment, so if dynamic language features can help, I'm game. (One candidate pattern is sketched below.)
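    One pattern that fits is a typestate-style fluent interface: each task method returns a context object that only exposes the tasks that are legal next, so an illegal chain fails at the call site. A minimal sketch, written in C# to make the type checking explicit (the same shape ports to Ruby with method-bearing context objects; all names here are hypothetical):

        using System.Text;

        // Accumulates the keystrokes that will be redirected to the DOS app.
        public class Script
        {
            private readonly StringBuilder input = new StringBuilder();
            public void Emit(string keys) { input.AppendLine(keys); }
            public override string ToString() { return input.ToString(); }
        }

        public class StartContext
        {
            private readonly Script script;
            public StartContext(Script script) { this.script = script; }
            public AContext DoA() { script.Emit("A"); return new AContext(script); }
        }

        public class AContext
        {
            private readonly Script script;
            public AContext(Script script) { this.script = script; }
            public A1Context DoA1() { script.Emit("1"); return new A1Context(script); }
        }

        public class A1Context
        {
            private readonly Script script;
            public A1Context(Script script) { this.script = script; }
            public void DoA1b() { script.Emit("b"); }
        }

    Writing new StartContext(s).DoA().DoA1().DoA1b() compiles, while new StartContext(s).DoA1() does not, which is exactly the "wrong start point" protection described above.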

  • SQL Server Full Text Search with CONTAINSTABLE is very slow when used in a JOIN!

    - by Bob
    Hello, I am using SQL Server 2008 full text search, and I am having serious performance issues depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records, and there is a covering index on table1 with all the fields in the where clause; I tried to simplify the statements, so forgive me if there are syntax issues):

    Scenario 1:

        select * from table1 as t1
        where t1.field1 = 90 and t1.field2 = 'something'
          and exists (select top 1 * from containstable(table1, *, 'something') as t2
                      where t2.[key] = t1.id)

    Result: 10 seconds (very slow).

    Scenario 2:

        select * from table1 as t1
        join containstable(table1, *, 'something') as t2 on t2.[key] = t1.id
        where t1.field1 = 90 and t1.field2 = 'something'

    Result: 10 seconds (very slow).

    Scenario 3:

        declare @tbl table(id uniqueidentifier primary key)
        insert into @tbl select [key] from containstable(table1, *, 'something')

        select * from table1 as t1
        where t1.field1 = 90 and t1.field2 = 'something'
          and exists (select id from @tbl as tbl where id = t1.id)

    Result: a fraction of a second (super fast).

    Bottom line: it seems that if I use CONTAINSTABLE in any kind of join or where-clause condition of a select statement that also has other conditions, the performance is really bad. In addition, if you look at the profiler, the number of reads from the database goes through the roof. But if I first do the full text search and put the results in a table variable, everything goes super fast, and the number of reads is also much lower. It seems that in the "bad" scenarios it somehow gets stuck in a loop, which causes it to read from the database many times, but of course I don't understand why. Now, the first question is: why is that happening? And the second question is: how scalable are table variables? What if the search results in tens of thousands of records; is it still going to be fast? Any ideas? Thanks

  • C#: Most efficient way to combine two objects

    - by Dested
    I have two objects, each of which can be an int, float, bool, or string. I need to perform an addition on these two objects with the same result that C# itself would produce. For instance, 1 + "Foo" would equal the string "1Foo", 2 + 2.5 would equal the float 4.5, and 3 + 3 would equal the int 6. Currently I am using the code below, but it seems like incredible overkill. Can anyone simplify this, or point me to some way to do it efficiently?

        private object Combine(object o, object o1)
        {
            float left = 0;
            float right = 0;
            bool isInt = false;
            string l = null;
            string r = null;

            if (o is int) { left = (int)o; isInt = true; }
            else if (o is float) { left = (float)o; }
            else if (o is bool) { l = o.ToString(); }
            else { l = (string)o; }

            if (o1 is int) { right = (int)o1; }
            else if (o1 is float) { right = (float)o1; isInt = false; }
            else if (o1 is bool) { r = o1.ToString(); isInt = false; }
            else { r = (string)o1; isInt = false; }

            object rr;
            if (l == null)
            {
                if (r == null) { rr = left + right; }
                else { rr = left + r; }
            }
            else
            {
                if (r == null) { rr = l + right; }
                else { rr = l + r; }
            }

            if (isInt) { return Convert.ToInt32(rr); }
            return rr;
        }
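    For comparison, a tighter version with the same observable behavior is sketched below: two ints add as int, int/float mixes add as float, and anything involving a bool or string falls back to concatenation, which is what C# itself produces for mixed operands. This is a sketch, not a drop-in replacement; null operands and other numeric types are not handled, just as in the original.

        using System;

        static class Combiner
        {
            public static object Combine(object a, object b)
            {
                if (a is int && b is int)
                    return (int)a + (int)b;              // 3 + 3 == 6 (int)

                if ((a is int || a is float) && (b is int || b is float))
                    return Convert.ToSingle(a) + Convert.ToSingle(b); // 2 + 2.5f == 4.5f (float)

                // bool and string operands concatenate, e.g. 1 + "Foo" == "1Foo".
                return a.ToString() + b.ToString();
            }
        }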

  • WordPress Custom Posts

    - by codedude
    I'm having a serious problem with custom post types in WordPress. I made a post type called "Sermons". I then added a meta box with some text fields, and I echo the results out onto the web page. But here's my problem: the first time you add a Sermon, it works fine and the meta box fields output correctly. However, when I try to edit one of the meta box fields and do not edit the others (say, after I closed the web browser I remembered that I needed to add something to one of the fields), the fields that were not edited become blank and the content in them is erased... not good at all.

    So, just to simplify: the first time the meta boxes are filled, they work fine. However, when editing the post a second time, the fields that are not filled out, but left as they were, become blank upon saving the post. Help... I'm not too much of a developer, so I'm not exactly sure how to fix this (it was hard enough getting the meta fields to work). If you want the actual code used, please tell me and I will add it somewhere.

  • C# Reflection StackTrace get value

    - by John
    I'm making pretty heavy use of reflection in my current project to greatly simplify communication between my controllers and the WCF services. What I want to do now is to obtain a value from the Session within an object that has no direct access to HttpSessionStateBase (i.e., not a controller), for example a ViewModel. I could pass it in, or pass a reference to it, etc., but that is not optimal in my situation. Since everything comes from a Controller at some point in my scenario, I can do the following to walk the stack to the controller where the call originated; pretty simple stuff:

        var trace = new System.Diagnostics.StackTrace();
        foreach (var frame in trace.GetFrames())
        {
            var type = frame.GetMethod().DeclaringType;
            var prop = type.GetProperty("Session");
            if (prop != null)
            {
                // not sure about this part...
                var value = prop.GetValue(type, null);
                break;
            }
        }

    The trouble is that I can't seem to work out how to get the "instance" of the controller, or its Session property, so that I can read from it.
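    A caveat that may save some time here: System.Diagnostics.StackFrame only exposes the MethodBase of each frame, not the receiver ("this") of the call, so the controller instance genuinely cannot be recovered from the stack trace alone. In ASP.NET the ambient session is reachable without it; a minimal sketch of that route (class and key names hypothetical):

        using System.Web;

        static class SessionReader
        {
            public static object Get(string key)
            {
                var context = HttpContext.Current;  // null outside a web request
                if (context == null || context.Session == null)
                    return null;
                return context.Session[key];        // e.g. Get("CurrentUserId")
            }
        }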

  • Which persistent & lightweight queue messaging for cross-domain (> 2) data exchange with Rails integration?

    - by Erwan
    Hi all, I'm looking for the right messaging system for my needs. Can you help me? For now there won't be a huge amount of data to process, but I don't want to be limited later. The machines are not just web servers, so the messaging tool should be lightweight, even if processing is not very fast. When some data changes on a server, all servers should receive the information and process it locally (should I create one channel per server on each of them?). The frontend is written in Rails, so in order to simplify development it is important that there is a gem/plugin to manage communications and the data sent. At this time, RabbitMQ + Workling seems to fit my needs. Could this be a right choice? ActiveMQ makes me afraid because of Java (I really don't know Java very well, but it seems to me to be a big CPU consumer). Others don't seem to be as mature as these. There might be a lot of development using this kind of technology, so I can't go down the wrong way! Thank you for your help.

  • Mercurial Hg Clone fails on C# project with GUID

    - by AnneTheAgile
    UPDATE: In trying to replicate this problem one more time to answer your questions, I could not! I can only conclude that my initial setup of Mercurial was problematic, and/or possibly I was trying to check in a build that failed compilation before the check-in. Sigh! Thank you so very much for your help. I gave credit for the help on how to do a script. I need to try that for general purposes.

    Hi all, I hope you can help me :). I am trying to see if Mercurial would be a good DVCS for my project at work, and I'm surely a newbie to many things. We have a fairly large codebase in C# (.NET 3.0, not 3.5, on Windows XP) and it utilizes the GUID feature. I confess to knowing little about how or why we use the GUID, but I do know that I cannot touch it. So, when I try hg clone, it fails unless I change the GUID in the cloned directory (i.e., create a new GUID in Visual Studio and then paste it over the old one). To me, this completely defeats the purpose and utility of quick, easy clones. It also makes difficult all the many workflows that require multiple clones. Is there a workaround, or is there something I'm doing wrong? How can I simplify and/or remove this problem? Would Bazaar make this easier? Thank you!

  • jQuery getJSON to external PHP page

    - by Pmarcoen
    I've been trying to make an AJAX request to an external server. I've learned so far that I need to use getJSON to do this, because of security reasons? Now, I can't seem to make a simple call to an external page. I've tried to simplify it down as much as I can, but it's still not working. I have two files, test.html and test.php. My test.html makes a call like this, to localhost for testing:

        $.getJSON("http://localhost/OutVoice/services/test.php", function(json) {
            alert("JSON Data: " + json);
        });

    and I want my test.php to return a simple 'test':

        $results = "test";
        echo json_encode($results);

    I'm probably making some incredible rookie mistake, but I can't seem to figure it out. Also, if this works, how can I send data to my test.php page, like you would do with test.php?id=15? The test.html page is calling the test.php page on localhost, in the same directory. I don't get any errors, just no alert.

  • PostgreSQL table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:

    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say 50 of them) has a service which checks the database and figures out whether it's time to start a new automated test on that machine. How should I organize my SQL table to store all this information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e., I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just go in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify replication? Thanks!

  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here). I have a many-to-many relationship schema that I'm having trouble building. The main problem is that I'm only working with primary and foreign keys (no varchars or enums, to simplify things), and the number of many-to-many relationships is not predictable and can increase at any time. I looked around at various questions and couldn't find something that directly addressed this issue. I split the problem in half, so I now have two one-to-many schemas. One is solved, but the other is giving me fits. Let's assume table FOO is a standard, boring table with a simple primary key; it's the "one" in the one-to-many relationship. Table BAR can relate to multiple keys of FOO, and the number of related keys is not known beforehand. An example: a query on FOO returns ids 3, 4, 5. BAR needs a unique key that relates to 3, 4, 5 (though there could be any number of ids returned). The usual join table does not work:

        FOO_BAR
        primary_key | foo_id | bar_id

    since FOO returns 3 unique keys, and here bar_id has a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id:

        FOO_TO_BAR
        primary_key | foo_id | bar_to_foo_id

        BAR_TO_FOO
        primary_key | foo_to_bar_id | bar_id

    What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.

  • How to stop my Firefox extension from interfering with other extensions?

    - by ccppjava
    Hi, I have tried very hard to make my extension as simple as possible; it now does not contain any skin/CSS, it just has a 'statusbar' in one single 'overlay'. The issue is that when it is installed, it hides the top three icons of the All-in-One Toolbar extension in my Firefox 3.6.3. On two other machines which do not have All-in-One Toolbar, it hides all the icons of the Web Developer toolbar!

    chrome.manifest:

        content stackoverflow content/
        content stackoverflow content/ contentaccessible=yes
        overlay chrome://browser/content/browser.xul chrome://stackoverflow/content/browser.xul
        locale stackoverflow en-US locale/en-US/

    browser.xul:

        <overlay id="dch-browser-overlay"
                 xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
          <script type="application/x-javascript"
                  src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"/>
          <script src="stackoverflow.js" />
          <statusbar id="status-bar">
            <statusbarpanel id="stackoverflow-status-bar-icon"
                            class="statusbarpanel-iconic"
                            src="chrome://stackoverflow/content/icon_small.png"
                            tooltiptext="&runstackoverflow;"
                            onclick="stackoverflow.run()" />
          </statusbar>
        </overlay>

    I have tried very hard to simplify the extension to find the reason, but failed. Any suggestions/ideas would be welcome. Thanks.

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: the Front End, which holds all the objects except tables, and the Back End, which holds the tables. The MSDN page links to the article "Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability", which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on a server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine. My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption, etc.? Thank you.

    EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g., when you are given an existing solution that uses the approach of both FE and BE on the server, assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a partial follow-up to my previous question, "From SQL Server to MS Access 2007".

  • Internal bug tracking tickets - Redmine, Trac, or JIRA

    - by Tai Squared
    I've been looking at setting up Redmine, Trac, or JIRA to track issues. I want my development team to be able to create internal tickets that are never seen by clients, while clients can create/edit tickets that are seen by the internal team. From the Trac documentation, you can set permissions to create or view tickets, but it doesn't seem to allow viewing only certain tickets. It may be possible with Trac's fine-grained permissions, but it doesn't appear so. The Redmine documentation mentions "Define your own roles and set their permissions in a click", but it doesn't appear to offer that level of granularity. From the JIRA documentation: "At the moment JIRA is only able to support security at a project level or issue level. Currently there is no field level security available." According to another question here, Redmine doesn't support internal tickets, so you would have to use multiple projects. I don't want a situation where I would have to create multiple projects (one internal, one external) and have the external tickets brought into the internal repository. It seems that this would lead to unnecessary overhead, and inevitably the projects wouldn't stay in sync. Is there any way with any of these products (possibly through a plug-in, if not in the core product itself) to specify these permissions, or to simplify having two projects with different users and permissions that must still share information?

  • Chrome extension login security with iframe

    - by Weaver
    I should note that I'm not a Chrome extension expert. However, I'm looking for some advice, or a high-level solution, to a security concern I have with my Chrome extension. I've searched quite a bit but can't seem to find a concrete answer.

    The situation: I have a Chrome extension that needs the user to log in to our backend server. However, it was decided for design reasons that the default Chrome popup balloon was undesirable, so I've used a modal dialog and jQuery to make a styled popup that is injected with content scripts. Hence, the popup is injected into the DOM of the page you are visiting.

    The problem: everything works; however, now that I need to implement login functionality, I've noticed a vulnerability. If the site we've injected our popup into knows the password field's ID, it could run a script to continuously monitor the password and username fields and store that data. Call me paranoid, but I see it as a risk. In fact, I wrote a mockup attack site that can correctly pull the user and password when they are entered into the given fields.

    My devised solution: I took a look at some other Chrome extensions, like Buffer, and noticed that what they do is load their popup from their website and embed an iframe which contains the popup; the popup interacts with the server inside the iframe. My understanding is that iframes are subject to the same-origin scripting policies that apply to other websites, but I may be mistaken. As such, would doing the same thing be secure?

    TLDR: To simplify, if I embedded an HTTPS login form from our server into a given DOM via a Chrome extension, are there security concerns about password sniffing? If this is not the best way to deal with Chrome extension logins, do you have suggestions for what is? Perhaps there is a way to declare text fields that JavaScript simply cannot interact with? Not too sure! Thank you so much for your time! I will happily clarify anything required.

  • RIA Services for transmitting a non-DB object graph

    - by Mike Gates
    I have been getting into RIA Services because I thought it would simplify dealing with the services layer of the web applications I wish to build. I see lots of examples out there showing how to create DomainService classes which expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs: objects that contain references to each other but don't have foreign keys. For example, say I want a service operation called GetFolderInformation(string pathToFolder). I want this to return a custom object called FolderInformation, structured as:

        FolderInformation
            string Name
            IEnumerable<FileInformation> Files

    I cannot get this to work, because it seems that RIA wants to deal with entities that have foreign-key relationships. Why? Why can't the serializer just see my object references and recreate them in the proxy on the other side? Data exists behind service layers that doesn't necessarily have foreign-key relationships, like folder/file for example.

    EDIT: I realized I hadn't asked my question! My question is: is there a way to do what I am trying to do?
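    For reference, the usual workaround is to treat the graph as a presentation model: give each type a synthetic [Key] (the full path works for folders and files), and spell out the parent/child link with [Association] plus [Include] so RIA will serialize the children. A minimal sketch under those assumptions, using WCF RIA Services data annotations (all member names hypothetical):

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.ServiceModel.DomainServices.Server;

        public class FileInformation
        {
            [Key]
            public string Path { get; set; }        // synthetic key: full file path
            public string FolderPath { get; set; }  // plays the "foreign key" role
            public string Name { get; set; }
        }

        public class FolderInformation
        {
            [Key]
            public string Path { get; set; }        // synthetic key: full folder path
            public string Name { get; set; }

            [Include]
            [Association("Folder_Files", "Path", "FolderPath")]
            public IEnumerable<FileInformation> Files { get; set; }
        }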
