01:00:17 #startmeeting ubuntu-friendly
01:00:17 Meeting started Wed Feb 6 01:00:17 2013 UTC. The chair is cprofitt. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
01:00:17 
01:00:17 Available commands: #accept #accepted #action #agree #agreed #chair #commands #endmeeting #endvote #halp #help #idea #info #link #lurk #meetingname #meetingtopic #nick #progress #rejected #replay #restrictlogs #save #startmeeting #subtopic #topic #unchair #undo #unlurk #vote #voters #votesrequired
01:00:35 #meetingtopic welcome back
01:00:46 hello all -- and welcome to the Ubuntu Friendly meeting
01:01:00 can everyone that is here for the meeting just let me know you are in the room
01:02:09 o/
01:02:12 o/
01:02:40 thanks for coming everyone
01:02:51 our agenda is:
01:02:59 o/
01:03:15 hexr, wiki cleanup, how to improve parts of the Ubuntu Friendly system
01:03:24 your questions and any advice/ideas you all have
01:03:49 baloons are you here?
01:03:56 #link https://wiki.ubuntu.com/UbuntuFriendly/Meetings
01:04:29 alright... I might have baloons take over if he comes in, but let me tell you a bit about hexr
01:04:51 #meetingtopic hexr
01:05:12 #link https://launchpad.net/hexr
01:06:30 hexr, from my understanding, will potentially make use of the same data Ubuntu Friendly does, but it will do so to see what hardware components have been tested
01:06:50 so it will seek to find out if a specific Nvidia card has been tested or a specific wireless card
01:07:06 and not look at systems, but components
01:07:21 #link https://launchpad.net/~hexr-dev/+members#active
01:07:36 the developers are listed on that page
01:08:24 I think knowing that could help Ubuntu Friendly add some more functionality to the site -- where people might be able to sort systems based on a desired video card or wireless card...
01:08:43 for now it is just something for us to know as we move forward
01:09:22 #topic wiki clean up
01:09:41 how many of you have had time to look at the wiki page for UF?
01:09:58 #link https://wiki.ubuntu.com/UbuntuFriendly/
01:10:53 would anyone be willing to take on reviewing the pages to ensure that they are up-to-date?
01:11:00 cprofitt, we can put some icons
01:11:04 cprofitt, I can do it
01:11:30 #action SergioMeneses will review the Ubuntu Friendly wiki
01:11:30 * meetingology SergioMeneses will review the Ubuntu Friendly wiki
01:11:34 thanks SergioMeneses
01:11:45 please feel free to ask for help from others as you do that
01:12:05 if anyone else knows of pages that need updates please let SergioMeneses or me know
01:12:14 cprofitt, ok
01:12:26 ok =)
01:12:36 #topic improvements
01:13:12 I know from personal experience we have some things to improve moving forward
01:13:21 phillw: did you want to discuss some of the thoughts you had?
01:13:56 anyone else?
01:14:28 #subtopic submissions
01:14:34 cprofitt: just to mention that you and the laptop team are quite similar and I hope there is a good flow of information between you both :)
01:14:47 phillw: I would hope so as well.
01:15:01 One of the things that got me involved was submissions
01:15:29 right now it is my understanding that submissions are not getting in to UF
01:15:35 but I am not sure why that is
01:15:40 phillw: hm, I thought the laptop testing team was for testing Ubuntu on laptops for upcoming releases, not reporting on how well supported laptops are in current releases
01:16:31 I totally agree with pleia2
01:17:10 pleia2: ergo, once a release is out, that information is available to what is then the 'current' release?
01:18:00 I would like to see two things done to improve in regards to submissions
01:18:05 phillw: but it would be results for when it was in development, so maybe the graphics card for that one laptop finally did end up working once the release happened, the results would be from when it wasn't working :\
01:18:15 for example
01:18:39 i see, I'll have a chat with the laptop team :)
01:18:49 pleia2: I agree..
the results from pre-release would not be included in Ubuntu Friendly
01:19:19 they're both valuable and needed projects, for sure, I just think they provide different data to different audiences
01:19:28 but the laptop testing team could, upon release, be a group of people who ran the tests for UF
01:19:36 cprofitt: indeed!
01:20:05 and from my understanding the database we are pulling from might actually be the same -- but with different criteria for what information is included
01:20:13 and also differences in how it gets displayed
01:20:21 it makes sense for both projects to test the same things
01:20:27 but I would need one of the developers to give me more information on that
01:20:52 I would like to see two things done to improve in regards to submissions
01:21:19 1. some system be put in place to make it easy for those submitting results to know that the results were processed
01:21:31 if the result was invalid - let them know why
01:22:05 as a person who has submitted results and never saw my results in UF I know how that can cause someone to lose interest in contributing
01:22:52 +1
01:23:13 2. I would also want to have a group of people that could look at invalid results to ensure that how the data is being processed is accurate -- that the automated system is not marking valid results as invalid
01:23:30 this is particularly important as hardware changes...
01:23:52 failure to test a card reader when there is no such device present should not invalidate results
01:24:04 any other comments in regards to submissions?
01:24:58 #subtopic checkbox
01:25:10 that brings me to one more idea
01:25:24 this one is not mine, but one that has been discussed in the qa channel
01:25:52 there may be a need to refine checkbox a bit to allow for a 'simple' test
01:26:06 or in the case of hexr a test of specific hardware
01:26:36 the simple test would be an abbreviated test to make sure that basic things like wireless work etc.
01:27:07 I also had cases when I tested where the automated test 'failed' itself, but I knew the device was working
01:27:39 I think there should be some sort of mechanism built in to checkbox for making a note of that issue, so that those problems can be addressed
01:28:03 having devices 'fail' a test when they are actually working undercuts people's faith in the process and the results.
01:28:20 any thoughts in that area?
01:28:32 o/
01:28:43 yes, roadmr
01:29:02 hello! first, if an automated test fails when the device is working, that means a bug that we'd like to get fixed
01:29:21 I understand this adds a lot of friction to things, so maybe thinking of ways to make this more effortless may be worthwhile
01:29:31 * cprofitt nods
01:29:42 second, the automated tests are in general more reliable, and they are also quite valuable in high-volume environments
01:30:07 if we assume that people can be trusted, we can potentially use manual test cases instead of the automated ones that are most unreliable
01:30:30 even so, it would be good to collect some information that can be automatically analyzed, maybe server-side
01:30:46 this way we can potentially re-score a submission given the raw data, even after-the-fact
01:30:51 admittedly there is some trust to an extent already in the testing (you can hear a sound, etc)
01:30:59 I too would prefer to get the automated tests working...
I think human interaction always introduces some potential for error
01:31:08 this would however introduce a "we're collecting stuff and sending it up to canonical" factor that some people don't like
01:31:19 I do like rescoring a submission after the fact
01:31:33 so this is an area for improvement but we always have the option of doing this manually
01:31:47 roadmr: in this case I think the 'collecting data' part is voluntary
01:31:54 at least with checkbox
01:32:15 cprofitt: yes, well in order to determine what's going on, some extensive logs would need to be collected
01:32:22 * cprofitt nods
01:32:30 cprofitt: but keeping the user in control would be the best
01:32:48 pleia2: hehe, we sought to remove the human factor from that by adding an automated audio_test :) so the computer listens to itself
01:32:56 could that be an optional part of checkbox -- that a user would opt in to collecting additional data?
01:32:57 anyway, those are my ideas on this
01:33:32 roadmr: does it work? I guess I haven't run it in a bit :)
01:33:35 I appreciate the feedback and information roadmr
01:33:35 cprofitt: sure, with the current checkbox architecture it feels a bit complex to do, but we can always change that, we should bend the tool to the needs, not the other way around
01:33:51 pleia2: as long as you have a microphone and speakers, it works and it's very reliable
01:33:59 nice
01:34:15 roadmr: +1 we want to make it easy for people to collect data for this type of use -- improving user experience
01:34:58 roadmr: do you have any insight as to why results are not getting included in to Ubuntu Friendly, or is that outside of your scope?
01:35:42 cprofitt: I have a pretty good idea why, yes
01:35:52 could you go a bit in to that for us?
01:36:13 cprofitt: two components are involved in this, the first is checkbox, which runs the tests, assembles a report and pushes it to launchpad
01:36:45 the second is the UF website/application, which pulls the reports (submissions) from launchpad and adds the results to the friendly database (what you see in the website) - it does the scoring, aggregation and such
01:37:14 for a submission to appear at all, it needs to contain (and pass) all tests identified as "core"
01:37:30 the basics for a computer to be usable; if any core tests fail, the submission will not appear in UF
01:37:52 if it passes the "core" tests, it gets 3 stars, and any additional tests it passes increase its rating, up to 5 stars
01:38:06 a 5-star rating means that all the components we test work perfectly
01:38:14 so why would a submission not appear?
01:38:39 some of the core tests are automated, and checkbox is not very good at 1) telling you one of them failed, and 2) letting you rerun/amend the results
01:39:19 checkbox itself has no idea of core tests, so it can't tell you "hey, your card reader didn't work and that means you won't make it into UF - please rerun this before submitting"
01:39:42 is there currently any way for a person who has submitted the test to see their submissions on launchpad?
01:40:16 that's one reason; the second is that since the UF website needs some love, there may have been a bit of "drift" in the tests checkbox submits and performs, the UF site has not been updated as checkbox whitelists (sets of tests to run) change
01:40:23 those would be the two main reasons
01:40:36 cprofitt: there's a cryptic url that you can access to see your submissions, let me find it
01:40:53 so, it sounds like the key is giving the user some feedback as to whether their device failed a core test...
and also allowing them to resubmit
01:40:53 cprofitt: https://launchpad.net/~/+hwdb-submissions
01:41:37 cprofitt: yep, again, something that's a bit difficult to do with the current checkbox architecture, which is very linear :( think of the difference between linear and non-linear video editing
01:41:47 in a sense, with checkbox we can only ffwd, rewind, and look at the results at the very end
01:41:58 * cprofitt nods
01:42:00 with no/little chance of going back to a specific test.
01:42:08 exactly...
01:42:28 it is also my understanding there is no ability to 'pause' a test and come back and pick up where you left off
01:42:49 there is, but it's very awkward heheh
01:42:51 roadmr: is there currently any team that reviews the failed submissions?
01:43:07 what you do basically is murder checkbox, and when you restart it does its best to pick up where it left off
01:43:30 it gives you three choices: rerun the last test, skip to the next one (in case the last test outright crashed the system), or start anew
01:43:42 * cprofitt nods
01:43:50 the message is quite terse and the choices are somewhat ambiguous, so most people are/will be confused by this
01:44:36 cprofitt: reviewing failed submissions, not really, no. It can be done but pretty much the only person who can and knows how to do it is jedimike, by looking at how the UF results importer processed a submission
01:44:48 he'd have more specifics on how to do it, but it's done on an ad-hoc basis
01:45:00 roadmr: thanks
01:45:07 np :)
01:45:14 I asked because I saw that as part of the launchpad team description
01:45:27 does anyone else have any questions for roadmr?
01:46:19 I would like to set an initial priority for the team moving forward
01:46:33 I think that we have two high value items
01:47:12 1. Improving the UF website process so that there is some feedback to the person who submitted the system as to the fact that the results were processed but failed
01:47:35 2.
Documenting the current process - so others can understand it as we start to look at improving it
01:47:42 are there any other items people see?
01:48:47 roadmr: would you be willing to produce a flow chart of what you described that we could then use on the wiki?
01:48:57 just a rough outline of how the process currently flows?
01:49:08 cprofitt: sure
01:49:22 cprofitt: there's some documentation (an outline plus more detailed links) here: https://friendly.ubuntu.com/what-is-ubuntu-friendly/
01:49:23 #action roadmr will produce a flow chart of the current process
01:49:23 * meetingology roadmr will produce a flow chart of the current process
01:49:53 #vote agree that improving feedback on submissions is important in moving forward
01:49:53 Please vote on: agree that improving feedback on submissions is important in moving forward
01:49:53 Public votes can be registered by saying +1, +0 or -1 in channel, (private votes don't work yet, but when they do it will be by messaging the channel followed by +1/-1/+0 to me)
01:49:57 +1
01:49:57 +1 received from cprofitt
01:50:12 +1 definitely heh
01:50:12 +1 definitely heh received from roadmr
01:50:33 +1
01:50:33 +1 received from chilicuil
01:50:35 +1
01:50:35 +1 received from SergioMeneses
01:50:44 pleia2: ?
01:50:48 phillw: ?
01:50:56 +1
01:50:56 +1 received from phillw
01:51:05 sorry, wasn't sure if I was to vote :)
01:51:23 #endvote
01:51:23 Voting ended on: agree that improving feedback on submissions is important in moving forward
01:51:23 Votes for:5 Votes against:0 Abstentions:0
01:51:23 Motion carried
01:52:01 I think for now we should move towards that; after we get that feedback improved we need to look at making some improvements to checkbox as roadmr mentioned
01:52:16 any other topics, questions or thoughts?
01:52:37 o/
01:52:42 yes, roadmr
01:53:00 when is our next meeting?
01:53:10 hehe, one thing that we *can* potentially do with current checkbox is make it submit directly to a (possibly updated) UF application, so feedback is quicker
01:53:33 this requires writing a "plugin" and an accompanying web application, so it would need some coordination
01:53:33 +1 roadmr that should be explored
01:53:51 we still have time to make that happen before Ubuntu Feature Freeze, but we'd have to start working soon
01:54:13 * cprofitt nods
01:54:24 ideally we should ask jedimike if there's a way to submit directly to the UF application, because we may not have a lot of manpower to come up with a new one
01:54:46 SergioMeneses: I would like us to meet again in two weeks -- I think to be fair to some of the UTC
01:54:49 that may be a quick way to reduce the decoupling between submission and actual result processing
01:55:22 folks we should have the meeting earlier; though that would require a different person to chair the meeting
01:55:36 perfect
01:55:41 roadmr: I know jedimike has spoken to me about that so I will try to touch base with him
01:55:56 are any of you in the UTC 0/+1 TZ?
01:57:01 I'm on UTC
01:57:01 alright... I will send a message to the list and see if I can get a person in that TZ to chair a meeting.
01:57:02 cprofitt: I'm EST (UTC-5)
01:57:19 phillw: would you be willing to chair a meeting on the 19th?
01:57:27 roadmr: I am in your TZ as well
01:57:33 I can if no one steps forward :)
01:57:49 I'm UTC-5
01:57:55 phillw: ok, I will send the request to the list and you will be the fallback
01:58:36 #action next meeting February 19th UTC 17:00 - cprofitt will send a message to the list to get a chair
01:58:36 * meetingology next meeting February 19th UTC 17:00 - cprofitt will send a message to the list to get a chair
01:58:36 what time on the 19th are you thinking of?
01:59:05 yeah, 1700 UTC would be okay if no one steps forward.
01:59:09 17 UTC
01:59:12 * cprofitt nods
01:59:20 yes, SergioMeneses so noon our time
01:59:52 cprofitt, it's no problem for me
01:59:58 thanks SergioMeneses phillw pleia2 chilicuil roadmr for attending. I think this has been very productive for an initial start
02:00:04 #endmeeting
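
Editor's sketch: the rating rule roadmr described at 01:37:14-01:37:52 (a submission appears only if every "core" test passes; passing core alone earns 3 stars, and additional passing tests raise the rating up to 5) can be approximated as follows. This is a hypothetical illustration; the function and data shapes are assumptions, not the real UF importer's code, which interpolates between 3 and 5 stars in a way the log does not specify.

```python
# Hypothetical sketch of the Ubuntu Friendly rating rule from the meeting log.
# Assumption: non-core passes scale the rating linearly from 3 to 5 stars;
# the actual UF importer's weighting is not described in the log.

def rate_submission(results, core_tests):
    """results: dict mapping test name -> bool (True means passed).
    core_tests: set of test names required for a submission to appear.
    Returns a star rating (3.0-5.0), or None if the submission is excluded."""
    # Any failed or missing core test excludes the submission entirely.
    if not all(results.get(test) for test in core_tests):
        return None
    extra = [name for name in results if name not in core_tests]
    if not extra:
        return 3.0  # core-only submission: baseline 3 stars
    passed = sum(1 for name in extra if results[name])
    # Scale the remaining 2 stars by the fraction of extra tests passed.
    return 3.0 + 2.0 * passed / len(extra)
```

This also mirrors why submissions silently vanish: a single failed core test (e.g. an automated card-reader test on a machine with no card reader) returns None, and checkbox gives the submitter no indication of it.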