Love.Law.Robots. by Ang Hou Fu

docassemble

I share my experience and process of generating daily newsletters from Singapore Law Watch using ChatGPT, serverless functions and web technologies. #Featured

Feature Photo by Food Photographer / Unsplash

Introduction

It's easy to be impressed with Large Language Models like #ChatGPT and GPT-4. They're really helpful and fun to use. There are a ton of uses if you're creative. For weeks I was mesmerised by the possibilities — this is the prompt I would use. Using this and that, I can do something new.

When I got serious, I found a particular itch to scratch. I wanted to create a product. Something created by AI that people can enjoy. It has to work, be produced quickly and have room to grow if it is promising. It should also be unique and not reproduce anything currently provided by LawNet or others.🤔

There was one problem I felt I could solve easily. I've religiously followed Singapore Law Watch for most of my working life. It's generally useful as a daily update on Singapore's most relevant legal news. There's a lot of material to read daily, so you have to scan the headlines and the descriptions. On busier days, I left more things out.

So… what can ChatGPT do? It can read the news articles for me, summarise them, and then make a summary report out of all of them. This is different from scanning headlines because, primarily, the AI has read the whole article. Hopefully, it can provide better value than a list of articles.


If you’re interested in technology, you will confront this question at some point: should I learn to code?

For many people, including lawyers, coding is something you can ignore without serious consequences. I don’t understand how my microwave oven works, but that doesn’t stop me from using it. Attending a briefing and asking the product team good questions is probably enough for most lawyers to do their work.

The truth, though, is that life is so much more. In the foreword of the book “Law and Technology in Singapore”, Chief Justice Sundaresh Menon remarked that technology today “permeates, interfaces with, and underpins all aspects of the legal system, and indeed, of society”.

I felt that myself during the pandemic when I had to rely on my familiarity with technology to get work done. Coincidentally, I also implemented my docassemble project at work, using technology to generate contracts 24/7. I needed all my coding skills to whip up the program and provide the cloud infrastructure to run it without supervision. It’s fast, easy to use and avoids many problems associated with do-it-yourself templates. I got my promotion and respect at work.

If you’re convinced that you need to code, the rest of this post contains tips on juggling coding and lawyering. They are based on my personal experiences, so I am also interested in how you’ve done it and any questions you might have.

Tip 1: Have realistic ambitions

Photo by Lucas Clara / Unsplash

Lawyering takes time and experience to master. Passing the bar is the first baby step to a lifetime of learning. PQE is the currency of a lawyer in the job market.

Well, guess what? Coding is very similar too!

There are many options and possibilities — programming languages, tools and methods. Unlike a law school degree, there are free options you can check out, which would give you a good foundation. (Learnpython for Python and W3Schools for the web come to mind.) I got my first break with Udemy, and if you are a Singaporean, you can make use of SkillsFuture Credits to make your online learning free.

Just as becoming a good lawyer is no mean feat, becoming a good coder requires a substantial investment of time and learning. If you are already a lawyer, you may not have enough time left in your life to become as good a coder.

I believe the answer is a strong no. Lawyers need to know what is possible, not how to do it. Lawyers will never be as good as real, full-time coders. Why give them another thing to thing the are “special” at. Lawyers need to learn to collaborate with those do code. https://t.co/3EsPbnikzK

— Patrick Lamb (@ElevateLamb) September 9, 2022

So, this is my suggestion: don’t aim to conquer programming languages or single-handedly produce a full-blown application to rival a LegalTech company you’ve always admired. Focus instead on developing proofs of concept or pushing the tools you are already familiar with as far as you can go. In addition, look at no-code or low-code alternatives to get easy wins.

By limiting the scope of your ambitions, you’d be able to focus on learning the things you need to produce quick and impactful results. The reinforcement from such quick wins would improve your confidence in your coding skills and abilities.

There might be a day when your project has the makings of a killer app. When that time comes, I am sure you will decide that going solo is not only impossible but also a bad idea. Apps today are pretty complex, so I honestly think it’s unrealistic to rely on yourself alone to build them.

Tip 2: Follow what interests you

Photo by Sandie Clarke / Unsplash

It’s related to tip 1 — you’d probably be able to learn faster and more effectively if you are doing things related to what you are already doing. For lawyers, this means doing your job, but with code. A great example of this is docassemble, which is an open-source system for guided interviews and document assembly.

When you work with docassemble, you try to mimic what you do in practice: for example, crafting questions to get the information you need from a client to file a document or create a contract. However, instead of interviewing a person directly, you do this in code.
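To give a flavour of what that looks like, here is a minimal docassemble question block. It is a hypothetical sketch, not taken from any real interview, and the variable names are made up:

```yaml
---
question: |
  Tell us about your client.
fields:
  - Full name: client_name
    datatype: text
  - Date of contract: contract_date
    datatype: date
---
```

Each field maps a prompt to a variable, much like an interview question maps to an entry in your attendance notes.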

In the course of my travails looking for projects that interest me, I found the following:

  • Rules as Code: I found Blawx to be the most user-friendly way to get your hands dirty on the idea that legislation, codes and regulations can be code.
  • Docassemble: I mentioned this earlier in this tip.
  • Natural Language Processing: Using Artificial Intelligence to process text will lead you to many of the most exciting fields these days: summarisation, search and question and answer. Many of these solutions are fascinating when used for legal text.

I wouldn’t suggest that law is the only subject that lawyers find interesting. I have also spent time trying to create an e-commerce website for my wife and getting a computer to play Monopoly Junior 5 million times a day.

Such “fun” projects might not have much relevance to your professional life, but I learned new things which could help me in the future. E-commerce websites are the life of the internet today, and building one let me experiment with the latest cloud technologies. Running 5 million games in a day made me think harder about code performance and how to achieve more with a single computer.

Tip 3: Develop in the open

Photo by Barry Weatherall / Unsplash

Not many people think about this, so please hang around.

When I was a kid, I already dreamed of playing around with code and computers. In secondary school, a bunch of us would race to make the best apps in class (for some strange reason, they tended to revolve around computer games). I learned a lot about coding then.

As I grew up and my focus changed towards learning and building a career in law, my coding skills deteriorated rapidly. One of the obvious reasons is that I was doing something else, and working late nights in a law firm or law school is not conducive to developing hobbies.

I also found community essential for maintaining coding skills and interest. The most straightforward reason is that a community will help you when you encounter difficulties in your nascent journey as a coder. On the other hand, listening to and helping other people with their coding problems also improves your own knowledge and skills.

The best thing about the internet is that you can find someone with similar interests as you — lawyers who code. On days when I feel exhausted with my day job, it’s good to know that someone out there is interested in the same things I am interested in, even if they live in a different world. So it would be best if you found your tribe; the only way to do that is to develop in the open.

  • Get your own GitHub account, write some code and publish it. Here's mine!
  • Interact on social media with people with the same interests as you. You’re more likely to learn what’s hot and exciting from them. I found Twitter to be the most lively place. Here's mine!
  • Join mailing lists, newsletters and meetups.

I find that it’s vital to be open since lawyers who code are rare, and you have to make a special effort to find them. They are like unicorns🦄!

Conclusion

So, do lawyers need to code? To me, you need a lot of drive to learn to code and build a career in law in the meantime. For those set on setting themselves apart this way, I hope the tips above can point the way. What other projects or opportunities do you see that can help lawyers who would like to code?

#Lawyers #Programming #LegalTech #blog #docassemble #Ideas #Law #OpenSource #RulesAsCode

Author Portrait Love.Law.Robots. – A blog by Ang Hou Fu


Introduction

You must have worked hard to get here! We are almost at the end now.

Our journey has taken us from crafting the user experience, to figuring out what should happen in the background, to interacting with an external service.

In this part, we ask docassemble to provide a file for the user to download.

Provisioning a File

When we left part 2, this was our result screen.

    event: final_screen
    question: |
      Download your sound file here.
    subquestion: |
      The audio on your text has been generated.
      
      You can preview it here too.
      
      <audio controls>
       <source type="audio/mpeg">
       Your browser does not support playing audio.
      </audio>
      
      Press `Back` above if you want to modify the settings and generate a new file,
      or click `Restart` below to begin a new request.
    buttons:
      - Exit: exit
      - Restart: restart

There are two places where you need an audio file.

  1. In the “question”, the word “here” provides a link to the file for download.
  2. The audio preview widget (the thing which you click to play) also needs a link to the file to function.

Luckily for us, docassemble provides a straightforward way to deal with files on the server. Simply stated, create a variable of type DAFile to hold a reference to the file, save the data to the file and then use it for those links in the results screen.

Let’s get started. Add this block to your main.yml file.

    ---
    objects:
      - generated: DAFile
    ---

This block creates an object called “generated”, which is a DAFile. Now your interview code can use “generated”.

Add the new line to the mandatory block we created in Part 3.

    mandatory: True
    code: |
      # The next line is new
      generated.initialize(filename="output.mp3")
      tts_task
      if tts_task.ready():
        final_screen
      else:
        waiting_screen

This code initialises “generated” by getting the docassemble server to provision it. If you use “generated” before initialising it, docassemble raises an error. 👻 (You will only see this error when you actually use the DAFile to create a file.)

Now your background action needs access to “generated”. Pass it in a keyword parameter in the background action you created in Part 3.

    code: |
      tts_task = background_action(
        'bg_task', 
        text_to_synthesize=text_to_synthesize, 
        voice=voice, 
        speaking_rate=speaking_rate, 
        pitch=pitch,
    # This is the new keyword parameter
        file=generated  
      )

Now that your background action has the file, use it to save the audio content. Add the new lines below to bg_task that you also created in Part 3.

    event: bg_task
    code: |
      audio = get_text_to_speech(
        action_argument('text_to_synthesize'),
        action_argument('voice'),
        action_argument('speaking_rate'),
        action_argument('pitch'),
      )
      # The next three lines are new
      file_output = action_argument('file')
      file_output.write(audio, binary=True)
      file_output.commit()
      background_response()

We assign the file to a new variable in the background task and then use it to write the audio (make sure it is written in binary format, as MP3 data is not text). After that, commit the file to save it to the server or your external storage, depending on your configuration. (The methods above belong to DAFile. You can read more about what they do, and about other methods, here.)
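If the write-then-commit dance seems mysterious, the first half is ordinary Python file handling. Here is a stand-alone sketch of the same operation without docassemble (the payload bytes are made up, and DAFile.commit has no plain-Python equivalent because it copies the file into the server’s storage):

```python
import os
import tempfile

# A made-up binary payload standing in for the MP3 bytes from Google.
audio = b"\xff\xfb\x90\x00fake-mp3-bytes"

# Writing in binary mode ("wb") is what DAFile.write(audio, binary=True)
# does behind the scenes; text mode would mangle the MP3 data.
path = os.path.join(tempfile.mkdtemp(), "output.mp3")
with open(path, "wb") as f:
    f.write(audio)

# Read the file back to confirm the payload survived the round trip.
with open(path, "rb") as f:
    assert f.read() == audio
```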

Now that the file is ready, we can plunk it into our results screen. We provide URLs here so that your user can download the file from the browser; file paths would not work because they refer to the server’s file system, which the user cannot reach. Modify the lines in the results screen block.

    event: final_screen
    question: |
      Download your sound file **[here](${generated.url_for(attachment=True)}).**
    subquestion: |
      The audio on your text has been generated.
      
      You can preview it here too.
      
      <audio controls>
       <source src="${generated.url_for()}" type="audio/mpeg">
       Your browser does not support playing audio.
      </audio>
      
      Press `Back` above if you want to modify the settings and generate a new file,
      or click `Restart` below to begin a new request.
    buttons:
      - Exit: exit
      - Restart: restart

To get the URL for a DAFile, use the url_for method. This gives you an address that can be used for downloading the file or embedding it in the web page.

Conclusion

Congratulations! You are now ready to run the interview. Give it a go and see if you can download the audio of a text you would like spoken. (If you are still at the Playground, you can click “Save and Run” to ensure your work is safe and test it around a bit.)

This Text to Speech docassemble interview is relatively straightforward to me. Nevertheless, its simplicity also showcases several functions which you may want to be familiar with. Hopefully, you now have an idea of dealing with external services. If you manage to hook up something interesting, please share it with the community!

Bonus: Trapping errors and alerting the users

The code so far is enough to provide users with hours of fun (hopefully not at your expense). However, there are edge cases which you should consider if you plan to make your interview more widely available.

Firstly, while it's pretty clear in this tutorial that you should have updated your Configuration so that this interview can find your service account, this doesn't always happen for others. Admins might have overlooked it.

Add this code as the first mandatory code block of main.yml (before the one we wrote in Part 3):

    mandatory: True
    code: |
      if get_config('google') is None or 'tts service account' not in get_config('google'):   
        if get_config('error notification email') is not None:
          send_email(to=get_config('error notification email'), 
            subject='docassemble-Google TTS raised an error', 
            body='You need to set service account credentials in your google configuration.' )
        else:
          log('docassemble-Google TTS raised an error -- You need to set service account credentials in your google configuration.')
          
        message('Error: No service account for Google TTS', 'Please contact your administrator.')

Take note that if you add more than one mandatory block, they are called in the order they appear in the interview file. So if you put this check after the mandatory code block defining our processes, the interview will run those processes before it ever performs the check.

This code block does a few things. First, it checks whether the Configuration has a “google” directive containing a “tts service account” sub-directive. If the service account information is missing, it checks whether the admin has set an error notification email in the Configuration. If so, the server emails the admin to report the issue; if not, it writes the error to docassemble.log, one of the logs on the server. (If the admin checks neither email nor logs, I am not sure how else we can help.)
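Stripped of the docassemble calls, the block boils down to a chain of dictionary checks. This plain-Python sketch mirrors that logic (the check_tts_config helper and its return values are hypothetical, purely for illustration):

```python
def check_tts_config(config):
    """Decide what to do about missing Google TTS credentials.

    Mirrors the mandatory block: returns 'ok' if the configuration is
    complete, otherwise says how the admin should be alerted.
    """
    google = config.get('google')
    if google is None or 'tts service account' not in google:
        if config.get('error notification email') is not None:
            return 'email admin'  # send_email(...) in the real block
        return 'log error'        # log(...) in the real block
    return 'ok'

# No google directive and no notification email: fall back to the log.
assert check_tts_config({}) == 'log error'
# A notification email is set, so the admin gets an email instead.
assert check_tts_config({'error notification email': 'admin@example.com'}) == 'email admin'
# Credentials present: the interview proceeds.
assert check_tts_config({'google': {'tts service account': '{...}'}}) == 'ok'
```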

This mandatory check before starting the interview catches the most obvious error: no configuration at all. However, you can still pass the check by putting nonsense in the “tts service account” directive, which Google will refuse to process. Other errors, such as Google being offline, are also possible.

Narrowing down every possible error would be very challenging. Instead, we will make one crucial check: did the code actually save a file at the end of the process? Even if we can’t tell the user exactly what went wrong, at least we spare them the confusion of finding out that there is no file to download.

First, let's write the code that makes the check. Add this new code block.

    event: file_check
    code: |
      path = generated.path()
      if not os.path.exists(path):
        if get_config('error notification email') is not None:
          send_email(to=get_config('error notification email'), 
            subject='docassemble-Google TTS raised an error', 
            body='No file was saved in this interview.' )
        else:
          log('docassemble-Google TTS raised an error -- No audio file was saved in this interview.')
        message('Error: No audio file was saved', 'We are not sure why. Please try again. If the problem persists, contact your administrator.')

This code checks whether the audio file (generated, a DAFile) is an actual file or an apparition. If it doesn't exist, the admin receives a message. The user is also alerted to the failure.

We would need to add a need directive to our results screen so that the check is made before the final screen to download the file is shown.

    event: final_screen 
    need:  # Add this line
      - file_check  # Add this line
    question: |
      Download your sound file **[here](${generated.url_for(attachment=True)}).**
    subquestion: |
      The audio on your text has been generated.
      
      You can preview it here too.
      
      <audio controls>
       <source src="${generated.url_for()}" type="audio/mpeg">
       Your browser does not support playing audio.
      </audio>
      
      Press `Back` above if you want to modify the settings and generate a new file,
      or click `Restart` below to begin a new request.
    buttons:
      - Exit: exit
      - Restart: restart

We also need to import the os.path module from the Python standard library to make this check on the file system. Add this new block near the top of our main.yml file.

    imports:
      - os.path

There you have it! The interview checks before you start whether there's a service account. It also checks before showing you the final screen whether your request succeeded and if an audio file is ready to download.

👈🏻 Go to the previous part.

☝🏻Return to the overview of this tutorial.

#tutorial #Python #Programming #docassemble #Google #TTS #LegalTech


Introduction

So far, all our work is on our docassemble install, which has been quite a breeze. Now we come to the most critical part of this tutorial: working with an external service, Google Text to Speech. Different considerations come into play when working with others.

In this part, we will install a client library from Google. We will then configure the setup to interact with Google’s servers and write this code in a separate module, google_tts.py. At the end of this tutorial, your background action will be able to call the function get_text_to_speech and get the audio file from Google.

1. A quick word about APIs

The term “API” can be used loosely nowadays. Some people use it to describe how you can use a program or a software library. In this tutorial, an API refers to a connection between computer programs. Instead of a website or a desktop program, we’re using Python and docassemble to interact with Google Text to Speech. In some cases, like Google Text to Speech, an API is the only way to work with the program.

The two most common ways to work with an API over the internet are (1) using a client library or (2) calling a RESTful API directly. There are pros and cons to each. In this tutorial, we are going with the client library that Google provides, which lets us work with the API in a programming language we are familiar with: Python. RESTful APIs can have more support and features than a client library (especially if your programming language is not popular), but you’d need to know your way around requests and similar packages to use them in Python.
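To make the comparison concrete, here is roughly what the RESTful route involves: you assemble the JSON request body yourself and POST it to Google’s endpoint. The body structure below follows Google’s published v1 REST reference, but treat it as an illustrative sketch; the actual HTTP call (commented out) would need the requests package and an access token:

```python
import json

def build_tts_request(text, voice_name, speaking_rate, pitch):
    """Assemble the JSON body for the text:synthesize REST endpoint."""
    return {
        "input": {"text": text},
        "voice": {"languageCode": "en-US", "name": voice_name},
        "audioConfig": {
            "audioEncoding": "MP3",
            "speakingRate": speaking_rate,
            "pitch": pitch,
        },
    }

body = build_tts_request("Hello world", "en-US-Wavenet-A", 1.0, 0.0)
print(json.dumps(body, indent=2))

# The REST version means handling URLs, headers and tokens yourself:
# requests.post("https://texttospeech.googleapis.com/v1/text:synthesize",
#               headers={"Authorization": f"Bearer {token}"}, json=body)
# The client library hides all of that behind a method call.
```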

2. Install the Client Library in your docassemble

Before we can start using the client library, we need to ensure that it’s there in our docassemble install. Programming in Python can be very challenging because of issues like this:

source: https://imgs.xkcd.com/comics/python_environment.png

Luckily, you will not face this problem if you’re using docker for your docassemble install (which most people do). Do this instead:

  1. Leave the Playground and go to another page called “Package Management”. (If you don’t see this page, you need to be either an admin or a developer)
  2. Under Install or update a package, specify google-cloud-texttospeech as the package to find on PyPI
  3. Click Update, and wait for the screen to show that the install is OK. (This takes time as there are quite a few dependencies to install)
  4. Verify that the google-cloud-texttospeech package has been installed by checking out the list of packages installed.

3. Set up a Text To Speech service account in docassemble

At this point, you should have obtained your Google Cloud Platform service account so that you can access the Text to Speech API. If you haven’t done so, please follow the instructions here. Take note that we will need your key information in JSON format. You don’t need to “Set your authentication environment variable” for this tutorial.

If you have not realised it yet, the key information in JSON format is a secret. While Google’s Text to Speech platform has a generous free tier, the service is not free. So, expect to pay Google if somebody with access to your account tries to read The Lord of the Rings trilogy. In line with best practices, secrets should be kept in a private and secure place, which is not your code repository. Please don’t include your service account details in your playground files!

Luckily, you can store this information in docassemble’s Configuration, which someone can’t access without an admin role and is not generally publicly available. Let’s do that by creating a directive google with a sub-directive of tts service account. Go to your configuration page and add these directives. Then fill out the information in JSON format you received from Google when you set up the service account.
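The directives in your Configuration should end up looking roughly like this (the JSON fields below are placeholders; paste the entire key file you downloaded from Google as the value of tts service account):

```yaml
google:
  tts service account: |
    {
      "type": "service_account",
      "project_id": "(your project id)",
      "private_key_id": "(your private key id)",
      "private_key": "-----BEGIN PRIVATE KEY-----\n(key material)\n-----END PRIVATE KEY-----\n",
      "client_email": "(your service account email)"
    }
```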

In this example, the lines you will add to the Configuration should look like lines 118 to 131.

4. Putting it all together in the google_tts.py module

Now that our environment is set up, it’s time to create our get_text_to_speech function.

Head back to the Playground, look for the dropdown titled “Folders”, click it, then select “Modules”.

Look for the editor and rename the file as google_tts.py. This is where you will enter the code to interact with Google Text to Speech. If you recall in part 3, we had left out a function named get_text_to_speech. We were also supposed to feed it with the answers we collected from the interviews we wrote in part 2. Let’s enter the signature of the function now.

    def get_text_to_speech(text_to_synthesize, voice, speaking_rate, pitch):
        # Enter more code here
        return

Since our task is to convert text to speech, we can follow the code in the example provided by Google.

A. Create the Google Text-to-Speech client

Using the Python client library, we can create a client to interact with Google’s service.

We need credentials to use the client to access the service. This is the secret you set up in step 3 above. It’s in docassemble’s configuration, under the directive google with a sub-directive of tts service account. Use docassemble’s get_config to look into your configuration and get the secret tts service account as a JSON.

With the secret to the service account, you can pass it to the class factory function and let it do the work.

    def get_text_to_speech(text_to_synthesize, voice, speaking_rate, pitch):
        from google.cloud import texttospeech
        import json
        from docassemble.base.util import get_config
    
        credential_info = json.loads(get_config('google').get('tts service account'), strict=False)
    
        client = texttospeech.TextToSpeechClient.from_service_account_info(credential_info)

Now that the client is ready with your service account details, let's get some audio.

B. Specify some options and submit the request

The primary function to request Google to turn text into speech is synthesize_speech. The function needs a bunch of stuff — the text to convert, a set of voice options, and options for your audio file. Let’s create some with the answers to the questions in part 2. Add these lines of code to your function.

The text to synthesise:

    input_text = texttospeech.SynthesisInput(text=text_to_synthesize)

The voice options:

    voice = texttospeech.VoiceSelectionParams(
            language_code="en-US",
            name=voice,
        )

The audio options:

    audio_config = texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3,
            speaking_rate=speaking_rate,
            pitch=pitch,
        )

Note that we did not allow the user to customise every option. You can go through the documentation yourself to figure out which options you need and which aren’t worth troubling the user with. If you think the user should have more options, you’re free to write your own questions and modify the code.

Finally, submit the request and return the audio.

    response = client.synthesize_speech(
            request={"input": input_text, "voice": voice, "audio_config": audio_config}
        )
    
    return response.audio_content

Voila! The client library calls Google using your credentials and returns your personalised result.

5. Let’s go back to our interview

Now that you have written your function, it’s time to let our interview know where to find it.

Go back to the playground, and add this new block in your main.yml file.

    ---
    modules:
      - .google_tts
    ---

This block tells the interview that some of our functions (specifically, the get_text_to_speech function) are found in the google_tts module.

Conclusion

At the end of this part, you have written your google_tts.py module and included it in your main.yml. You should also know how to install a Python package in docassemble and edit your configuration file.

Well, that leaves us with only one more thing to do. We’ve got our audio content; now we just need to get it to the user. How do we do that? What’s that? DAFile? Find out in the next part.

👉🏻 Go to the final part.

👈🏻 Go back to the previous part.

☝🏻 Check out the overview of this tutorial.

#tutorial #docassemble #LegalTech #Google #TTS #Programming #Python


Introduction

In Part 2, we managed to write a few questions and a result screen. Even with all that eye candy, you will notice that you can’t run this interview. There is no mandatory block, so docassemble does not know what it needs to do to run the interview. In this part, I will talk about the code block required to run the interview, forming the foundation of our next steps.

1. The backbone of this interview

In ordinary docassemble interviews, your endpoint is a template or a form. For this interview, the endpoint is a sound file. So let’s start with this code block. It tells the reader how the interview will run. Since it is a pretty important block, we should put it near the top of the interview file, maybe right under the meta block. (In this tutorial, the order of blocks does not affect how the interview will run. Or at least until you get to the bonus section.)

    mandatory: True
    code: |
      tts_task
      final_screen

If you recall, the user downloads the audio file in the final screen.

So this mandatory code block asks docassemble to “do” the tts_task and then show the final screen. Now we need to define tts_task, and your interview will be ready to run.

So what should tts_task be? The most straightforward answer is that it is the result of the API call to create the sound file. You can write a code block that gets and returns a sound file and assigns it to tts_task.

Well, don’t write that code block yet.

2. Introducing the background action

If you call an API on the other side of the internet, you should know that many things can happen along the way. For example, it takes time for your request to reach Google, then for Google to process it using its fancy robots, send the result back to your server, and then for your server to send it back to the user. In my experience, Google’s and docassemble’s latency is quite good, but it is still noticeable with large requests.

When an interview runs well, the user shouldn’t notice any lag. If the user sees the interview stuck on the same page for too long, they might worry that it is broken, when the truth is that we are simply waiting for the file to come back.

To improve user experience, you should have a waiting screen where you tell the user to hold his horses. While this happens, the interview should work in the background. In this manner, your user is assured everything is well while your interview focuses on getting the file back from Google.
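The pattern docassemble uses here (start the work, keep a handle to it, and poll for readiness) is the same one Python’s own concurrent.futures offers. This stand-alone sketch shows the idea without any docassemble machinery; slow_task is a made-up stand-in for the round trip to Google:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_task(text):
    """Stand-in for the round trip to Google's servers."""
    time.sleep(0.2)
    return f"audio for: {text}"

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(slow_task, "hello")  # like background_action(...)

    # The main thread stays free to show a waiting screen in the meantime.
    while not future.done():                      # like tts_task.ready()
        time.sleep(0.05)                          # like the reload: True refresh

    result = future.result()

print(result)  # audio for: hello
```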

docassemble already provides a mechanism for the docassemble program to carry out background actions. It’s aptly called background_action().

Check out a sample background action by reading the first example block (”Return a value”) under the Background processes category. Modify our mandatory code block to follow the one in the example. It should look like this.

    mandatory: True
    code: |
      tts_task
      if tts_task.ready():
        final_screen
      else:
        waiting_screen

So now we tell docassemble to do the tts_task, our background task. Once it is ready, show the final screen. If the task is not ready, show the waiting screen.

3. Define the background action

Now that we have defined the interview flow, it’s time to do the background action. In the spirit of docassemble, we do this by defining tts_task.

The next code block defines the task. Adapt this example into our interview file as follows.

    code: |
      tts_task = background_action(
        'bg_task', 
        text_to_synthesize=text_to_synthesize, 
        voice=voice, 
        speaking_rate=speaking_rate, 
        pitch=pitch,
      )

So we have defined tts_task as a background action. The background action function has two kinds of arguments.

The first positional argument (“bg_task”) is the name of the code block that the background action should execute in the background.

The other keyword arguments are the information you need to pass to the background action, such as text_to_synthesize, voice and so on. The answers you collected earlier in the interview are passed along here. Referring to these variables here also ensures, indirectly, that docassemble will seek their answers before performing this code block.

So why do you need to define all the variables in this way? Don’t forget that the background action is a separate process from the rest of your interview so they don’t share the same variables. To enable these processes to share their information, you pass on the variables from the main interview process to the background action.

4. Perform the background action

We have defined the background action. Now let’s code what happens inside the background action.

The background action is defined in an event called bg_task. Now add a new code block as follows:

    event: bg_task
    code: |
      audio = get_text_to_speech(
        action_argument('text_to_synthesize'),
        action_argument('voice'),
        action_argument('speaking_rate'),
        action_argument('pitch'),
      )
      background_response()

So in this code block, we say that the audio is obtained by calling a function named get_text_to_speech. To produce an audio file, get_text_to_speech needs the answers to the questions you asked the user earlier. As a separate process, it accesses the variables you passed through the keywords of the background_action function by calling action_argument.

Once get_text_to_speech is completed, we call background_response(). Calling background_response is important for a background action as it tells docassemble that this is the endpoint for the background action. Make sure you don’t leave your background action without it.

5. Provide a waiting screen

Before we leave the example block for background processes, let’s add the question block that tells the user to wait for their audio file. Find the block which defines waiting_screen, and adapt it for your interview as follows.

    event: waiting_screen
    question: |
      Hang tight.
      Google is doing its magic.
    subquestion: |
      This screen will reload every
      few seconds until the file
      is available.
    reload: True

By adding reload: True to the block, you tell docassemble to refresh the screen every 10 seconds. This reassures the user that some “magic” is going on somewhere and that they only need to be patient.

Conclusion

In the next part of the tutorial, we will dive into get_text_to_speech. (What else, right?) We will need to call Google’s Text-to-Speech API to do this. If you found it easy to follow the code blocks in this part of the tutorial, we will kick this up a notch — the next file we will be working on ends with a “.py”.

👉🏻 Go ahead to the next part

👈🏻 Go to the previous part

👈🏻 Check out the overview of this tutorial.

#tutorial #docassemble #Programming #Python #Google #TTS

Author Portrait Love.Law.Robots. – A blog by Ang Hou Fu


Introduction

In Part 1, we talked about what we will do and the things you need to follow this tutorial. Let’s get our hands dirty now!

We are going to get the groundwork done by creating four pages. The first page gets the text to be turned into speech. The second page chooses the voice which Google will use to generate the audio. The third page edits some attributes in the production of the audio. The last page is the results page to download the spoken text.

If you are familiar with docassemble, nothing here is exciting, so you can skip this part. If you’re very new to docassemble, this is a gentle way to introduce you to getting started.

1. All projects begin like this

Log in to your docassemble program and go to the Playground. We will be doing most of the work here.

If you’re working from a clean install, your screen probably looks like this.

The default new project in docassemble's Playground.

  1. Let’s change the name of the interview file from test.yml to main.yml.
  2. Delete all the default blocks/text in the interview file. We are going to replace it with the blocks for this project.

You will have a clean main.yml file at the end.

2. I never Meta an Interview like you

I like to start my interview file with a meta block that tells someone about this interview and who made it.

It’s not easy to remember what a meta block looks like every time. You can use the example blocks in the playground to insert template blocks and modify them.

The example blocks also link to the relevant part of the documentation for easy reference. (It’s the blue “View Documentation” button.)

You should also use the example blocks as much as possible when you’re new to docassemble and writing YAML files. If you keep using those example blocks, you will not forget to separate your blocks with --- and you will minimise errors about indents and lists. After some practice (and lots of mistakes), you should be familiar with the syntax of a YAML file.

So, even though the example blocks section is found below the fold, you should not leave home without it.

You can write anything you like in the meta block as it’s a reference for other users. The field title, for example, is shown as the name of the interview on the “Available Interviews” page.

For this project, this is the meta block I used.

    metadata:
      title: |
        Google TTS Interview
      short title: |
        Have Google read your text
      description: |
        This interview produces a sound file based 
        on the text input by the user and other options.
      revision_date: 2022-05-01

3. Let’s write some questions

This is probably the most visual part of the tutorial, so enjoy it!

An easy way to think about question blocks is that they represent a page in your interview. As long as docassemble can find question blocks that answer all the variables it needs to finish the interview, you can organise and write your question block as you prefer.

So, for example, you can add this text box block, which asks the user to provide the input text. You can find the example text box block under the Fields category. (Using no label lets the question appear as though only one piece of information is being asked for.)

    question: |
      Tell me what text you would like Google to voice.
    fields:
      - no label: text_to_synthesize
        input type: area
      - note: |
          The limit is 5000 characters. (Short paragraphs should be fine)
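If you’d rather catch oversized input yourself than let the API reject it, you could add a validation along these lines (a sketch; check_length is my own helper name, and the 5000 figure is simply taken from the note above):

```python
MAX_CHARS = 5000  # the limit mentioned in the note above

def check_length(text_to_synthesize):
    # Return an error message for the user, or None if the text is fine.
    if len(text_to_synthesize) > MAX_CHARS:
        return (f"Your text is {len(text_to_synthesize)} characters long; "
                f"the limit is {MAX_CHARS}.")
    return None
```

In docassemble, a check like this could be wired up as a validate function on the field.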

You can also combine several questions on one page like this question for setting the audio options. Using the range slider example block under the Fields category, you can build this block.

    question: |
      Modify the way Google speaks your text.
    subquestion: |
      You can skip this if you don't need any modifications.
    fields:
      - Speaking pitch: pitch
        datatype: range
        min: -20.0
        max: 20.0
        default: 0
      - note: |
          20 means increase 20 semitones from the original pitch. 
          -20 means decrease 20 semitones from the original pitch. 
      - Speaking rate/speed: speaking_rate
        datatype: range
        min: 0.25
        max: 4.0
        default: 1.0
        step: 0.1
      - note: |
          1.0 is the normal native speed supported by the specific voice. 
          2.0 is twice as fast, and 0.5 is half as fast.

Notice that I have set constraints and defaults in this block based on the documentation of the various options. This helps the user avoid the pesky and demoralising error messages the external API returns when it is given unacceptable values.
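If you want a second line of defence in code, you could also clamp the values to the same ranges before they reach the API. A quick sketch (clamp is my own helper, with the ranges copied from the sliders above):

```python
def clamp(value, low, high):
    # Keep a numeric option within the range the sliders allow.
    return max(low, min(high, value))

pitch = clamp(25.0, -20.0, 20.0)        # out-of-range input becomes 20.0
speaking_rate = clamp(0.1, 0.25, 4.0)   # out-of-range input becomes 0.25
```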

A common question for a newcomer is: how should I present a question to a user? You can use a list of choices like the one below. (Build this question using the Radio buttons example block under the Multiple Choice category.)

    question: |
      Choose the voice that Google will use.
    field: voice
    default: en-US-Wavenet-A
    choices:
      - en-US-Wavenet-A
      - en-US-Wavenet-B
      - en-US-Wavenet-C
      - en-US-Wavenet-D
      - en-US-Wavenet-E
      - en-US-Wavenet-F
      - en-US-Wavenet-G
      - en-US-Wavenet-H
      - en-US-Wavenet-I
      - en-US-Wavenet-J
    under: |
      You can preview the voices [here](https://cloud.google.com/text-to-speech/docs/voices).

An interesting side question: When do I use a slider or a text entry box?

It depends on the kind of information you want. If you input numbers, the field's datatype should be a number. If you’re making a choice, a list of options works better.

Honestly, it takes some experience to figure out what works best. Think about all the online forms you have experienced and what you liked or did not like. To gain experience quickly, you can experiment by trying different fields in docassemble and asking yourself whether it gets the job done.

4. The Result Screen

Now that you have asked all your questions, it’s time to give your user the answer.

The result screen is shown when Google’s API has processed the user’s request and sent over the mp3 file containing the synthesised speech. In the result screen, you will be able to download the file. It’s also helpful to allow the user to preview the sound file so that the user can go back and modify any options.

    event: final_screen
    question: |
      Download your sound file here.
    subquestion: |
      The audio on your text has been generated.
      
      You can preview it here too.
      
      <audio controls>
       <source type="audio/mpeg">
       Your browser does not support playing audio.
      </audio>
      
      Press `Back` above if you want to modify the settings and generate a new file,
      or click `Restart` below to begin a new request.
    buttons:
      - Exit: exit
      - Restart: restart

Note: This image shows the completed file with links on how to download it. The reference question block above does not contain any links.

You will notice that I used an audio HTML tag in the subquestion to provide a media previewer. You can use HTML tags in your markdown text if docassemble does not have an option that meets your needs. However, your mileage with such HTML hacks may vary from browser to browser, so test as much as possible and avoid complex HTML.

Preview: Let’s do some actual coding

If you followed this tutorial carefully, your main.yml will have a metadata block, three question blocks and one results screen.

There are a few problems now:

  • You cannot run the interview. The main reason is that there’s no “mandatory” block, so docassemble does not know what it needs to execute to finish the job.
  • The results screen does not contain a link to download or a media to preview.
  • We haven’t even asked Google to provide us with a sound file.

In the next part, we will go through the overall logic of the interview and do some actual coding. Once you are ready, head on over there!

👉🏻 Head to the next part.

👈🏻 Go back to the previous part.

☝🏻 Check out the overview of this tutorial.

#tutorial #docassemble #TTS #Google #Python #Programming


Most people associate docassemble with assembling documents using guided interviews. That’s in the name, right? The program asks a few questions and out pops a completed form, contract or document. However, the documentation makes it quite clear that docassemble can do more:

Though the name emphasizes the document assembly feature, docassemble interviews do not need to assemble a document; they might submit an application, direct the user to other resources on the internet, store user input, interact with APIs, or simply provide the user with information.

In this post, let’s demonstrate how to use docassemble to call an API, get a response and provide it to a user. You can check out the completed code on Github (NB: the git branch I would recommend for following this post is blog. I am actively using this package, so I may add new features to the main branch that I don’t discuss here.)

Problem Statement

I do a lot of internal training on various legal and compliance topics. I think I am a pretty all right speaker, but I have my limitations — I can’t give presentations 24/7, and my performance varies from session to session. Wouldn’t it be nice if I could give a presentation at any time, engagingly and consistently?

I could record my voice, but I did not like the result.

I decided to use a text-to-speech program instead, like the one provided by Google Cloud Platform. I created a computerised version of my speech in the presentation. My audience welcomed this version as it was more engaging than a plain PowerPoint presentation. Staff whose first language was not (Singapore) English also found the voice clear and understandable.

The original code was terminal-based. I detailed my early exploits in this blog post last year. The script was great for developing something fast. However, as more of my colleagues became interested in incorporating such speech in their presentations, I needed something more user-friendly.

I already have a docassemble installation at work, so it seemed convenient to build on that. The program would have to do the following:

  • Ask the user what text it wants to transform into speech
  • Allow the user to modify some properties of the speech (speed, pitch etc.)
  • Call Google TTS API, grab the sound file and provide it to the user to download

Assumptions

To follow this tutorial, you will need the following:

  • A working docassemble install. You can start up an instance on your laptop by following these instructions.
  • A Google Cloud Platform (GCP) account with a service account enabled for Google TTS. You can follow Google’s instructions here to set one up.
  • Use the Playground provided in docassemble. If you’d like to use an IDE, you can, but I won’t be providing instructions for things like creating files to follow a docassemble package’s directory structure.
  • Some basic knowledge of docassemble. I won’t go through in detail how to write a block. If you can follow the Hello World example, you have sufficient knowledge to follow this tutorial.

A Roadmap of this Tutorial

In the next part of this post, I talk about the thinking behind creating this interview and how I got the necessary information (off the web) to make it.

In Part 2, we get the groundwork done by creating four pages. This provides us with a visual idea of what happens in this interview.

In Part 3, I talk about docassemble's background action and why we should use it for this interview. Merging the visual requirements with code gives us a clearer picture of what we need to write.

In Part 4, we work with an external API by using a client library for Python. We install this client library in our docassemble's python environment and write a python module.

In Part 5, we finish the interview by coding the end product: an audio file in the guise of a DAFile. You can run the interview and get your text transformed into speech now! I also give some ideas of what else you might want to do in the project.

Part 1: Familiarise yourself with the requirements

To write a docassemble interview, it makes sense to develop it backwards. In a simple case, you would like docassemble to fill in a form. So you would get a form, figure out its requirements, and then write questions for each requirement.

An API call is not a contract or a form, but your process is the same.

Based on Google’s quickstart, this is the method in the Python library which synthesises speech.

    from google.cloud import texttospeech

    # Instantiate a client
    client = texttospeech.TextToSpeechClient()

    # Set the text input to be synthesized
    synthesis_input = texttospeech.SynthesisInput(text="Hello, World!")

    # Build the voice request, select the language code ("en-US") and the ssml
    # voice gender ("neutral")
    voice = texttospeech.VoiceSelectionParams(
        language_code="en-US",
        ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL
    )

    # Select the type of audio file you want returned
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    )

    # Perform the text-to-speech request on the text input with the selected
    # voice parameters and audio file type
    response = client.synthesize_speech(
        input=synthesis_input, voice=voice, audio_config=audio_config
    )

From this example code, you need to provide the program with the input text (synthesis input), the voice, and audio configuration options to synthesise speech.
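Stripped of the client library, that is really just three groups of parameters. As a sketch, you could collect them in a plain dictionary whose keys mirror the keyword arguments in the example above (build_request is my own illustrative helper, and the language-code slicing assumes voice names like "en-US-Wavenet-A"):

```python
def build_request(text, voice_name, speaking_rate=1.0, pitch=0.0):
    # Mirror the three parts of the call: synthesis input,
    # voice selection and audio configuration.
    return {
        "input": {"text": text},
        "voice": {
            "language_code": voice_name[:5],  # "en-US" from "en-US-Wavenet-A"
            "name": voice_name,
        },
        "audio_config": {
            "audio_encoding": "MP3",
            "speaking_rate": speaking_rate,
            "pitch": pitch,
        },
    }

request = build_request("Hello, World!", "en-US-Wavenet-A")
```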

That looks pretty straightforward, so you might be tempted to dive into it immediately.

However, I would recommend going through the documents provided online.

  • docassemble provides some of the most helpful documentation, great for varying proficiency levels.
  • Google’s Text-to-Speech documentation is more typical of a product offered by a big tech company. Demos, use cases and guides help you get started quickly. You’re going to have to dig deep to find the one for Python, though; it receives less love than the other programming languages.

Reading the documentation, especially if you want to use a third-party service, is vital to know what’s available and how to exploit it fully. For example, going through the docs is the best way to find out what docassemble is capable of and learn about existing features — such as transforming a python list of strings into a human-readable list complete with an “and”.
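For instance, that human-readable list helper is conceptually simple. A rough reimplementation in plain Python (the real docassemble function, comma_and_list, has more options than this sketch):

```python
def comma_and_list(items):
    # Join strings into a human-readable list with an "and",
    # roughly as docassemble's helper of the same name does.
    items = [str(item) for item in items]
    if not items:
        return ""
    if len(items) == 1:
        return items[0]
    if len(items) == 2:
        return f"{items[0]} and {items[1]}"
    return ", ".join(items[:-1]) + f", and {items[-1]}"
```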

You don’t have to follow the quickstart if it does not meet your use case. Going through the documentation, I figured out that I wanted to give the user a choice of which voice to use rather than letting Google select it for me. Furthermore, audio options like the speaking rate would be handy, since non-native listeners may appreciate slower speech. Also, I don’t think I need the user to select a specific file format, as mp3s should be fine.

Let’s move on!

This was a pretty short one. I hope I got you curious and excited about what comes next. Continue to the next part, where we get started on a project!

👉🏻 Head to the next part of this tutorial!

#tutorial #docassemble #Python #Programming #TTS #Google


There’s a movement brewing between the lines of Twitter and within the deeper reaches of GitHub. Somebody is trying to “open source” contracts. You might have come across the term “open source” when downloading your favourite web browser. Open-source software is free, and it works. Is that what “open source” would mean for contracts?

I liked how Bonterms describes the motivation behind the endeavour:

Look inside the stack of nearly any major cloud application and you’ll find open source code, and lots of it. Developers leverage any existing package they can find before writing a line of code on their own. And they spend hours happily contributing back improvements to the projects they use. Open source has fundamentally transformed software development for the benefit of the entire ecosystem. But, could lawyers do the same? Could you possibly get law firm and in-house lawyers with the relevant domain experience to come together to articulate best practices, collaborate on drafting and then give their work product away for free? Yes, it turns out, you can. You just have to ask and provide a forum for working together and engaging in friendly, detailed debate.

Could time-starved lawyers used to charging by the hour be more like programmers and give away what they do for free?

Standards, standards everywhere

Previously I wrote about an “open source” contract — OneNDA. There’s been good news on that front. They transformed themselves into Claustack and came out with oneDPA, backed by PwC and ContractPodAI.

Will oneNDA rule them all? | Love.Law.Robots. (Houfu)

“oneNDA, a crowdsourced NDA, says it has standardised the NDA. Cue the sceptic in 3... 2... 1...”

Back when oneNDA first came out, I hesitated to join the “hivemind”. My opinion has improved since.

Other “open source” contracts have sprouted out recently — check out Bonterms and Common Paper.

It’s striking how similar these efforts are — all of them use some “cover page” mechanism to contract and are written by a “committee” of lawyers.

Here’s another similarity: all of them discourage modifying their templates.

You can see this from the particular license chosen by these projects. OneNDA chose CC-BY-ND 4.0 (the ND means no derivatives), and the others chose CC-BY (You might be able to make changes, even for commercial purposes, but you must credit the project when you make changes. How do you do that in a contract? 🤷🏻).

Even if you don’t know the difference between the various Creative Commons licenses, you’d be sufficiently discouraged by the documentation. One of the answers in the OneNDA FAQ is, “Yes, you can do whatever you like with it except actively allow or encourage people to change anything in oneNDA other than the variables.”

After I thought harder about the distinctions, I realised these projects aren’t so much about open source but standardisation. If everyone uses a particular contract, there will be massive benefits to all involved. However, you must agree to its restrictions — You can only modify the variables or the cover page. To use the contract, you must agree to all the choices and tradeoffs made by the project.

Philosophically, I disagree with this sort of standardisation. It’s apropos to introduce some XKCD:

Don’t get me wrong. I’m not going to sneer if I saw a OneNDA in the wild (I haven’t).

But I won’t overestimate the impact of these competing efforts at standardisation. On the one hand, nothing is stopping me from modifying any template. On the other, I don’t get any benefit from adhering to one too.

OneNDA becomes Claustack — now a Community!

There is another aspect of “open source” that these projects might be alluding to. Open source development takes place in an open forum where anyone can contribute — on a mailing list, the GitHub issues page or some Discord server.

This idea that anyone can contribute appears to be anathema to law. In the open source contracts I covered, all of them highlighted that they are supported or drafted by “experts” in their fields (I am a bit sceptical that someone would call themselves an NDA specialist). Both Common Paper and Bonterms have GitHub repositories for their contracts but don’t appear to accept contributions.

This brings me to Claustack. As mentioned above, it used to be oneNDA only, but they have now created a platform described as “GitHub meets StackOverflow – for lawyers”. The focus is not only on the few documents they are in charge of, but also on others, including Bonterms and Common Paper. So, it is now a collection of resources and a forum for people to provide feedback and suggestions and, at some level, be involved in its development. I liked this iteration better, so I joined up.

A contract standard might sound pointless because there are few, if any, restrictions to ensure you adhere to it. However, if there was a critical mass of users — a community — using, advocating and helping others on it, that is a recipe for conquering the world.

In “Forge Your Future with Open Source”, a book about open source and how you can contribute to it, author VM Brasseur writes:

... the most important aspect of free and open source software isn’t the code; it’s the people. Contribution to [free and open source software] is about so much more than simply code, design, or documentation; it’s about participation and community. The licenses make the software available, but the people make the software, and the community supports the people. Remove one piece from this equation, and the entire system falls apart.

The quality of a contract might be important, and the licensing, the design and the cost of adoption are probably important too. But what would keep such a project going would be its people. At that point, more people have a stake in the success of the project, not just its founders or commercial backers.

Building a community wouldn’t be easy...

Although I am cautiously optimistic about how Claustack is turning out, it’s still early days for these open source contracts. More has to be done in order to persuade other folks to contribute and advocate.

My lack of faith probably stems from my experience and observation that open source projects dealing directly with law and lawyers are few and far between.

Open Source Legal: The Open Legal Directory

“Open Source Legal is a central repository and review database of open and open source legal standards, applications, platforms and software libraries. It’s meant to help the legal engineering community track and develop a set of community-driven tools and standards to improve legal service delivery…”

You can check out other open-source projects listed there.

One such project which actually has a community is docassemble. They even have a yearly “DocaCon”. I attended my first last year (when the event was in person, it was impossible for me to travel to Boston to attend it) and found a pretty weird tribe. Most of the excitement involved access to justice (A2J) implementations of docassemble, not something you would find in law firms or legal departments. I was also excited by an effort to bring testing to docassemble interviews; again, not something I would get to discuss anywhere else.

In a recent interview on LawNext Podcast, Jonathan Pyle, benevolent dictator of docassemble, said this about his motivations for docassemble:

No, I really like to not make any money off of [docassemble]. It’s because I would really like being able to be honest to other people... I like being able to advise people not to use my code. It’s just so much easier if I could just concentrate on the technology and creating new features and not having to worry about making a living. It’s kind of nice to do something nice in the nights and weekends.

I can’t name another project like this.

Lack of opportunities is not the only problem. Culturally, lawyers seem to be “trained” not to collaborate with each other.

Being #1 isn’t always a good thing—loneliness among lawyers (296) | Legal Evolution (Tom Sharbaugh)

“Success as a lawyer can come at the expense of personal relationships. Is it worth the price? Few of my former partners in the global firm where I worked…”

In this detailed narrative, associates, partners and law students confront loneliness.

Echoes of this also come from a recent interview with Mary O’Carroll on Artificial Lawyer.

If you have three lawyers in a room, and someone has information that can make someone else look good, will they help the other lawyers? Knowledge sharing between lawyers is not incentivised in training programmes. But, in a corporate setting you have to flex that muscle, i.e. collaboration and teamwork. The problem is that lawyers are trained to be the smartest person in the room. They don’t work cross-functionally in law firms. In a company however, every team has to work with every other team across the business.

Building a community for a normal open source project is really difficult. The question when it comes to open source contracts is: do lawyers even want a community?

Conclusion

An early draft of this post started by asking whether calling a contract “open source” is a PR stunt. It’s not fair to cast aspersions on an open source contract being given out for free when the usual course is not to share at all. Even so, you also have to be judicious about how you spend your own time, something lawyers are definitely (maybe overly) familiar with. Building an open source community is difficult, but that is what would make such a project sustainable. I’ll be keeping a lookout, and hopefully there is a place for someone who wants to contribute.

#Contracts #docassemble #oneNDA #Claustack #Bonterms #CommonPaper #Law #Lawyers #Copyright


There's been a conclusion to a case in Germany that I have been tracking for a few years. It's one of the very few cases I am aware of where lawyers actually challenged LegalTech and, had they succeeded, would have killed an industry.

You can read more about the case (in English) on Artificial Lawyer. Basically, lawyers in Germany took Wolters Kluwer's Smartlaw to court for what they claimed was the unauthorised practice of law. They argued that by providing an expert system, Smartlaw was giving legal advice, which can only be given by lawyers (which Smartlaw isn't). The lawyers had a victory in Cologne. However, the highest court in Germany has now confirmed that Smartlaw does not give legal advice, so the legality of Smartlaw and document-generating solutions is now quite safe there.

Wolters Kluwer Wins Landmark German DIY Doc Generation Case | Artificial Lawyer

“Wolters Kluwer has finally won a landmark battle in Germany for the right of people and businesses to download ‘DIY legal documents’ without the input of lawyers, in what was a key test…”

No longer verboten indeed.

The reasoning of the Federal Court of Justice is interesting. It held that because there was no individual assessment of a case's legal merits, Smartlaw provided no legal advice. This appears reasonable — when a programmer creates an expert system, he does not have a particular individual in mind and never knows his “client”. Besides the disclaimers in heavy type that usually accompany such solutions, most consumers know that the system is not a complete replacement for seeing an actual lawyer.

What does this mean in Singapore?

Singapore City SkylinePhoto by Stephanie Yeh / Unsplash

Germany is pretty far from most lawyers' minds in Singapore. The differences in legal traditions suggest this decision would not have much precedential or persuasive value if a similar case were brought here.

The definition of unauthorised practice in Singapore is different from Germany's. Based on the courts' “I call it as I see it” approach to this issue, a similar case could easily have gone either way.

The Importance of Being Authorised | Love.Law.Robots. (Houfu)

“A recent case shows that practising law as an unauthorised person can have serious effects. What does this hold for other people who may be interested in alternative legal services?”

I wrote about unauthorised practices recently in response to a case in Singapore. (Free subscription required)

On the one hand, generating legal documents based on questions I ask a client is the hallmark of a lawyer's work, so this must be legal practice. On the other hand, one doesn't write expert systems by imagining how to question a real client: rules come before templates, which come before particular fact situations.
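That ordering can be made concrete with a toy sketch (entirely made up, and far simpler than any real expert system): the rules and templates are written first, for no one in particular, and a specific fact situation only enters at the end.

```python
# Rules: written before any client exists.
RULES = [
    (lambda facts: facts["married"], "spouse_clause"),
    (lambda facts: not facts["married"], "no_spouse_clause"),
]

# Templates: also written in advance.
TEMPLATES = {
    "spouse_clause": "I leave my estate to my spouse, {spouse}.",
    "no_spouse_clause": "I leave my estate to {beneficiary}.",
}

def generate(facts):
    # The particular fact situation only enters at this point.
    for rule, template_name in RULES:
        if rule(facts):
            return TEMPLATES[template_name].format(**facts)

document = generate({"married": True, "spouse": "Alex", "beneficiary": ""})
```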

Thus, a business proposition based on document generation, like a super-charged docassemble, is still risky in Singapore if it clashes with lawyer regulation.

Nevertheless, the decision is still pretty relevant here.

This is one of the most vivid and realistic real-world examples of a challenge to LegalTech concerning lawyer regulation. Most people imagine a robot lawyer to be a pretty advanced expert system. It's always been tempting to draw a line from “robot generating legal documents” to “legal advice” and finally to “unauthorised practice”. The Germans have drawn it, and their highest court has answered definitively that such systems are not illegal. Even if there is a legally persuasive reason in our jurisprudence to differ, it would be odd if we were an outlier.

Furthermore, while the result is satisfying, the reasoning does not give much comfort. The fine line of an “individualised assessment” might be relevant in Germany's context, but another jurisdiction like Singapore might regard an individualised assessment as necessary to protect consumers. For example, do we really want consumers to walk away with a printed document that they then execute wrongly? The shocking consequences of a failed will may be too much.

Who wants to do an E-Will? | Love.Law.Robots. (Houfu)

“COVID-19 offers an opportunity to relook at one of the oldest instruments in law — wills. Is it enough to make them an electronic transaction?”

An earlier post considering the electronic execution of wills explains why change is prolonged here (free subscription required).

Honestly, I think our authorities would take the pragmatic approach. There's no denying that such expert systems fill a gap that lawyers don't or can't fill. An "individualised assessment" may be too expensive in many cases. Many solutions from the authorities and established parties, such as chatbots, will generators and court forms for litigants in person, already deploy such an approach and might well be dispensing legal advice. So, killing off this industry or idea at this point seems pretty far-fetched.

However, this outcome might speak to the current limitations of an advanced expert system. Docassemble can solve many problems, but it can't do everything. On the one hand, it might take us some time to develop a highly advanced expert system, which probably has to be customised for each industry or jurisdiction. On the other hand, there are also user experience issues: clients are distraught and stressed when it comes to legal matters, and a computer asking questions doesn't seem very empathetic.

Divorce Helper: Divorce Helper gives you legal information about divorce in Australia. It is free to use and you do not need to tell us who you are. (Divorce Helper)

I find this to be a pretty good (and pretty) example of an empathetic interview.

It's not impossible that a “robot lawyer” will challenge the status quo. Till then, this legal battle might still have a few more stages in it.

#LegalTech #docassemble #Law #Singapore

Author Portrait Love.Law.Robots. – A blog by Ang Hou Fu


Everything changed with the pandemic — we now work from home, learn from home, and attend video conferences rather than phone calls or face-to-face meetings. Some of this is weird, and some of it is even weirder. One example of the latter is that I have become more used to seeing myself on the screen.

What does this mean for the blog? I will be exploring how to make videos alongside writing posts. This is not a change I would have expected in June when I moved to Ghost. (Certainly, Ghost's membership functions have made it easier to convince me that the effort to learn and produce will be worth it.)

I originally believed that video posts take a lot of effort to create and are also lame. (I read faster than I can watch someone.) Now, I have come to believe they can be more fun and engaging. So, you might hear my voice and see my face soon. I ain't a handsome fella, so please don't be turned away!

I know how to do it technically, and the equipment should not be difficult to get. I will be experimenting, though, so hang on.

What I am reading now

  • I am a big admirer of Suffolk Law School's LIT Lab. I might be biased because they pervasively use docassemble, a free and open-source LegalTech tool. During the pandemic, they have managed to take docassemble further and write a law review article about their experience. It's an encouraging story about building community and marshalling disparate resources for access to justice (A2J). I believe Open Source was an important factor in its success, so hopefully, it is a blueprint for other labs.

Digital Curb Cuts: Towards an Inclusive Open Forms Ecosystem. "In this paper we focus on digital curb cuts created during the pandemic: improvements designed to increase accessibility that benefit people beyond the populati…" (Quinten Steenhuis)

  • What in the world is Moneyball? It's a strange story whereby a baseball team followed the data, hiring players based on their statistics rather than traditional indicators like reputation, and overachieved. Can this be applied to hiring and retaining law firm associates? Legal Evolution suggests it can, but that's not what's interesting about the story. You will read about the intransigence of law firm leaders in the face of data, and you'd be convinced of the importance of leadership in innovation. This is especially the case where the results can be counterintuitive, upsetting, or confusing to leaders. On the other hand, I am sure a law firm leader would be willing to employ "sabermetrics" to achieve the best team on the cheap.

Moneyball for law firm associates: a 15-year retrospective (257) | Legal Evolution. "Pretty much everything was a counterintuitive curveball. In April of 2006, more than 15 years ago, I wrote a memo to file that would go on to exert a…" (Bill Henderson, Legal Evolution)

TechLaw.Fest 2021

  • I have always wondered whether I should get Singapore Corporate Counsel Association membership. Quite frankly, the only benefits I see so far are the self-satisfaction of belonging and the somewhat discounted LawNet subscription. Here's something else to consider: the “First” Technology Law Course in Singapore from SCCA. It looks pretty, but I can't find the module details... oh wait, here it is. Since technology law is prevalent and not well taught in law schools (at least during my time), this will be of interest if you need to pick up some substantive knowledge.

SCCA | Courses. The title of the course is "EXECUTIVE COURSE IN TECHNOLOGY LAW FOR IN-HOUSE COUNSEL."

  • If you think it's ridiculous to cough up nearly $2,000 for a bunch of recorded videos (that's why I'm getting into the video business, baby!), you can wait a little longer for a book. It's coming out in October. The introduction to the book outlining its contents is available if you surrender your personal details. Its coverage is definitely broad, so it's useful for fun reading. That's about the only reason why I would get it. It's the second book in Singapore on the substantive legal issues of technology, along with several tomes on data protection. I am exhausted. Really.

Postscript

I finally managed to do some housekeeping and write a featured post containing all the content I have worked on for PDPC Decisions. I know it's not easy to find the “journey” on the website, so hopefully, you will have a better experience.

Post Updates

As mentioned above, videos are coming to this blog. I haven't decided exactly what kind of content should be in a video. However, I am sure that any tutorial or long-form video will be a full members' privilege. It takes away the vexing question of whether I have to "lock up" posts to provide value to full members and what kind of posts should be public. As I said, I am experimenting with this model, so things will change as I go.

Conclusion

I am using my laptop to obscure the mess on my table.

That's it for this newsletter. Maybe there's a chance you will see me in a video for the next one.

(For curious subscribers, there isn't a “swag” shop for this blog. However, if you would like a shiny sticker on your laptop, you can email me with your details, and I can send one for free to you.)

#Newsletter #COVID-19 #docassemble #TechLawFest #TechnologyLaw #DataScience #Singapore
